Java (Kafka) Developer
Bangalore, India | Kochi, India
Job Description
Job Description Summary
We are seeking an experienced Java Engineer to actively contribute to the development of existing data standardisation tools and, over time, design and productionise new iterations by leveraging internal expertise and AI-driven capabilities.
You will work as part of a globally distributed team, building new microservices and improving current ones. The position offers exposure to a broad technology stack, including Java, Scala, Python, Elastic, Kafka, Spark, Iceberg, and Snowflake, all supported by market-leading AI tools.
As a key contributor, you will help shape the next generation of data standardisation solutions for a global leader in life sciences. Success in this role requires a proactive approach, a willingness to tackle complex problems independently, and the ability to leverage cutting-edge AI technologies.
Key Responsibilities
- Design, develop, and maintain Java-based microservices for data processing.
- Collaborate on data standardisation workflows and integrate AI/ML-driven enrichment features.
- Support workflow automation and role-based security for distributed systems.
- Write high-quality code, follow best practices, and participate in peer reviews.
- Develop and maintain unit, integration, and automated tests.
- Keep documentation for services and integrations clear and up to date.
- Take ownership of understanding business problems and designing effective solutions.
- Use AI-powered development tools to improve efficiency and code quality.
- Stay up to date with emerging AI trends and their impact on software engineering.
Qualifications
Education
- Bachelor’s or Master’s degree in Computer Science, Information Technology, Software Engineering, or a related field.
Experience
- 5+ years of professional experience in software development, with a significant focus on Java.
- Experience with Spark and Kafka, or a strong willingness to learn.
- Hands-on experience with microservices architecture and cloud platforms (Azure preferred).
- Familiarity with Agile methodologies.
Technical Skills
Mandatory
- Strong proficiency in Java.
- Knowledge and experience with Kafka and Spark.
- Experience with containerisation tools such as Docker.
- Experience with CI/CD pipelines and DevOps practices.
- Proficient in version control systems such as Git (GitLab Ultimate).
Nice to have
- Proficiency with PostgreSQL / Hadoop stack / Oracle.
- Experience with Scala / Python.
- Familiarity with application-level Elasticsearch implementation and querying.
- Experience with Snowflake / Databricks.
- Exposure to AI-driven data solutions and ML-based data enrichment.
- Familiarity with cloud platforms (Azure / AWS).
- Knowledge of role-based security models and workflow automation in distributed systems.
- Experience integrating heterogeneous data sources (Hive, MSSQL, Oracle, PostgreSQL, Azure Databricks).
- Understanding of search optimisation and performance tuning for large-scale data systems.
Soft Skills
- Demonstrated ability to work independently, proactively identifying and solving problems.
- Effective in agile environments and cross-functional teams.
- Strong problem-solving, analytical, and debugging abilities.
- Good communication skills for collaboration across distributed teams.
- Comfortable adapting to rapid changes in tools and workflows driven by AI and automation.
IQVIA is a leading global provider of clinical research services, commercial insights and healthcare intelligence to the life sciences and healthcare industries. We create intelligent connections to accelerate the development and commercialization of innovative medical treatments to help improve patient outcomes and population health worldwide. Learn more at https://jobs.iqvia.com