EOS RPO

Senior Software Engineer - Java, Apache Spark

Posted Apr 17, 2026
Project ID: R-521619
Location
Hyderabad, Telangana
Hours/week
40 hrs/week

In this role, you will:

  • Lead moderately complex initiatives and deliverables within technical domain environments

  • Contribute to large-scale strategic planning

  • Design, code, test, debug, and document projects and programs associated with the technology domain, including upgrades and deployments

  • Review moderately complex technical challenges that require an in-depth evaluation of technologies and procedures

  • Resolve moderately complex issues and lead a team to meet existing and potential new clients' needs, leveraging a solid understanding of the function, policies, procedures, and compliance requirements

  • Collaborate and consult with peers, colleagues, and mid-level managers to resolve technical challenges and achieve goals

  • Lead projects, act as an escalation point, and provide guidance and direction to less experienced staff

Required Qualifications:

  • 4+ years of Software Engineering experience, or equivalent demonstrated through one or a combination of the following: work experience, training, military experience, education

Desired Qualifications:

  • Experience in Software Engineering, or equivalent demonstrated through one or a combination of the following: work experience, training, military experience, education

  • Experience with cloud technologies such as OpenShift and PCF.

  • Proficiency in developing web-based applications, with experience in UI and server technologies (Angular/React, Spring Framework, JDBC, JavaBeans).

  • Experience in developing client-side UI components using Angular / React.

  • Experience developing features, enhancements, and bug fixes for applications (UI/backend) by consuming and exposing services.

  • Experience designing, implementing, and maintaining Java services with well-designed, efficient, test-driven code.

  • Experience developing RESTful web services and microservices in Java using Spring Boot.

  • Experience developing service/business-layer components using Spring and EJBs.

  • Ability to design and develop RESTful MVC services using Java Spring Boot with an Oracle/SQL Server database.

  • Strong knowledge of JUnit/TestNG and Selenium frameworks.

  • Ability to define and implement cloud-native architectures leveraging GCP services (BigQuery, Iceberg, Cloud Storage, Kubernetes Engine, Cloud Functions, Pub/Sub, Feature Store, etc.).

  • Experience designing cross-platform consumption patterns, microservices, and event-driven architectures for high availability and scalability.

  • Expertise in GCP services: BigQuery, Iceberg, Cloud Storage, Kubernetes Engine, Apache Spark, Airflow.

  • Strong programming skills in Python, Java, and SQL.

  • Strong experience building big data pipelines using Apache Spark, Hive, Hadoop.

  • Experience with Autosys/Airflow or similar orchestration tools.

  • Working knowledge of REST APIs, object storage, Dremio, and CI/CD pipelines.

  • Cloud-native engineering experience: serverless, managed Spark, event-driven architectures.

  • Familiarity with containerization (Docker, Kubernetes) and workflow operators.

  • Strong experience implementing test automation for data pipelines (unit, contract, integration tests).

Job Expectations:

  • Involvement in the end-to-end lifecycle of product/application development: analyze highly complex business requirements, and design and write technical specifications to design or redesign complex modules and applications.

  • Develop highly complex original code and provide coding direction to junior team members

  • Proficiency with Agile and DevOps practices and delivering cloud solutions, including experience delivering projects using Agile software development techniques.

  • Advanced knowledge of object-oriented analysis and design (OOA/OOD) and Java patterns and practices.

  • Must possess innovative, out-of-the-box thinking while developing advanced technical solutions to business problems, and seize opportunities to improve system resiliency.

  • Collaborate with cross-functional teams to build scalable, high‑performance data solutions using Python, SQL, Spark, Iceberg, Dremio, and Autosys.

  • Design, build, test, deploy, and maintain large-scale structured and unstructured data pipelines using Python, SQL, Apache Spark, and modern data lake/lakehouse technologies.

  • Work with open table formats such as Iceberg.

  • Compute: Spark on K8s/OpenShift

  • Query/Analytics: Dremio

  • Orchestration: Airflow on Kubernetes

  • Storage: Iceberg, S3/NetApp storage

  • Messaging/Search: Kafka
