Job Summary:
We are seeking a highly skilled Data Engineer with strong expertise in Java, Apache Kafka, and Apache Flink. Experience in the banking or financial services domain is highly desirable. The ideal candidate will play a key role in building and optimizing data pipelines and real-time streaming platforms that power critical financial applications.
Key Responsibilities:
Design, build, and maintain scalable and robust data pipelines using Java, Kafka, and Flink (see the illustrative sketch after this list).
Develop real-time data processing systems for streaming analytics.
Collaborate with data scientists, analysts, and software engineers to deliver high-quality data solutions.
Ensure data accuracy, quality, and consistency across systems.
Monitor data pipelines and troubleshoot performance bottlenecks.
Work in an Agile environment and participate in sprint planning, code reviews, and daily stand-ups.
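As an illustration of the day-to-day work, below is a minimal sketch of a Flink job in Java that consumes events from one Kafka topic, applies a simple transformation, and publishes the results to another. It assumes Flink's KafkaSource/KafkaSink connector API (Flink 1.14 or later); the class name, topic names, broker address, and the trivial transformation are placeholders, not a description of our actual systems.

```java
import org.apache.flink.api.common.eventtime.WatermarkStrategy;
import org.apache.flink.api.common.serialization.SimpleStringSchema;
import org.apache.flink.connector.kafka.sink.KafkaRecordSerializationSchema;
import org.apache.flink.connector.kafka.sink.KafkaSink;
import org.apache.flink.connector.kafka.source.KafkaSource;
import org.apache.flink.connector.kafka.source.enumerator.initializer.OffsetsInitializer;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class TransactionPipeline {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        // Consume raw events from a Kafka topic (topic name and broker
        // address are hypothetical placeholders).
        KafkaSource<String> source = KafkaSource.<String>builder()
                .setBootstrapServers("localhost:9092")
                .setTopics("transactions")
                .setGroupId("txn-pipeline")
                .setStartingOffsets(OffsetsInitializer.latest())
                .setValueOnlyDeserializer(new SimpleStringSchema())
                .build();

        DataStream<String> transactions = env.fromSource(
                source, WatermarkStrategy.noWatermarks(), "kafka-transactions");

        // A trivial stand-in transformation; a real pipeline would parse,
        // enrich, and aggregate the events instead.
        DataStream<String> normalized = transactions.map(String::toUpperCase);

        // Publish the processed events to a downstream topic.
        KafkaSink<String> sink = KafkaSink.<String>builder()
                .setBootstrapServers("localhost:9092")
                .setRecordSerializer(KafkaRecordSerializationSchema.builder()
                        .setTopic("transactions-normalized")
                        .setValueSerializationSchema(new SimpleStringSchema())
                        .build())
                .build();

        normalized.sinkTo(sink);
        env.execute("Transaction normalization pipeline");
    }
}
```

In practice, the map step would be replaced with parsing, enrichment, and windowed aggregation over transaction events, which is where the stream-processing expertise described below comes in.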
Required Skills:
Strong programming skills in Java.
Extensive experience with Apache Kafka (Kafka Streams, Kafka Connect, etc.).
Proficiency in Apache Flink for stream processing.
Solid understanding of distributed systems and event-driven architectures.
Familiarity with data modeling, data warehousing, and ETL processes.
Experience with version control systems (e.g., Git) and CI/CD pipelines.
Preferred Qualifications:
Experience in the banking or finance domain (e.g., payment systems, trading platforms, fraud detection).
Familiarity with cloud platforms (AWS, GCP, or Azure).
Knowledge of other big data technologies such as Spark, Hadoop, Hive, or Presto.
Bachelor's or Master's degree in Computer Science, Engineering, or a related field.