Responsibilities:
- Azure Databricks & Data Factory: Design, develop, and maintain data pipelines using Azure Databricks and Azure Data Factory.
- Data Lake Integration: Implement solutions for ingesting data into data lakes, ensuring efficient storage and retrieval.
- Data Migration: Manage and execute data migration projects, ensuring data integrity and minimal downtime.
- Project Delivery: Independently deliver projects, meeting deadlines and quality standards.
- Collaboration: Work closely with engineering and product teams to understand complex data systems.
- Application Development: Develop scalable, maintainable applications to extract, transform, and load data in various formats into SQL Server, Hadoop Data Lake, or other data stores.
Qualifications:
- Experience: 8-10 years of hands-on experience in Azure Data Factory, Azure Databricks, and PySpark development.
- Skills: Proficiency in SQL Server and Hadoop Data Lake. Experience with Terraform is a plus.
- Analytical Skills: Strong analytical skills to break down and solve complex data problems.
- Data Engineering Fundamentals: Deep, demonstrable understanding of data engineering concepts and best practices.
- Data Migration: Experience working on multiple data migration projects.
Additional Skills:
- Problem-Solving: Excellent problem-solving skills and the ability to think critically.
- Communication: Strong verbal and written communication skills.
- Customer Focus: Ability to work with customers to understand their fundamental needs and provide effective solutions.