Our client, a consulting firm based in Calgary, is seeking a Senior Data Engineer (AWS) with at least five years of relevant experience. This is an urgent opening that starts on March 17th and runs through the end of the year, with a strong likelihood of a one-year extension. Preference will be given to local candidates; however, candidates willing to cover their own relocation costs are also encouraged to apply.
Key Responsibilities:
• Design, develop, and maintain scalable ETL pipelines using AWS services such as Glue, Lambda, S3, and Redshift
• Develop and optimize complex SQL queries for data transformation, validation, and reporting
• Utilize Python for data processing, automation, and scripting within ETL workflows
• Work with Databricks to develop, optimize, and scale data pipelines and analytics solutions
• Collaborate with data scientists, analysts, and other stakeholders to understand data requirements and ensure data availability and integrity
• Implement and optimize data storage solutions, including data lakes and data warehouses, to support analytics and reporting
• Monitor and troubleshoot data pipelines to ensure continuous and reliable data flow
• Ensure data security and compliance with industry standards and best practices
• Automate data integration and processing tasks using AWS tools such as Step Functions and CloudFormation
• Document data flows, data models, and processes for internal use and future reference
• Stay updated with the latest trends and best practices in data engineering and AWS technologies
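To give a flavour of the day-to-day work, below is a minimal sketch of the kind of transformation-and-validation step that might sit inside a Glue Python job or Lambda handler. The field names (`id`, `amount`) are hypothetical, not the client's schema, and a real pipeline would read from and write to S3 rather than in-memory strings.

```python
import csv
import io


def transform_records(raw_csv: str) -> list[dict]:
    """Validate and transform raw CSV rows.

    Illustrative only: drops rows with a missing id or a
    non-numeric amount, trims whitespace, and normalizes the
    amount to two decimal places -- the sort of cleaning step
    an ETL pipeline runs before loading into a warehouse.
    """
    cleaned = []
    for row in csv.DictReader(io.StringIO(raw_csv)):
        if not row.get("id"):
            continue  # reject rows without an identifier
        try:
            amount = float(row["amount"])
        except (KeyError, TypeError, ValueError):
            continue  # reject rows with unparseable amounts
        cleaned.append({"id": row["id"].strip(), "amount": round(amount, 2)})
    return cleaned


if __name__ == "__main__":
    raw = "id,amount\n1,10.25\n,5.0\n3,abc\n2,7.5\n"
    print(transform_records(raw))  # only rows 1 and 2 survive validation
```

In production this logic would typically run as PySpark inside Glue for volume, with the same validation rules expressed as DataFrame filters.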
What you will get in return:
- Professional and supportive work environment
- Excellent compensation, including a company bonus and an onsite vehicle (note: a drug test will be performed)
What you need to do now:
- Apply online or visit jobs.talencity.com
- We thank all applicants; however, due to the volume of applications, we will only contact qualified candidates to discuss next steps.
Required Skills / Experience:
• Bachelor's degree in Computer Science, Information Technology, Engineering, or a related field. A Master's degree is a plus
• 5+ years of experience in data engineering
• Extensive hands-on experience working with Databricks, including the development, optimization, and management of data pipelines
• Strong experience with AWS services such as S3, Glue, Lambda, Redshift, DynamoDB, and Athena
• Proficiency in programming languages such as Python, Java, or Scala
• Experience with SQL and NoSQL databases
• Solid understanding of data warehousing principles, ETL workflows, and data modeling techniques
• Knowledge of data security, governance, and compliance best practices
• Strong problem-solving skills and attention to detail
• Excellent communication skills, with the ability to collaborate effectively with cross-functional teams
Nice to Have:
• AWS Certified Data Engineer, AWS Certified Data Analytics, or AWS Certified Solutions Architect certification
• Experience with big data tools and technologies like Apache Spark, Hadoop, and Kafka
• Knowledge of CI/CD pipelines and automation tools such as Jenkins or GitLab CI