Data Engineer

Iris Software Inc.
Toronto, ON
Posted yesterday
Job Details:
Full-time
Experienced

Greetings!

We are seeking a highly skilled Data Engineer with expertise in PySpark, Databricks, and Apache Airflow to join our dynamic data team. In this role, you will design, build, and optimize scalable data pipelines, ensuring the efficient flow of information across our systems. Strong communication skills are critical, as you will collaborate closely with cross-functional teams, including Data Scientists, Analysts, and Product Managers.

Job Title: Data Engineer

Location: Toronto, ON (hybrid; 3 days onsite per week)

Key Responsibilities:

  • Design, develop, and maintain large-scale data processing pipelines using PySpark and Databricks.
  • Orchestrate complex workflows and data pipelines using Apache Airflow.
  • Optimize and troubleshoot existing ETL processes to ensure data quality and system performance.
  • Work closely with data analysts, data scientists, and business stakeholders to understand data requirements and deliver appropriate solutions.
  • Ensure compliance with data governance and security standards.
  • Write clean, maintainable, and well-documented code.
  • Communicate technical concepts clearly to both technical and non-technical audiences.

Required Qualifications:

  • Bachelor's or Master's degree in Computer Science, Information Technology, Engineering, or a related field.
  • 7+ years of experience in data engineering or a related field.
  • Expertise in PySpark and Apache Spark.
  • Hands-on experience with Databricks and its ecosystem.
  • Strong working knowledge of Apache Airflow for workflow orchestration.
  • Proficiency with SQL and experience working with large datasets.
  • Experience with cloud platforms (AWS, Azure, or GCP) is a plus.
  • Strong problem-solving skills and attention to detail.
  • Excellent written and verbal communication skills.

Preferred Qualifications:

  • Experience in building real-time data pipelines.
  • Knowledge of Delta Lake and other big data storage formats.
  • Exposure to CI/CD tools and DevOps practices for data engineering.
  • Familiarity with data modeling and data warehousing principles.

Best regards,
