Job Description
Insight Global is hiring a Sr. Data Engineer in Calgary, AB. This role is hybrid: four days a week in office, one day working from home. The successful candidate will play a key role in architecting and optimizing a modern, scalable, and productized data ecosystem that enables internal teams and external partners to derive maximum value from our data assets, and will be responsible for designing high-performance data pipelines, storage solutions, and API-driven data access mechanisms that support real-time decision-making and innovation. The ideal candidate has 5+ years of experience in data engineering, expertise in modern data storage and processing, proficiency in ETL frameworks and event-driven architectures, strong API development skills, and proficiency in Go, Java, and/or Python.
Duties & Responsibilities:
- Design and build a scalable, cloud-native data platform aligned with microservices
- Develop real-time and batch data pipelines to power data-driven products (see the sketch following this list)
- Implement SQL, NoSQL, and hybrid storage strategies
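As an illustration of the pipeline work this role involves, here is a minimal sketch, assuming Spark Structured Streaming reading from Kafka and landing in Delta Lake; the topic name, schema, broker address, and paths are all hypothetical:

```python
# Minimal sketch of a streaming ingestion pipeline, assuming Spark
# Structured Streaming with Kafka and Delta Lake. Topic, paths, and
# schema are hypothetical placeholders.
from pyspark.sql import SparkSession
from pyspark.sql.functions import from_json, col
from pyspark.sql.types import StructType, StructField, StringType, TimestampType

spark = SparkSession.builder.appName("events-ingest").getOrCreate()

# Hypothetical event schema, for illustration only.
schema = StructType([
    StructField("event_id", StringType()),
    StructField("event_type", StringType()),
    StructField("occurred_at", TimestampType()),
])

# Read raw events from a Kafka topic (name assumed).
raw = (spark.readStream
       .format("kafka")
       .option("kafka.bootstrap.servers", "broker:9092")
       .option("subscribe", "events")
       .load())

# Parse the JSON payload and write to a Delta table for downstream use.
parsed = (raw
          .select(from_json(col("value").cast("string"), schema).alias("e"))
          .select("e.*"))

(parsed.writeStream
 .format("delta")
 .option("checkpointLocation", "/checkpoints/events")
 .outputMode("append")
 .start("/lake/bronze/events"))
```

The same Delta table can then serve batch consumers, which is one common way to cover both real-time and batch modes from a single landing zone.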
Data as a Product
- Enable self-serve data access with secure, well-documented data APIs (see the sketch following this list).
- Collaborate with Product & Business teams to define and optimize data products.
- Ensure data quality, lineage, and governance in all data pipelines and products.
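For the data-API duty above, a minimal sketch, assuming FastAPI with a simple API-key header check; the endpoint path, response model, and key handling are hypothetical stand-ins:

```python
# Minimal sketch of a self-serve data API, assuming FastAPI and an
# API-key header check. Endpoint, model, and key names are hypothetical.
from fastapi import FastAPI, Header, HTTPException
from pydantic import BaseModel

app = FastAPI(title="Orders Data Product", version="1.0")

class OrderMetric(BaseModel):
    day: str
    order_count: int

# Stand-in for a real query against the serving store.
FAKE_METRICS = [OrderMetric(day="2024-01-01", order_count=120)]

@app.get("/v1/metrics/orders", response_model=list[OrderMetric])
def get_order_metrics(x_api_key: str = Header(...)):
    # Access-control sketch: a real service would validate the key
    # against a secret store such as Azure Key Vault.
    if x_api_key != "expected-key":
        raise HTTPException(status_code=401, detail="invalid API key")
    return FAKE_METRICS
```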
Microservices Integration & Performance
- Build event-driven architectures using Kafka, Azure Event Hubs, or Service Bus (see the sketch following this list).
- Develop scalable ETL/ELT processes for ingestion, transformation, and distribution.
- Optimize query performance, indexing, and caching for data-intensive apps.
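To illustrate the event-driven duty above, a minimal sketch, assuming the kafka-python client and a hypothetical order-events topic; the broker address and consumer group are placeholders:

```python
# Minimal event-driven sketch, assuming the kafka-python client and a
# hypothetical order-events topic; broker and group names are placeholders.
import json
from kafka import KafkaProducer, KafkaConsumer

BROKER = "broker:9092"
TOPIC = "order-events"

# Producer side: publish a domain event as JSON.
producer = KafkaProducer(
    bootstrap_servers=BROKER,
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)
producer.send(TOPIC, {"order_id": "o-123", "status": "created"})
producer.flush()

# Consumer side: a downstream service reacts to each event.
consumer = KafkaConsumer(
    TOPIC,
    bootstrap_servers=BROKER,
    group_id="fulfillment",
    value_deserializer=lambda b: json.loads(b.decode("utf-8")),
    auto_offset_reset="earliest",
)
for message in consumer:
    print("handling event:", message.value)
```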
Data Governance, Security, and Compliance
- Enforce data privacy, security, and access controls aligned with compliance standards.
- Implement observability and monitoring for data infrastructure and pipelines (see the sketch following this list).
- Work with DevSecOps teams to integrate security into CI/CD workflows.
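As a sketch of the pipeline observability duty above, assuming the prometheus_client library; the metric names and the processing step are hypothetical:

```python
# Minimal observability sketch, assuming the prometheus_client library;
# metric names and the pipeline step are hypothetical.
import time
from prometheus_client import Counter, Histogram, start_http_server

ROWS_PROCESSED = Counter("pipeline_rows_processed_total",
                         "Rows processed by the pipeline")
STEP_LATENCY = Histogram("pipeline_step_seconds",
                         "Latency of a pipeline step")

@STEP_LATENCY.time()
def process_batch(rows):
    # Stand-in for a real transformation step.
    time.sleep(0.1)
    ROWS_PROCESSED.inc(len(rows))

if __name__ == "__main__":
    start_http_server(8000)  # expose /metrics for scraping
    while True:
        process_batch(["r1", "r2", "r3"])
```

Scraping the exposed /metrics endpoint with Prometheus (or a managed equivalent such as Azure Monitor's Prometheus integration) is one common way to wire such counters into dashboards and alerts.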
Required Skills & Experience
- 5+ years of experience in data engineering, with exposure to Data Mesh and Data as a Product preferred.
- Expertise in modern data storage and processing, including SQL and NoSQL databases (e.g., PostgreSQL, Cosmos DB) and data lakes (Azure Data Lake, Delta Lake, Apache Iceberg).
- Proficiency in ETL and streaming frameworks (e.g., Apache Kafka, Airflow, Flink, Spark, Azure Data Factory, Databricks).
- Experience with event-driven architectures using queues and pub/sub services (e.g., Azure Service Bus, Azure Event Grid, Amazon EventBridge) and containerized environments (Azure Container Apps, AWS ECS).
- Experience with Apache and/or Azure data platforms or similar, e.g., Fabric, Databricks, Snowflake, and Apache Hudi.
- Strong API development skills using GraphQL, REST, and/or gRPC for enabling data as a product.
- Proficiency in Go, Java, and/or Python.
- Deep understanding of data governance, security, lineage, and compliance using Microsoft Purview, OpenLineage, Apache Ranger, or Azure Key Vault.
- Experience with Infrastructure as Code (IaC) using Bicep, Terraform, or CloudFormation for managing cloud-based data solutions.
- Strong problem-solving and collaboration skills, working across data, engineering, and business teams.