Typical Duties:
- Work closely with the Enterprise Analytics team to create and maintain ELT processes;
- Assemble large, complex datasets that meet functional / non-functional business requirements;
- Consult with DT&S and business leaders on data and information management practices and governance;
- Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc.;
- Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources using SQL and other technologies;
- Work with stakeholders, including the Executive, to assist with data-related technical issues and support their data infrastructure needs;
- Create data tools for analytics and data science team members that assist them in building and optimizing our business;
- Work with data and analytics experts to strive for greater functionality in our data systems;
- Champion efforts to improve business performance through enterprise information capabilities, such as master data management (MDM), metadata management, analytics, content management, data integration, and related data management or data infrastructure;
- Provide insight into the changing database integration, storage and utilization requirements for the company and offer suggestions for solutions;
- Monitor and understand Information Management trends and emerging technologies.
Qualifications:
Degree or Diploma in Computer Science, Engineering, or Data Science
Minimum of 5 years' recent experience (within the last 7 years) in modern data management principles, including but not limited to ETL, data design, data architecture, data management, data modelling, data quality, and analytics.
SQL Server & SSIS: Expert proficiency with on-premises SQL Server, including stored procedures, and with SSIS package-level deployment.
Data Pipelines: Proven experience designing, creating, and maintaining robust data pipelines and ETL processes.
Monitoring: Skilled in monitoring and troubleshooting database issues to ensure compliance with policies and regulations.
Python for ETL: Advanced Python skills applied to developing ETL processes following software development best practices (including automated testing and code reviews).
Big Data Tools: Proficient in leveraging big data technologies, including PySpark and SparkSQL for large-scale data processing.
Cloud Expertise: Hands-on experience with cloud-based platforms such as Databricks, Azure Data Factory, and Azure Data Lake.
Lakehouse Architecture: Knowledgeable in implementing lakehouse architectures using Delta format and optimization strategies.
API Integration: Experience working with external third-party APIs as ETL sources, including Microsoft Graph APIs to integrate and automate tasks across Microsoft services.
Automation & Deployment: Familiar with CI/CD processes and tools, including Databricks Asset Bundles (DABs) for managing workflows, and proficient with version control systems (e.g., Git) for ETL deployments.
Understanding of data management principles as outlined by DAMA/DMBoK
Ability to provide insights on evolving database integration, storage, and utilization needs
Experience overseeing integration from legacy/on-premises systems to new solutions
Clear communication of technical information and the ability to train/support staff
Knowledge of data privacy and confidentiality regulations (e.g., PIPEDA)
Relevant job experience in North America