In this role, you will
- Lead the design, development, and maintenance of scalable, robust, and reusable data pipelines.
- Architect and implement data migration, data warehousing, and ETL/ELT projects.
- Champion the transformation and consolidation of legacy ETL tools.
- Ensure data quality, integrity, and security across various data systems.
- Drive data-driven decision-making and innovation across high-impact projects.
- Collaborate with the QA team to develop test strategies and automated test scripts.
- Oversee all aspects of the data delivery life cycle, from discovery, analysis, design, development, and testing through release planning and implementation of data systems.
- Guide and collaborate with data developers and engineers to ensure adherence to CI/CD and DevOps best practices.
- Partner with the Data Quality Assurance Team to ensure the highest quality standards are applied to every process within data engineering.
- Translate requirements into detailed functional and technical designs.
- Provide consultation for evaluating data and software systems.
- Manage multiple competing projects and prioritize them effectively.
- Lead vendor engagements and collaborate with vendors as needed.
What we're looking for
- Bachelor's degree in Engineering, Computer Science, or a related field.
- Expert-level knowledge of SQL, Python, and Shell scripting.
- Expert-level knowledge of Snowflake and Microsoft SQL Server.
- Experience with Node.js and .NET.
- Expert-level knowledge of ETL and API development.
- Extensive experience developing and automating CI/CD pipelines.
- Expert-level knowledge of DBT for designing, developing, and maintaining data pipelines.
- Expert-level knowledge of data modeling and data architecture.
- Experience with Agile Scrum, Kanban, Iterative Data Development, and PACE methodologies.
Nice to have:
- Experience with AWS EC2, S3, ECS, Glue, and Kinesis.
- Experience with DevOps tools such as Git, Docker, Jenkins, and Octopus.
- Knowledge of Terraform for designing and implementing infrastructure solutions.