ML Engineer

AssistIQ
Toronto, ON
Posted yesterday
Job Details:
Full-time
Experienced
Benefits:
Health Insurance
Flexible Work

About Us:

At AssistIQ, we are dedicated to creating a more efficient and transparent healthcare supply chain by fixing one of its core problems: providers lack accurate data and insights on their supply and implant usage. Our AI-driven software provides highly accurate, seamless capture of supply and implant usage in real time and generates actionable insights for healthcare systems, enabling better revenue capture and reduced waste, ultimately leading to better value of care and better outcomes for patients.

About the Role:

As an ML Engineer, you'll transform prototypes and experimental models developed by our Data Scientists into scalable, maintainable, production-ready Python applications. You'll integrate these solutions with databases, cloud services, and the UX interface, ensuring performance, reliability, and alignment with broader engineering standards. You will apply engineering best practices to support a scalable implementation model that serves our customers.

Your ultimate goal is to deliver stable and successful solutions to our customers.

We're excited by candidates who enjoy and excel in a fast-paced entrepreneurial environment. To succeed, you'll need to combine strong Python development skills with a pragmatic understanding of how to turn experimental code into robust, scalable, cloud-ready applications. Equally important, you'll thrive in close collaboration with data scientists, engineers, product managers, and other cross-functional team members.

Given the nature of startup life, this role is dynamic: priorities evolve regularly alongside strong delivery commitments.

Responsibilities:

  • Productionize machine learning models and data science workflows
    • Translate Jupyter notebook code into clean, modular Python code
    • Develop and debug code within Jupyter Notebooks
    • Refactor and optimize algorithms for efficiency, scalability, and maintainability
    • Package models into deployable components (e.g., Docker containers, Python packages)
    • Implement model inference pipelines and batch or streaming prediction jobs
    • Monitor and troubleshoot performance of models in production
    • Collaborate with Data Scientists to validate model behavior and output post-deployment
    • Provide project updates to customer stakeholders throughout the implementation process
    • Identify and promptly escalate risks to the implementation timeline
  • Develop and maintain backend Python services and APIs
    • Design, build and maintain a Python-based framework that enables repeatable development and seamless deployment of machine learning models and advanced analytics solutions
    • Handle input validation, data preprocessing, and result formatting in services
    • Write automated tests to ensure code reliability and reproducibility
    • Integrate logging, exception handling, and versioning in deployed services
    • Manage dependency configuration and environment setup for deployments
    • Optimize response time and throughput for model-serving endpoints
  • Collaborate on cloud and database integration for solid and scalable deployment
    • Interface with cloud platforms (e.g., AWS, GCP) for deployment and storage
    • Work with relational and non-relational databases (e.g., PostgreSQL, BigQuery)
    • Implement data ingestion and feature retrieval pipelines
    • Ensure secure and compliant access to sensitive data
    • Contribute to development workflows for seamless deployment and updates

Requirements:

  • 3+ years of hands-on Python programming experience, with strong knowledge of software engineering best practices
  • Proven experience turning data science prototypes into production-grade code and services
  • 2+ years of experience deploying and supporting Python-based ML workloads and ETL data pipelines
  • Familiarity with machine learning concepts and workflows, even if not building models from scratch
  • Experience deploying and maintaining applications in cloud environments (e.g., AWS, GCP, Azure)
  • Experience with Apache Airflow or similar workflow orchestration tools for building and managing data and ML pipelines
  • Solid understanding of database technologies, including both SQL and NoSQL systems
  • Proficiency with development tools such as Git, Docker, Makefiles, virtual environments, and testing frameworks
  • Ability to build and document modular, reusable, and testable code for long-term maintainability
  • Strong problem-solving mindset, with the ability to work independently
  • Ability to adapt quickly and switch between tasks or priorities in a fast-paced, dynamic start-up environment
  • Excellent communication and collaboration skills, with a willingness to work closely with Data Scientists, Engineers, and Product teams

Benefits:

  • Health insurance
  • Business travel when needed
  • 3 weeks of vacation
  • 10 sick days
  • Flexible work hours
  • Hybrid in Toronto or Montreal
