**Company:** University of California, San Francisco
**Job Title:** Machine Learning Engineer
**Job Description:**
**Role Summary:**
Design, implement, and maintain scalable data pipelines and infrastructure to support the production, monitoring, and continuous improvement of machine‑learning and generative‑AI models on a cloud platform. Collaborate with research teams to integrate Epic EHR data, deploy AI/ML tools, and define performance metrics for clinical deployment.
**Expectations:**
- 5+ years of experience building and maintaining AI/ML production pipelines.
- Proven expertise in MLOps, CI/CD, and cloud‑native architecture.
- Strong command of Python, SQL, and ML libraries (scikit‑learn, PyTorch, etc.).
- Deep understanding of Epic Clarity/Caboodle data models and ability to obtain Epic data model certification.
- Demonstrated ability to translate research prototypes into production‑ready solutions and to communicate technical concepts to diverse stakeholders.
**Key Responsibilities:**
- Develop and enhance ETL processes for integrating Epic and other data sources into the HIPAC platform.
- Design, build, and sustain production‑grade ML model pipelines (training, inference, monitoring).
- Implement CI/CD workflows for code, model, and data versioning; automate testing and validation.
- Define and monitor metrics to assess model performance, drift, and safety in a clinical setting.
- Collaborate with data scientists, clinicians, and IT teams to translate research requirements into scalable engineering solutions.
- Maintain documentation, guidelines, and best practices for data handling, model lifecycle, and security compliance.
- Participate in project planning, budgeting discussions, and regular stakeholder updates.
**Required Skills:**
- Advanced Python programming (clean, production‑ready code).
- Advanced SQL proficiency (SQL Server, PostgreSQL).
- Experience with the Python data science and ML stack (Jupyter, pandas, NumPy/SciPy, scikit‑learn, PyTorch).
- Strong knowledge of MLOps, DevOps, and CI/CD tools (e.g., Git, Jenkins, Docker, Kubernetes, MLflow).
- Expertise in data warehousing, ETL pipeline design, and cloud architecture (AWS, GCP, Azure).
- Automated testing, monitoring, and logging of ML models.
- Excellent communication, problem‑solving, and project leadership.
- Ability to work independently and as part of a collaborative team.
**Required Education & Certifications:**
- Bachelor’s degree in Computer Science, Computer Engineering, or related field (or equivalent professional experience).
- Epic Clarity/Caboodle data model certification (or willingness to obtain it after onboarding).
San Francisco, United States
On-site
Mid-level
24-02-2026