Job Specifications
500M+ downloads. 77M+ monthly users. A decade of building – and we’re still accelerating.
Flo is the world’s #1 health & fitness app, on a mission to build a better future for female health. Backed by a $200M investment led by General Atlantic, we became the first product of our kind to reach a $1B valuation in 2024 – and we’re not slowing down.
With 6M paid subscribers and the highest-rated experience in the App Store’s health category, we’ve spent 10 years earning trust at scale. Now, we’re building the next generation of digital health – AI-powered, privacy-first, clinically backed – to help our users know their body better.
The job
We are looking for an AI/ML Platform Engineer to join the AI Platform team. This team builds and maintains Flo's shared platform for artificial intelligence, enabling every product team to use AI safely, efficiently, and at scale. If you're passionate about cutting-edge generative AI technologies and driven by building robust ML infrastructure that delivers value to millions of users, we would love to hear from you!
In this role, you will work at the intersection of machine learning engineering and MLOps, owning both the development and operationalisation of AI/ML systems. Your responsibilities will span from fine-tuning and optimising large language models to building and maintaining the infrastructure that enables rapid experimentation and reliable deployment at scale. You will work with state-of-the-art technologies, including LLMs, model evaluation frameworks, and modern ML infrastructure, to build solutions that are medically safe and reliable at scale.
The AI Platform team acts as the central enabler of machine learning and AI initiatives across the organisation. Its mission is to reduce operational overhead and maximise ROI from ML use cases. The team builds and maintains critical infrastructure, including LLM evaluation frameworks (AI Judges), model deployment pipelines, fine-tuning infrastructure, a user profile store, experiment tracking systems, and monitoring frameworks. By working closely with domain teams, the AI Platform team delivers scalable, high-quality solutions that accelerate time-to-market while ensuring compliance and maintaining the highest standards of performance.
What You'll Be Doing
Develop, fine-tune, and optimise large language models for domain-specific health applications, working with both proprietary and open-source models (Gemini, GPT, Llama, etc.)
Design and maintain automated pipelines for model training, fine-tuning, evaluation, and deployment across diverse AI workloads
Build and enhance LLM evaluation frameworks (AI Judges) for measuring model safety, medical accuracy, and performance
Implement CI/CD practices for ML/AI engineering workflows, including experiment tracking, model versioning, and automated testing
Orchestrate seamless deployment of models, AI agents, and inference endpoints with automated testing and rollback capabilities
Implement comprehensive monitoring for model performance, drift detection, AI safety metrics, and responsible AI compliance
Constantly improve technical capabilities by researching and implementing best practices in the rapidly evolving space of LLMs and generative AI
Work in a cross-functional setup alongside other Flo teams (Product, Security, Analytics, Marketing, Legal, etc.)
Must have:
4+ years of professional experience in machine learning, with hands-on experience building and deploying production-grade AI/ML systems
Recent engineering experience with LLM infrastructure and tooling, including fine-tuning (LoRA, SFT, or other methods), prompt engineering, and model evaluation
Strong Python programming skills for efficient model development, experimentation, and deployment
Experience with modern ML infrastructure tools such as MLflow, experiment tracking systems, and model registries
Experience with Databricks (or a similar platform), including Unity Catalog, MLflow, and Databricks Machine Learning, for end-to-end AI/ML workflows
Cloud platform expertise with one of the major hyperscalers (AWS, GCP, or Azure), including their AI-specific services
Understanding of the entire ML/LLM development lifecycle, including CI/CD, version control, testing, and agile methodologies
Ability to devise creative solutions to intricate technical challenges, including systems design experience architecting and explaining ML/LLM pipelines
Excellent communication skills and ability to collaborate with diverse teams
Commitment to responsible AI practices, including fairness, accountability, and transparency
Nice to have:
Experience with LLM fine-tuning techniques including LoRA adapters, preference optimisation (RLHF/DPO), and model distillation
Containerisation and orchestration experience with Docker, Kubernetes, and ML-specific operators
AI model serving experience with modern inference servers and API gateways for AI applications
Infrastructure as Code experience with Terraform, Ansible, or other IaC tools
Experienc