**Company Name**
Ubisoft
**Job Title**
Senior Machine Learning Engineer – AI Initiatives (W/M/NB)
**Job Description**
**Role Summary**
Design, build, evaluate, and deploy AI systems that power Ubisoft’s advanced information lifecycle tools, including hybrid search engines, RAG (retrieval-augmented generation) pipelines, and agentic reasoning systems. Convert cutting‑edge ML research into scalable, production‑ready solutions for internal and external stakeholders.
**Expectations**
- Lead end‑to‑end ML product development from concept to production.
- Own the full model lifecycle: design, fine‑tune, evaluate, deploy, monitor, and iterate.
- Work closely with software, SRE, and product teams to embed AI features in scalable architectures.
- Translate internal platform usage data into actionable product insights.
- Apply MLOps best practices and cloud‑native deployment patterns.
**Key Responsibilities**
- Design, optimize, and maintain LLM‑based, retrieval‑augmented, and multimodal ML models for production use.
- Build and refine RAG pipelines, agent workflows, and vector search solutions.
- Deploy models via APIs using Docker and AWS, GCP, or Azure, ensuring scalability and cost efficiency.
- Implement monitoring, drift detection, CI/CD, and model registry practices.
- Conduct experiments, define evaluation frameworks, and continuously improve model quality.
- Analyze internal AI usage metrics (chat interactions, prompts, logs) to surface themes and product insights.
- Collaborate with cross‑functional teams to integrate AI capabilities into large‑scale systems.
**Required Skills**
- Proficiency in Python; experience with PyTorch, TensorFlow, JAX, or similar frameworks.
- Deep knowledge of LLMs, transformer architectures, embeddings, fine‑tuning, and prompt engineering.
- Strong background in RAG, hybrid search, vector databases, and query optimization.
- Production‑grade deployment skills: Docker, Kubernetes, serverless, and cloud APIs (AWS, GCP, Azure).
- Familiarity with MLOps pipelines: model registry, CI/CD, monitoring, drift detection, inference optimization (quantization, distillation, batching, caching).
- Data pipeline understanding, experiment design, and performance metrics.
- Optional: experience with agentic AI systems (LangGraph, CrewAI, Strands Agent), multimodal models, open‑source contributions, or cloud‑native AI workloads.
**Required Education & Certifications**
- Bachelor’s or Master’s degree in Computer Science, Electrical Engineering, Data Science, or related field.
- Professional certifications in cloud platforms (AWS, GCP, Azure) or ML/AI are advantageous.