Enigma

www.enigma-rec.ai

6 Jobs

33 Employees

About the Company

Here at Enigma, we specialize in Generative AI recruitment, focused on the Machine Learning and Software Engineering disciplines. With 20+ years of combined experience, we understand the intricacies of finding the perfect role as well as the right talent for your team.

But what sets Enigma apart? Our consultative approach. We don't just match candidates with job openings; we guide candidates, founders, and hiring managers through the entire recruitment process. We go beyond traditional recruitment, offering value-added services such as recruitment process structuring, salary benchmarking, share allocation, and CV reviews.

We're more than recruiters – we're partners in your success. Our mission is not only to connect you with exceptional talent and companies but also to support your growth every step of the way, building connections that drive growth and innovation for years to come.

Listed Jobs

Company Name
Enigma
Job Title
Research Scientist | Diffusion Modeling | Python | PyTorch | Machine Learning | Generative Modeling | Hybrid, San Francisco
Job Description
**Job Title**
Research Scientist – Diffusion Modeling (Python, PyTorch)

**Role Summary**
Lead the design and deployment of cutting‑edge diffusion‑based generative models for protein engineering. Translate biological objectives into machine‑learning solutions, ensuring high‑quality, data‑driven models that can be validated in laboratory experiments.

**Expectations**
* Deep expertise in generative modeling, especially diffusion methods.
* Proven track record of research impact through publications, open‑source contributions, or product deployments.
* Strong engineering discipline: write clean, well‑tested Python (PyTorch) code, manage CI/CD, and produce scalable, production‑grade systems.
* Ability to build and maintain large‑scale data pipelines and optimize model performance on distributed/cloud hardware.
* Collaborative mindset: work closely with interdisciplinary research and engineering teams, and with wet‑lab scientists to align model outputs with experimental needs.

**Key Responsibilities**
1. Design, implement, and iterate diffusion‑based generative models for functional protein creation.
2. Curate, split, and preprocess training/evaluation datasets; develop robust data pipelines.
3. Define and maintain evaluation metrics that reflect real‑world experimental goals.
4. Prototype rapid proof‑of‑concept models and transition promising approaches to production‑level code.
5. Collaborate in a shared codebase, review peers' work, and contribute to shared infrastructure (compute, experiment tracking).
6. Liaise with experimental teams to plan in‑vitro tests, run inference on biological targets, and incorporate lab feedback into model refinement.
7. Stay current with advances in machine learning, diffusion theory, and protein science; disseminate knowledge within the group.

**Required Skills**
* Proficiency in Python, PyTorch, and related ML libraries.
* Expertise in generative modeling, particularly diffusion and transformer‑style architectures.
* Experience training large‑scale models on cloud or distributed systems (GPU/TPU).
* Strong software engineering practices: version control (Git), test‑driven development, CI/CD, containerization.
* Data engineering: pipeline construction, data inspection, dataset splitting, logging, and monitoring.
* Model optimization: architecture tuning, throughput improvement, and evaluation metric design.
* Excellent communication and teamwork abilities.

**Required Education & Certifications**
* PhD or Master's degree in Computer Science, Machine Learning, Computational Biology, or a related field.
* Demonstrated research experience in generative modeling or a closely related ML domain.
* Prior experience with protein science or life‑science applications is highly desirable.
San Francisco Bay, United States
Hybrid
31-12-2025
Company Name
Enigma
Job Title
Senior Machine Learning Engineer | Python | PyTorch | Machine Learning | Large Language Models | RAG | Remote, UK
Job Description
**Job Title:** Senior Machine Learning Engineer

**Role Summary:** Lead the development and deployment of machine learning infrastructure, transitioning AI research into scalable, production-grade solutions for global customer impact.

**Expectations:**
- Own end-to-end ML systems from prototype to production.
- Drive cross-functional collaboration with product and leadership teams.
- Deliver measurable improvements in AI agent performance and product innovation.

**Key Responsibilities:**
- Design and implement evaluation frameworks for ML model performance tracking.
- Develop and manage production ML pipelines, ensuring reliability and scalability.
- Integrate ML systems into customer-facing products across teams.
- Optimize ML architectures and experiment with RAG and prompt engineering.
- Scale infrastructure and tooling to support rapid growth.
- Mentor junior engineers through hands-on technical leadership.

**Required Skills:**
- 5+ years of production ML experience, including system scaling.
- Proficiency in Python, PyTorch, and frameworks such as LangChain.
- Strong foundation in classical and deep learning, with expertise in LLMs/transformers.
- Track record of deploying ML into real-world products.
- Experience in cross-functional collaboration and autonomous decision-making.

**Required Education & Certifications:** Not specified.
United Kingdom
Remote
Senior
30-01-2026
Company Name
Enigma
Job Title
Computer Vision Research Engineer
Job Description
**Job Title**
Computer Vision Research Engineer

**Role Summary**
Design, develop, and deploy computer vision solutions for realistic sign language video generation. Lead the use of Gaussian Splatting and diffusion‑model techniques, train on proprietary datasets, and optimize performance for production use.

**Expectations**
- Deliver high‑fidelity pose‑to‑video and video enhancement models using diffusion and Gaussian Splatting.
- Scale models for real‑time inference on cloud or on‑premise infrastructure.
- Collaborate with cross‑functional teams, manage the model lifecycle, and publish technical findings.

**Key Responsibilities**
1. Explore and adapt open‑source diffusion and Gaussian Splatting models for sign language video synthesis.
2. Train and fine‑tune diffusion video generation models on proprietary pose and video data.
3. Train Gaussian Splatting models to generate 3D body movement representations from sign language data.
4. Optimize inference pipelines (TensorRT, ONNX, quantisation, distillation) to meet latency and scalability targets.
5. Deploy models on compute clusters (e.g., Condor, SLURM) and manage deployment infrastructure.
6. Work with data engineering and translation teams to curate high‑quality datasets for training and evaluation.
7. Document methodologies, produce reproducible pipelines, and contribute to open‑source or academic outputs.

**Required Skills**
- 2+ years of experience with diffusion models (training & inference).
- 3+ years of experience applying computer vision techniques in a commercial setting.
- 3+ years of professional Python development.
- Proven deployment of vision solutions at scale (cloud or cluster).
- Familiarity with Docker, Git, and CI/CD workflows.
- Strong knowledge of computer vision foundations (feature extraction, optical flow, 3D reconstruction).
- Experience optimizing models for inference (TensorRT, ONNX, quantisation).
- Ability to analyze and improve system latency and throughput.

**Required Education & Certifications**
- Bachelor's degree in Computer Science, Electrical Engineering, or a related science discipline.
United Kingdom
Remote
Junior
03-02-2026
Company Name
Enigma
Job Title
Senior Infrastructure Engineer | Kubernetes | Docker | Terraform | Python | GPU | Onsite, London
Job Description
**Job Title**
Senior Infrastructure Engineer – Kubernetes, Docker, Terraform, Python, GPU

**Role Summary**
Lead the design, deployment, and operation of a production Kubernetes platform that powers long‑running, failure‑prone reinforcement‑learning agent workloads. Own the end‑to‑end lifecycle of containerised evaluation environments, ensuring high availability, efficient GPU utilisation, robust observability, and secure sandboxing for untrusted code execution.

**Expectations**
- Deliver on‑call and incident response for mission‑critical workloads.
- Drive infrastructure reliability, scalability, and performance improvements.
- Collaborate with research, data science, and ops teams to align environment capabilities with evolving training needs.
- Maintain clear, actionable documentation, runbooks, and dashboards.

**Key Responsibilities**
- Own and evolve the Kubernetes runtime: scheduling, lifecycle management, and autoscaling for multi‑hour/day agent runs.
- Optimize GPU scheduling, resource allocation, and image layering to minimise cold‑start times and maximise utilisation.
- Design storage patterns for datasets, model checkpoints, and transient state.
- Build observability: metrics, logs, traces, dashboards, and alerting tied to SLOs (e.g., rollout success rate, environment health, queue latency).
- Create debugging playbooks and runbooks for OOMs, memory leaks, performance regressions, and network/storage issues.
- Implement reliability engineering: retry/backoff strategies, checkpointing, idempotence, graceful degradation, and chaos testing for failure injection.
- Harden sandboxing: container isolation, network policies, secrets management, audit logging, and rate limiting of external API calls.

**Required Skills**
- Deep production experience managing Kubernetes (resource limits, affinity/taints, priorities, autoscaling, node health).
- Strong distributed‑systems fundamentals: idempotency, retries, failure domains, incident response.
- Practical observability: metrics, structured logging, tracing.
- Ability to build tooling in Python and/or Go.
- Infrastructure‑as‑code and automation: Helm, Terraform, GitOps workflows.
- Redis expertise for high‑throughput, session‑oriented workloads.
- GPU scheduling, container runtimes, Linux performance tuning, and networking fundamentals.

**Required Education & Certifications**
- Bachelor's (or higher) degree in Computer Science, Engineering, or a related field, or equivalent professional experience.
- Kubernetes certification (CKA/CKAD) preferred.
- Terraform, cloud‑infra, or DevOps certifications advantageous.
London, United Kingdom
On site
Senior
15-02-2026