Red Hat

Machine Learning Engineer

Hybrid

Boston, United States

$220,680 / year

Full Time

21-01-2026


Skills

Communication, Python, Go, Rust, Kubernetes, Testing, Networking, Research, Training, Architecture, Linux, Machine Learning, Deep Learning, Benchmarking, C++, gRPC

Job Specifications

Job Summary

At Red Hat, we believe the future of AI is open, and we are on a mission to bring the power of open-source LLMs and vLLM to every enterprise. The Red Hat AI Inference Engineering team accelerates AI for the enterprise and brings operational simplicity to GenAI deployments. As leading developers and maintainers of the vLLM and llm-d projects, and inventors of state-of-the-art techniques for model quantization and sparsification, our team provides a stable platform for enterprises to build, optimize, and scale LLM deployments.

As a Machine Learning Engineer focused on distributed vLLM infrastructure in the llm-d project, you will be at the forefront of innovation, collaborating with our team to tackle the most pressing challenges in scalable inference systems and Kubernetes-native deployments. Your work with machine learning, distributed systems, high-performance computing, and cloud infrastructure will directly impact the development of our cutting-edge software platform, helping to shape the future of AI deployment and utilization. If you want to solve cutting-edge problems at the intersection of deep learning, distributed systems, and cloud-native infrastructure the open-source way, this is the role for you.

Join us in shaping the future of AI!

What You Will Do

Contribute to the design, development, and testing of new features and solutions for Red Hat AI Inference
Innovate in the inference domain by participating in upstream communities
Develop and maintain distributed inference infrastructure leveraging Kubernetes APIs, operators, and the Gateway API Inference Extension for scalable LLM deployments
Develop and maintain system components in Go and/or Rust to integrate with the vLLM project and manage distributed inference workloads
Develop and maintain KV cache-aware routing and scoring algorithms to optimize memory utilization and request distribution in large-scale inference deployments (a minimal scoring sketch follows this list)
Enhance the resource utilization, fault tolerance, and stability of the inference stack
Develop and test various inference optimization algorithms
Actively participate in technical design discussions
Contribute to a culture of continuous improvement by sharing recommendations and technical knowledge with team members
Collaborate with other engineering and cross-functional teams to deliver on engineering deliverables
Communicate effectively to team members to ensure proper visibility of development efforts
Receive coaching and mentorship from senior members of the team
Provide timely and constructive code reviews
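
To make the KV cache-aware routing item above concrete, here is a minimal sketch in Go of how a scorer might trade off prefix-cache reuse against load and free KV memory. The replica fields, weights, and function names are illustrative assumptions only, not llm-d's actual scorer or the Gateway API Inference Extension's endpoint-picking logic.

```go
// A hypothetical KV cache-aware scorer for choosing a vLLM replica.
package main

import "fmt"

// Replica captures per-pod signals a scorer might consider (assumed fields).
type Replica struct {
	Name            string
	PrefixHitBlocks int // KV blocks of the request prefix already cached on this pod
	QueueDepth      int // requests currently waiting on this pod
	FreeKVBlocks    int // free KV cache blocks available for new sequences
}

// score rewards cache reuse and free memory and penalizes queueing.
// The weights are arbitrary placeholders for illustration.
func score(r Replica, promptBlocks int) float64 {
	hitRatio := 0.0
	if promptBlocks > 0 {
		hitRatio = float64(r.PrefixHitBlocks) / float64(promptBlocks)
	}
	return 3.0*hitRatio + 0.01*float64(r.FreeKVBlocks) - 0.5*float64(r.QueueDepth)
}

// pickReplica returns the highest-scoring replica for a request whose prompt
// occupies promptBlocks KV blocks.
func pickReplica(replicas []Replica, promptBlocks int) Replica {
	best := replicas[0]
	bestScore := score(best, promptBlocks)
	for _, r := range replicas[1:] {
		if s := score(r, promptBlocks); s > bestScore {
			best, bestScore = r, s
		}
	}
	return best
}

func main() {
	replicas := []Replica{
		{Name: "vllm-0", PrefixHitBlocks: 120, QueueDepth: 4, FreeKVBlocks: 800},
		{Name: "vllm-1", PrefixHitBlocks: 0, QueueDepth: 1, FreeKVBlocks: 2400},
	}
	fmt.Println("route request to:", pickReplica(replicas, 128).Name)
}
```

A production scheduler would also factor in priorities, SLOs, and live telemetry; the point here is only the shape of the cache-aware trade-off.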

What You Will Bring

Strong proficiency in Python and/or Go, or a similar language
Experience with cloud-native Kubernetes service mesh technologies and stacks such as Istio, Cilium, Envoy (WASM filters), and CNI
Working understanding of Layer 7 networking, HTTP/2, gRPC, and the fundamentals of API gateways and reverse proxies (an illustrative reverse-proxy sketch follows this list)
Knowledge of serving runtime technologies for hosting LLMs, such as vLLM, SGLang, and TensorRT-LLM
Excellent written and verbal communication skills, capable of interacting effectively with both technical and non-technical team members
Ability to work independently in a dynamic, fast-paced environment
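
As a small illustration of the reverse-proxy and Layer 7 fundamentals mentioned above, below is a hedged sketch using only the Go standard library. The backend address and header name are placeholders; real inference gateways (Envoy, the Gateway API, llm-d's routing layer) handle routing, retries, and protocol concerns far beyond this.

```go
// A minimal Layer 7 reverse proxy in front of a hypothetical vLLM backend.
package main

import (
	"log"
	"net/http"
	"net/http/httputil"
	"net/url"
)

func main() {
	// Hypothetical vLLM backend serving an OpenAI-compatible HTTP API.
	backend, err := url.Parse("http://localhost:8000")
	if err != nil {
		log.Fatal(err)
	}

	// httputil.ReverseProxy rewrites and forwards requests at Layer 7,
	// the same role an API gateway or Envoy plays in front of vLLM.
	proxy := httputil.NewSingleHostReverseProxy(backend)

	// Mark responses so clients can see they passed through the proxy.
	proxy.ModifyResponse = func(resp *http.Response) error {
		resp.Header.Set("X-Proxied-By", "example-gateway")
		return nil
	}

	log.Println("listening on :8080, forwarding to", backend)
	log.Fatal(http.ListenAndServe(":8080", proxy))
}
```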

The following is considered a plus:

Proficiency in C, C++, or Rust
Experience with the Kubernetes ecosystem, including core concepts, custom APIs, operators, and the Gateway API Inference Extension for GenAI workloads
Working knowledge of high-performance networking protocols and technologies, including UCX, RoCE, InfiniBand, and RDMA
Experience with GPU performance benchmarking and profiling tools like NVIDIA Nsight, or distributed tracing libraries and techniques like OpenTelemetry
Experience in writing high performance code for GPUs and deep knowledge of GPU hardware
Strong understanding of computer architecture, parallel processing, and distributed computing concepts
Bachelor's degree in computer science or a related field is an advantage, though we prioritize hands-on experience
Active engagement in the ML research community (publications, conference participation, or open source contributions) is a significant advantage

#AI-HIRING

The salary range for this position is $133,650.00 - $220,680.00. Actual offer will be based on your qualifications.

Pay Transparency

Red Hat determines compensation based on several factors including but not limited to job location, experience, applicable skills and training, external market value, and internal pay equity. Annual salary is one component of Red Hat’s compensation package. This position may also be eligible for bonus, commission, and/or equity. For positions with Remote-US locations, the actual salary range for the position may differ based on location but will be commensurate with job duties and relevant work experience.

About the Company

Red Hat is the world's leading provider of enterprise open source solutions, using a community-powered approach to deliver high-performing Linux, hybrid cloud, edge, and Kubernetes technologies. We hire creative, passionate people who are ready to contribute their ideas, help solve complex problems, and make an impact. Opportunities are open. Join us.