LeadStack Inc.

AI/ML Engineer - 25-02707

Hybrid

Stanford, United States

$65/hour

Junior

Freelance

05-01-2026


Skills

Python, Go, Rust, TypeScript, SQL, NoSQL, Data Engineering, GitHub, CI/CD, DevOps, Docker, Monitoring, Version Control, AWS Lambda, AWS CloudFormation, Networking, Training, Machine Learning, PyTorch, TensorFlow, Regression, Databases, Git, AWS, Agile, LangChain, CI/CD Pipelines, Infrastructure as Code, Microservices, GitHub Actions


Job Description

LeadStack Inc. is an award-winning, certified minority-owned (MBE) staffing services provider of contingent workforce solutions and one of the nation's fastest-growing firms in the industry. As a recognized industry leader in contingent workforce solutions and a Certified Great Place to Work, we're proud to partner with some of the most admired Fortune 500 brands in the world.

Job Title: AI/ML Engineer (W2 only)

Duration: 12 months

Location: Stanford, CA 94305 (Hybrid)

Position Overview

The AI/ML Engineer will be a key technical contributor driving CGOE’s AI transformation initiatives, with a focus on building and deploying intelligent, cloud-native applications including GenAI-powered systems, retrieval-augmented assistants, and data-driven automation workflows. Working at the intersection of machine learning, cloud engineering, and educational innovation, the engineer will convert complex requirements into scalable, secure, and maintainable AWS-native AI systems that enhance teaching, learning, and operations across CGOE’s global online programs.

Top Requirements

3+ years deploying AI/ML applications in production environments.
Strong experience with Python and AWS (serverless, microservices, CI/CD, IAM).
At least one AWS Associate-level certification (e.g., Solutions Architect Associate, Developer Associate, SysOps Administrator Associate, Data Engineer Associate).

Key Responsibilities

AI Application & Systems Development

Own the design and end-to-end implementation of AI systems combining GenAI, narrow AI, and traditional ML models (e.g., regression, classification).
Implement retrieval-augmented generation (RAG), multi-agent, and protocol-based AI systems (e.g., Model Context Protocol/MCP) using modern frameworks such as LangChain or LlamaIndex.
Integrate AI capabilities into production-grade applications using serverless and containerized architectures (AWS Lambda, Fargate, ECS).
Fine-tune and optimize existing models for specific educational and administrative use cases, focusing on performance, latency, and reliability.
Build and maintain data pipelines for model training, evaluation, and monitoring using AWS services such as Glue, S3, Step Functions, and Kinesis.

Cloud & Infrastructure Engineering

Architect and manage scalable AI workloads on AWS leveraging services such as SageMaker, Bedrock, API Gateway, EventBridge, and IAM-based security.
Build microservices and APIs to integrate AI models into applications and backend systems.
Develop automated CI/CD pipelines to ensure continuous delivery, observability, and monitoring of deployed workloads (e.g., GitHub Actions, CodePipeline).
Apply containerization best practices using Docker and manage workloads via AWS Fargate and ECS for scalable, serverless orchestration and reproducibility.
Ensure compliance with applicable data privacy regulations (e.g., FERPA, GDPR-style requirements) for secure data handling and governance.

Collaboration, Culture & Continuous Improvement

Collaborate with cross-functional teams (engineering, product, academic stakeholders, operations) to deliver integrated and impactful AI solutions.
Use Git-based version control and follow code review best practices as part of a collaborative, agile workflow.
Operate within an agile, iterative development culture, participating in sprints, retrospectives, and planning sessions.
Continuously learn and adapt to emerging AI frameworks, AWS tools, and cloud technologies, contributing to documentation, internal knowledge sharing, and mentoring as the team scales.

Requirements

Education & Certifications

Bachelor’s degree in Computer Science, AI/ML, Data Engineering, or a related field (Master’s preferred).
At least one AWS Associate-level certification required; professional-level or specialty certifications (e.g., Machine Learning Specialty, Advanced Networking, Security) are a plus.

Experience

3+ years of experience developing and deploying AI/ML-driven applications in production environments.
2+ years of hands-on experience with AWS-based architectures (serverless, microservices, CI/CD, IAM).
Proven ability to design, automate, and maintain data pipelines for model inference, evaluation, and monitoring.
Experience with both GenAI and traditional ML techniques in applied, production settings.

Technical Skills

Languages: Python (required); familiarity with Go, Rust, R, or TypeScript preferred.
AI/ML Frameworks: PyTorch, TensorFlow, LangChain, LlamaIndex, or similar libraries for RAG and agentic workflows.
Cloud & Infrastructure: AWS SageMaker, Bedrock, Lambda, ECS/Fargate, API Gateway, EventBridge, Glue, S3, Step Functions, IAM, CloudWatch.
Infrastructure as Code: AWS CloudFormation.
DevOps & Tools: Git, Docker, AWS Fargate, ECS, CI/CD (GitHub Actions, CodePipeline).
Data Systems: SQL/NoSQL databases, vector databases, and AWS-native data services for AI workloads.

Desired Attributes

Strong understanding of data engineering fundamentals and production-quality AI system design.
Passion for applying AI to improve teaching, learning, and operations across global online programs.

About the Company

We are one of the nation's fastest-growing, award-winning staffing services providers of contingent workforce solutions. As a nationally recognized MBE, we transform how businesses connect with top-tier talent. Our high-impact, high-value staff augmentation and SOW (T&M) solutions drive exceptional results across IT, Life Sciences, and Marketing & Creative Services. Our recognitions include:

"Best Staffing Firms to Work For" 2023, 2024, 2025 by Staffing Industry Analysts
Certified Great Place to Work
"Fastest Growing Staffing Firm..."