Moonlite AI

moonlite.ai

1 Job

9 Employees

About the Company

Moonlite is building a cloud-native experience on-prem. Our software provides the control and customization enterprises need for AI.

Build Faster with Moonlite

Instantly download and deploy NIMs from NVIDIA or build your own applications with Hugging Face. Customize and deploy AI agents in one click or integrate your own with ease.

Total Control Over Your AI

Obtain the highest level of security by design for your private environments. Moonlite provides total visibility into all your resources, applications, and users.

Find Value with Your Use Case

Allocate resources in real time as needed in your environment. Use the models that best align with your use cases. When a new model is released, test it out and power your applications with it.

Listed Jobs

Company Name
Moonlite AI
Job Title
Senior Software Engineer, Storage Platform
Job Description
**Job Title:** Senior Software Engineer, Storage Platform

**Role Summary**

Design and implement high-performance storage platforms for AI infrastructure, supporting massive datasets and enterprise-scale data processing requirements.

**Expectations**

5+ years in software engineering with proven experience in storage platforms, distributed storage systems, or data infrastructure for production environments.

**Key Responsibilities**

- Design and build scalable storage orchestration systems for block, object, and file storage optimized for AI training datasets, model checkpoints, and large-scale data processing.
- Develop systems for Kubernetes/SLURM clusters, enabling shared datasets, persistent storage, and high-throughput access for distributed training and batch workloads.
- Implement storage solutions with low-latency, high-throughput performance for AI training, simulations, and real-time data processing.
- Engineer robust data pipelines for ingestion, processing, and large-scale data movement.
- Build multi-tiered storage orchestration (NVMe, SSD, high-capacity) aligned with access patterns and workload needs.
- Implement enterprise-grade backup, snapshot, replication, and disaster recovery systems.
- Develop storage APIs/SDKs for integration with compute platforms and data systems.
- Design monitoring/optimization systems to track performance, capacity utilization, and access patterns.

**Required Skills**

- Kubernetes storage architecture (persistent volumes, storage classes, CSI drivers) and container orchestration.
- Expertise in block/object/file storage, distributed systems, and performance optimization.
- Proficiency in Python (expert level); experience with C/C++, Rust, or Go for critical components.
- Strong Linux systems programming (file systems, storage subsystems, kernel-level interfaces).
- Data pipeline engineering, ETL, and large-scale data processing systems.
- Platform/API design for multi-tenancy, data isolation, and reliability.
- Problem-solving for complex performance/scalability challenges in distributed environments.

**Required Education & Certifications**

Bachelor's degree in Computer Science or related field; relevant certifications (e.g., cloud/storage technologies) preferred but not required.
Chicago, United States
Remote
Senior
11-11-2025