NAVA Software Solutions

www.navasoftware.com

2 Jobs

141 Employees

About the Company

At NAVA Software Solutions, we help enterprises, product companies, and Global Capability Centers (GCCs) scale with AI. Headquartered in Connecticut with operations across the USA, Mexico, and India, we specialize in building AI-enabled platforms, modernizing legacy systems, and delivering automation-driven services across cloud, data, and software quality. Whether you're applying AI at scale, modernizing platforms, or building offshore engineering capabilities, we bring the strategy, execution, and global delivery to make it happen.

What we do:
* AI-Powered Product Engineering
* GenAI Solutions & Accelerators
* Cloud & Data Modernization (AWS, Azure)
* AI-Enabled Software Quality & Test Automation
* Digital CX & Enterprise Application Modernization
* Salesforce & AWS Integration

Engagement models: BOT (Build-Operate-Transfer) | POD (Product-Oriented Delivery) | Hybrid | Managed Services | GCC-as-a-Service

Industries: Healthcare | Financial Services | Manufacturing | Supply Chain & Logistics | Biotechnology | Energy & Utilities | Oil & Gas

Why NAVA?
- Execution-ready AI teams and accelerators
- Deep expertise across enterprise-scale cloud and data platforms
- Proven delivery models for scaling innovation securely and efficiently
- Experience delivering AI-driven solutions for real-world use cases

Let's connect if you're looking to apply AI at scale, modernize platforms, or build your next engineering center.

Listed Jobs

Company Name
NAVA Software Solutions
Job Title
AI Governance Specialist (W2 Contract)
Job Description
Role Summary
Develop, implement, and maintain AI governance frameworks, policies, and controls to ensure responsible, ethical, and compliant use of AI across the organization. Collaborate with cross-functional teams to embed governance practices, conduct risk assessments, monitor AI systems, and provide training and incident management.

Expectations
- Deliver robust AI governance structures aligned with global regulations (EU AI Act, US Executive Order, ISO/IEC 42001, NIST AI RMF).
- Ensure continuous compliance, risk mitigation, and operational excellence for AI projects.
- Foster a culture of responsible AI through training, awareness, and stakeholder engagement, achieving measurable governance metrics.

Key Responsibilities
1. Develop and update AI governance frameworks, ethical guidelines, and operational standards.
2. Define and manage AI model lifecycle processes: design, development, validation, deployment, monitoring, retirement.
3. Conduct risk and impact assessments (AI RIA, bias audits), map risks to mitigation actions, and document Model Risk Management (MRM) procedures (an illustrative bias-audit sketch follows this listing).
4. Partner with legal, privacy, data science, engineering, HR, and security teams to embed governance into workflows and committees.
5. Design KPIs and dashboards for model health, fairness, transparency, and performance; report metrics to leadership and regulators.
6. Create and deliver training on responsible AI practices, policy interpretation, and regulatory compliance.
7. Manage AI incident response, investigate breaches or failures, and recommend corrective actions.

Required Skills
- Deep knowledge of AI/ML concepts, model lifecycle, and data principles.
- Expertise in AI regulations and standards: EU AI Act, ISO/IEC 42001, NIST AI Risk Management Framework, SR 11-7, GDPR, DPDP Act, HIPAA.
- Risk assessment, compliance audit, and incident management capabilities.
- Strong communication, stakeholder management, and policy translation skills.
- Analytical, documentation, and process-oriented mindset.

Required Education & Certifications
- Bachelor's or Master's in Computer Science, Data Science, Law, Public Policy, Information Security, or related field.
- 3–10+ years in governance, risk, compliance, AI/ML, or technology security.
- Preferred certifications: Responsible AI/AI Ethics, ISO/IEC 42001 Practitioner, CDMP, CIPP, or equivalent.
United States
Remote
Junior
26-12-2025
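The bias-audit responsibility referenced above often reduces to a small, auditable metric. Below is a minimal Python sketch of one such metric, the disparate impact ratio; the 0.8 "four-fifths rule" threshold, the function name, and the sample data are illustrative assumptions, not a procedure prescribed by this posting.

```python
# Minimal sketch of a bias-audit check of the kind the governance role describes.
# The metric (disparate impact ratio), the 0.8 threshold (the common "four-fifths
# rule"), and the sample data are illustrative assumptions only.

from collections import defaultdict

def disparate_impact(outcomes, protected_attr, positive_label=1):
    """Ratio of positive-outcome rates between the least- and most-favored
    groups (1.0 = parity; values below 0.8 commonly trigger review)."""
    counts = defaultdict(lambda: [0, 0])  # group -> [positives, total]
    for label, group in zip(outcomes, protected_attr):
        counts[group][0] += int(label == positive_label)
        counts[group][1] += 1
    rates = {g: pos / total for g, (pos, total) in counts.items() if total}
    return min(rates.values()) / max(rates.values())

if __name__ == "__main__":
    # Hypothetical model decisions and group membership, for illustration only.
    decisions = [1, 0, 1, 1, 0, 1, 0, 0, 0, 1]
    groups    = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]
    ratio = disparate_impact(decisions, groups)
    print(f"disparate impact ratio: {ratio:.2f}")
    if ratio < 0.8:
        print("below the four-fifths threshold: escalate for bias review")
```

In practice a check like this would run against logged model decisions and feed the fairness KPIs and dashboards the role is responsible for reporting.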
Company Name
NAVA Software Solutions
Job Title
AI/ML Engineer
Job Description
**Job Title**
Senior AI/ML Engineer – Platform & Infrastructure

**Role Summary**
Design, develop, and operate end-to-end machine learning and generative AI platforms at scale. Build real-time and batch ML pipelines, manage cloud-native infrastructure, and ensure robust deployment, monitoring, and annotation workflows across AWS and Kubernetes environments.

**Expectations**
- Deliver scalable, reliable, and cost-efficient ML pipelines that meet production SLAs.
- Continuously improve model training, inference, and lifecycle processes.
- Collaborate across data science, platform engineering, and product teams to integrate cutting-edge AI capabilities.

**Key Responsibilities**
- Design and implement scalable ML pipelines with Python, Java, PySpark, and Apache Spark (batch and streaming).
- Build and operate real-time data processing systems using Spark Structured Streaming (a minimal sketch follows this listing).
- Deploy and manage GenAI and ML infrastructure on AWS (EMR, EKS, EC2, S3).
- Operate Kubernetes-based ML platforms on EKS, including model serving and annotation systems.
- Implement annotation workflows and manage annotation platform deployments on EKS.
- Optimize distributed storage and low-latency access with Cassandra (or equivalent NoSQL).
- Build production-grade ML workflows using Databricks.
- Ensure reliability, scalability, security, and cost optimization of ML platforms.
- Implement MLOps practices: CI/CD, monitoring, logging, and model lifecycle management.
- Support experimentation, training, inference, and evaluation of ML and GenAI models.

**Required Skills**
- Proficiency in Python and Java.
- Hands-on experience with PySpark, Apache Spark, Spark Streaming, and Structured Streaming.
- Advanced knowledge of AWS services: EMR, EKS, EC2, S3.
- Production deployment of systems on Kubernetes (EKS).
- Experience with Cassandra or other distributed NoSQL databases.
- Strong experience with Databricks for ML and data engineering workflows.
- Deep understanding of ML infrastructure, MLOps, and GenAI systems.
- Experience with annotation pipelines and tooling.
- Understanding of distributed systems, performance tuning, and fault tolerance.

**Optional/Nice to Have**
- Experience with LLMs, GenAI pipelines, or model serving frameworks.
- Infrastructure as Code (Terraform, CloudFormation).
- Monitoring tools (Prometheus, Grafana, CloudWatch).
- Large-scale, multi-tenant ML platform experience.

**Required Education & Certifications**
- Bachelor's or higher degree in Computer Science, Software Engineering, or related field (advanced degrees preferred).
- Relevant certifications (e.g., AWS Certified Solutions Architect, Certified Kubernetes Administrator, Databricks Certified Associate) are advantageous.
Columbus, United States
Hybrid
08-01-2026
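As a companion to the Spark Structured Streaming responsibility listed above, here is a minimal PySpark sketch of a windowed streaming aggregation. The built-in rate source, the column names, and the console sink are stand-ins for the Kafka/S3/Cassandra/Databricks integrations the posting mentions, so treat it as an illustrative sketch rather than the team's actual pipeline.

```python
# Minimal sketch of the kind of streaming feature pipeline the role describes:
# a Spark Structured Streaming job aggregating events into windowed features.
# Source, column names, and sink are illustrative stand-ins, not the real stack.

from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = (
    SparkSession.builder
    .appName("feature-stream-sketch")
    .getOrCreate()
)

# The built-in "rate" source emits (timestamp, value) rows; a production job
# would typically use spark.readStream.format("kafka") with the relevant topic.
events = spark.readStream.format("rate").option("rowsPerSecond", 10).load()

# Derive a hypothetical entity key and aggregate per 1-minute event-time window,
# with a watermark so late data is bounded.
features = (
    events
    .withColumn("entity_id", F.col("value") % 5)
    .withWatermark("timestamp", "2 minutes")
    .groupBy(F.window("timestamp", "1 minute"), "entity_id")
    .agg(F.count("*").alias("event_count"), F.avg("value").alias("avg_value"))
)

# Console sink keeps the sketch self-contained; a real pipeline would write to
# Cassandra, Delta tables on Databricks, or an online feature store instead.
query = (
    features.writeStream
    .outputMode("update")
    .format("console")
    .option("truncate", "false")
    .start()
)

query.awaitTermination()
```

A production variant would typically read from Kafka and write to Delta tables or Cassandra (for example via foreachBatch), in line with the stack listed in the posting.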