Job Specifications
We’re looking for an AI Governance Consultant to work closely with a major Energy & Utilities client and JBS leadership. This is a hands-on contract consulting role that combines AI governance, risk management, and practical enablement for GenAI / agentic platforms.
Role Overview
You will help define and run AI governance for a federated GenAI / agentic platform:
• A central AI platform team owns the core agentic AI capabilities.
• Multiple business units bring their own AI developers to build solutions on top.
Your mandate:
Enable fast, self-service innovation while keeping risk, security, and compliance under control.
You’ll partner with the client’s Head of AI / platform leaders, security and legal teams, and JBS delivery teams. Occasional on-site visits to Houston, TX will be required for workshops, stakeholder sessions, and key project milestones.
Key Responsibilities
1. Federated AI Governance & Operating Model
• Design and refine a federated governance model for GenAI and agentic solutions (central guardrails + BU self-service).
• Define roles and responsibilities across the central platform team, business units, security, legal, and business sponsors.
• Balance governance against speed; focus on “guardrails, not gates.”
2. Risk Assessment, Scoring & Registry
• Implement and refine a risk scoring process for AI/GenAI solutions (e.g., capabilities, tool type, data sensitivity, impact).
• Establish thresholds where high-risk solutions must engage the AI governance function.
• Help developers self-assess low/medium-risk solutions, with clear escalation paths.
• Stand up and maintain a registry of AI solutions & POCs capturing:
  • Use case, owner, data/model details
  • Risk score, approvals, lifecycle stage, and review history
3. Adversarial Testing & Observability
• Define standard adversarial testing templates for LLM/GenAI use cases:
  • Jailbreaks, prompt injection, harmful/violent content, PII leakage, bias, hallucinations, etc.
• Collaborate with platform and engineering teams to design solution-specific adversarial tests.
• Partner with the client’s platform/monitoring teams on AI observability:
  • Monitoring for jailbreaks and harmful output
  • Logging, metrics, and reporting for stakeholders
• Educate stakeholders on how to interpret safety and risk reports (e.g., harmful content flags, thumbs-down metrics).
4. POC & Trial Governance
• Define lightweight governance for POCs and trials:
  • Approved tools/platforms
  • Simplified risk profiles
  • Explicit risk acceptance by department heads for POCs
• Help set up logging and approval flows so the client can track:
  • Who is building what, with which data and tools, under whose approval.
5. AI Governance Hub & Automation
• Help enhance a SharePoint-based AI Governance Hub (or similar platform) that includes:
  • Policies, risk standards, templates, FAQs, and POC guidelines.
• Collaborate with platform / automation teams to:
  • Automate form submissions for risk profiles and tool evaluation requests.
  • Build simple apps/agents (e.g., in Copilot Studio, Power Apps, or similar) to guide users through governance processes.
6. Tooling & Build vs. Buy
• Evaluate third-party AI governance tools (e.g., OneTrust or similar) vs. in-house approaches:
  • Map requirements, assess out-of-the-box coverage, and identify gaps.
• Work with Security, Legal, and IT to define security and lifecycle scoring for AI agents and models.
• Provide pragmatic recommendations that balance control, cost, and time-to-value.
7. Stakeholder Engagement & Workshops
• Run workshops and working sessions (in-person and virtual) with:
  • AI platform team, BU dev teams, security, legal, compliance, and business leaders.
• Prepare and deliver training sessions for:
  • Developers (how to use risk tools, testing templates, and observability dashboards).
  • Business owners (how to interpret risk scores and reports).
  • Executives (how governance enables safe GenAI adoption).
Required Qualifications
• Location: Based in the US, with eligibility to work as an independent contractor or through a contracting firm.
• Experience:
  • 8+ years in Data/AI, Information Security, Risk, Compliance, or related fields.
  • At least 3 years in AI/ML, AI governance, model risk, or Responsible AI.
  • Demonstrated experience designing or running governance for AI/ML or GenAI solutions (not just traditional IT).
• Strong understanding of:
  • Risk assessment and scoring for AI models/solutions.
  • Adversarial testing / red-teaming concepts for LLMs (jailbreaks, prompt injection, harmful content, data leakage).
  • Lifecycle governance across POC → pilot → production.
• Proven ability to translate governance concepts into practical workflows:
  • Templates, checklists, forms, and standard operating procedures.
• Excellent communication and facilitation skills with cross-functional teams:
  • Security, legal, engineering, business, and executives.
• Comfortable working with distributed teams.