Job Specifications
Responsible Artificial Intelligence (AI) Operations Lead
12-month contract opportunity with high potential to extend
"Hybrid" work environment: 4 days per week in the downtown Toronto office
"Big 5" Bank.
Job description:
Seeking a senior individual contributor who is "hands-on" with Responsible AI and GenAI governance, can define and defend metrics under pressure, and can stand alone in risk forums in a regulated banking environment.
Seeking someone who thinks like an engineer, speaks like a leader, and defends like a regulator, and who can:
- Go deep technically when challenged
- Speak confidently and clearly in governance and risk forums
- Defend decisions with metrics, not opinions
- Look under the hood of AI systems (especially GenAI)
- Operate comfortably in a regulated banking environment
NOTES
- This is a new role on a new AI team / Centre of Excellence being built from the ground up
- Senior individual contributor who can be put in front of the forum to defend the proposed solution and justify the why
- 6–10 years of total experience
- Minimum 4 years working directly with Responsible AI / AI Governance / Model Risk
- Must be "hands-on" — configuring, measuring, testing, and defending AI systems
- Not a Director. Not a delegator. Not strategy-only
Role Overview
The Responsible AI Operations (RAIOps) Lead is responsible for operationalizing Responsible AI practices across the AI portfolio. This position ensures that AI solutions are ethical, explainable, compliant, and aligned with both regulatory requirements and business objectives.
The RAIOps Lead collaborates with cross-functional teams to embed Responsible AI into every stage of the AI lifecycle, from ideation to production.
Key Responsibilities:
Develop, implement, and maintain Responsible AI frameworks, policies, and controls in alignment with CDAO and enterprise AI
Conduct risk assessments and monitor AI models for bias, drift, and regulatory compliance
Oversee operational reliability, security, and ethical use of AI systems across the AI lifecycle
Collaborate with business, compliance, legal, and technology teams to align AI solutions with organizational guidelines, policies, and regulatory expectations
Lead incident management and escalation for AI-related issues
Drive continuous improvement in Responsible AI practices, including automation, monitoring, and reporting
Liaise with AI, Risk and Governance forums
Present use cases for approval to the applicable forums
Ensure all risk and Responsible AI questionnaires are completed, reviewed, submitted, and updated on the agreed cadence
REQUIRED SKILLS AND TOOLS:
Deep knowledge of AI/ML technologies, regulatory standards, and ethical frameworks.
Stakeholder management and leadership skills.
Proven ability to communicate complex technical concepts to non-technical audiences.
Experience with program/project management and team leadership.
Technical proficiency in AI/ML operations, monitoring, and automation tools (e.g., MLOps platforms, model monitoring solutions, risk assessment tools).
Familiarity with cloud platforms (Azure, AWS, GCP) and data governance tools.
Analytical and problem-solving skills, especially in risk mitigation and compliance.
$110 - $125 per hour for an INC consultant
Based on 37.5 hours per week
If you're interested, please apply with your resume.
If you are a strong fit, I will be in contact quickly with additional information and next steps.
Best regards,
About the Company
Alquemy challenges traditional thinking and drives innovation through expert delivery of IT consultants, contractors and permanent staff in the areas of IT operations, software development, security, infrastructure, cloud technologies, and IT transformation. Our team creates measurable business value to help your organization gain a competitive edge.
Alquemy has grown quickly since opening our doors; today Alquemy operates across Canada and is actively working with the top contract and full-time talent across North America.