AI Security Institute

Principal GRC Engineer

On site

London, United Kingdom

Senior

Full Time

27-09-2025

Skills

Risk Management, CI/CD, Monitoring, Research

Job Specifications

About The AI Security Institute

The AI Security Institute is the world's largest and best-funded team dedicated to understanding advanced AI risks and translating that knowledge into action. We're in the heart of the UK government with direct lines to No. 10, and we work with frontier developers and governments globally.

We're here because governments are critical for advanced AI going well, and UK AISI is uniquely positioned to mobilise them. With our resources, unique agility and international influence, this is the best place to shape both AI development and government action.

About The Team

Security Engineering at the AI Security Institute (AISI) exists to help our researchers move fast, safely. We are founding the Security Engineering team in a largely greenfield cloud environment, and we treat security as a measurable, researcher-centric product.

We build secure-by-design platforms, automated governance, and intelligence-led detection that protect our people, partners, models, and data. We work shoulder to shoulder with research units and core technology teams, and we value enablement over gatekeeping, proportionate controls, low ego, and high ownership.

What You Might Work On

Help design and ship paved roads and secure defaults across our platform so researchers can build quickly and safely
Build provenance and integrity into the software supply chain (signing, attestation, artefact verification, reproducibility); see the sketch after this list
Support strengthened identity, segmentation, secrets, and key management to create a defensible foundation for evaluations at scale
Develop automated, evidence-driven assurance mapped to relevant standards, reducing audit toil and improving signal
Create detections and response playbooks tailored to model evaluations and research workflows, and run exercises to validate them
Threat model new evaluation pipelines with research and core technology teams, fixing classes of issues at the platform layer
Assess third party services and hardware/software supply chains; introduce lightweight controls that raise the bar
Contribute to open standards and open source, and share lessons with the broader community where appropriate
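
As a flavour of the supply-chain item above, here is a minimal sketch of gating artefact promotion on an integrity check: recompute a build artefact's digest and compare it against an attested value. Everything specific here is an illustrative assumption – the artefact path, the placeholder digest, and the promotion step are hypothetical, and real pipelines would rely on dedicated signing/attestation tooling (e.g. Sigstore) rather than hand-rolled checks.

```python
import hashlib
import hmac
from pathlib import Path


def sha256_digest(path: Path) -> str:
    """Stream a file and return its hex-encoded SHA-256 digest."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()


def verify_artefact(path: Path, attested_digest: str) -> bool:
    """Return True only if the artefact matches its attested digest.

    In a real pipeline the attested value would come from a signed
    provenance document produced at build time, not a literal.
    """
    return hmac.compare_digest(sha256_digest(path), attested_digest)


if __name__ == "__main__":
    artefact = Path("dist/eval-runner.tar.gz")  # hypothetical artefact path
    attested = "0" * 64  # placeholder; a real value comes from the attestation
    if not artefact.exists():
        raise SystemExit(f"artefact not found: {artefact}")
    if verify_artefact(artefact, attested):
        print("integrity check passed; artefact may be promoted")
    else:
        raise SystemExit("digest mismatch; blocking promotion")
```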

If you want to build security that accelerates frontier-scale AI safety research, and see your work land in production quickly, this is a good place to do it.

Role Summary

Own and operationalise AISI's governance, risk, and compliance (GRC) engineering practice. This role sits at the intersection of security engineering, assurance, and policy, turning paper-based requirements into actionable, testable, and automatable controls. You will lead the technical response to GovAssure and other regulatory requirements, and ensure compliance is continuous and evidence-driven. You will also extend GRC disciplines to frontier AI systems, integrating model lifecycle artefacts, evaluations, and release gates into the control and evidence pipeline.
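
As a rough sketch of what "continuous and evidence-driven" can mean in practice, the example below expresses one control as code and emits a timestamped, machine-readable evidence record that an assurance pipeline could collect, instead of a point-in-time audit answer. The control identifier, configuration shape, and evidence schema are illustrative assumptions, not AISI's actual tooling.

```python
import json
from datetime import datetime, timezone

# Hypothetical storage configuration, e.g. rendered from a GitOps repository.
STORAGE_CONFIG = {
    "bucket": "eval-artefacts",
    "encryption_at_rest": True,
    "public_access": False,
}


def check_storage_control(config: dict) -> dict:
    """One control as code: evaluation storage must be encrypted and private.

    Returns a machine-readable evidence record rather than a bare pass/fail,
    so the result can be collected by an automated assurance pipeline.
    """
    passed = (
        config.get("encryption_at_rest") is True
        and config.get("public_access") is False
    )
    return {
        "control_id": "STOR-001",  # illustrative identifier, not a real mapping
        "description": "Evaluation artefact storage is encrypted and non-public",
        "result": "pass" if passed else "fail",
        "observed": {
            k: config.get(k) for k in ("encryption_at_rest", "public_access")
        },
        "checked_at": datetime.now(timezone.utc).isoformat(),
    }


if __name__ == "__main__":
    evidence = check_storage_control(STORAGE_CONFIG)
    print(json.dumps(evidence, indent=2))  # in practice, ship to an evidence store
```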

Responsibilities

Translate regulatory frameworks (e.g. GovAssure, CAF) into programmatic controls and technical artefacts
Build and maintain a continuous control validation and evidence pipeline
Develop and own a capability-based risk management approach aligned to AISI's delivery model
Maintain the AISI risk register and risk acceptance/exception handling process
Act as the key interface for DSIT governance, policy, and assurance stakeholders
Work cross-functionally to ensure risk and compliance are embedded into AISI delivery lifecycles
Extend controls and evidence to the frontier AI model lifecycle
Integrate AI safety evidence (e.g., model/dataset documentation, evaluations, red-team results, release gates) into automated compliance workflows
Define and implement controls for model weights handling, compute governance, third-party model/API usage, and model misuse/abuse monitoring
Support readiness for AI governance standards and regulations (e.g., NIST AI RMF, ISO/IEC 42001, ISO/IEC 23894; EU AI Act exposure where relevant)

Profile Requirements

Staff- or Principal-level engineer or technical GRC specialist
Experience in compliance-as-code, control validation, or regulated cloud environments
Familiar with YAML, GitOps, structured artefacts, and automated policy checks
Equally confident in engineering meetings and policy/gov forums
Practical understanding of frontier AI system risks and artefacts (e.g., model evaluations, red-teaming, model/dataset documentation, release gating, weights handling) sufficient to translate AI policy into controls and machine-checkable evidence
Desirable: familiarity with MLOps tooling (e.g., experiment tracking, model registries) and integrating ML artefacts into CI/CD or evidence pipelines

Key Competencies

Translating policy into technical controls
Designing controls as code or machine-checkable evidence
Familiarity with frameworks (GovAssure, CAF, NIST) and AI governance standards (NIST AI RMF, ISO/IEC 42001, ISO/IEC 23894)
Experience building risk management workflows, including for AI-specific risks (e.g. model misuse)

About the Company

We're building a team of world-leading talent to advance our understanding of frontier AI and strengthen protections against the risks it poses – come and join us: https://www.aisi.gov.uk/. AISI is part of the UK Government's Department for Science, Innovation and Technology.