Job Specifications
About Anthropic
Anthropic’s mission is to create reliable, interpretable, and steerable AI systems. We want AI to be safe and beneficial for our users and for society as a whole. Our team is a quickly growing group of committed researchers, engineers, policy experts, and business leaders working together to build beneficial AI systems.
About The Role
You want to build and run elegant and thorough machine learning experiments to help us understand and steer the behavior of powerful AI systems. You care about making AI helpful, honest, and harmless, and are interested in the ways that this could be challenging in the context of human-level capabilities. You could describe yourself as both a scientist and an engineer. As a Research Engineer on the Alignment Science team, you'll contribute to exploratory experimental research on AI safety, with a focus on risks from powerful future systems (like those we would designate as ASL-3 or ASL-4 under our Responsible Scaling Policy), often in collaboration with other teams including Interpretability, Fine-Tuning, and the Frontier Red Team.
Our blog provides an overview of topics that the Alignment Science team is either currently exploring or has previously explored. For the London team, we are opportunistically hiring for the following research areas:
AI Control: Creating methods to ensure advanced AI systems remain safe and harmless in unfamiliar or adversarial scenarios.
Alignment Stress-testing: Creating model organisms of misalignment to improve our empirical understanding of how alignment failures might arise.
Note: Currently, the team's hub is in San Francisco, so we require all candidates to be based in London at least 25% of the time and to travel to San Francisco occasionally. Additionally, we are prioritizing growing our San Francisco teams, so you may not hear back on your application to the London team unless we see an unusually strong fit. For this role, we conduct all interviews in Python.
Representative Projects
Test the robustness of our safety techniques by training language models to subvert them, and see how effective they are at doing so.
Run multi-agent reinforcement learning experiments to test out techniques like AI Debate.
Build tooling to efficiently evaluate the effectiveness of novel LLM-generated jailbreaks.
Write scripts and prompts to efficiently produce evaluation questions to test models’ reasoning abilities in safety-relevant contexts.
Contribute ideas, figures, and writing to research papers, blog posts, and talks.
Run experiments that feed into key AI safety efforts at Anthropic, like the design and implementation of our Responsible Scaling Policy.
You May Be a Good Fit If You
Have significant software, ML, or research engineering experience
Have some experience contributing to empirical AI research projects
Have some familiarity with technical AI safety research
Prefer fast-moving collaborative projects to extensive solo efforts
Pick up slack, even if it goes outside your job description
Care about the impacts of AI
Strong Candidates May Also
Have experience authoring research papers in machine learning, NLP, or AI safety
Have experience with LLMs
Have experience with reinforcement learning
Have experience with Kubernetes clusters and complex shared codebases
Candidates Need Not Have
100% of the skills needed to perform the job
Formal certifications or education credentials
The expected base compensation for this position is below. Our total compensation package for full-time employees includes equity, benefits, and may include incentive compensation.
Annual Salary
£250,000—£270,000 GBP
Logistics
Education requirements: We require at least a Bachelor's degree in a related field or equivalent experience.
Location-based hybrid policy: Currently, we expect all staff to be in one of our offices at least 25% of the time. However, some roles may require more time in our offices.
Visa sponsorship: We do sponsor visas! However, we aren't able to successfully sponsor visas for every role and every candidate. But if we make you an offer, we will make every reasonable effort to get you a visa, and we retain an immigration lawyer to help with this.
We encourage you to apply even if you do not believe you meet every single qualification. Not all strong candidates will meet every single qualification as listed. Research shows that people who identify as being from underrepresented groups are more prone to experiencing imposter syndrome and doubting the strength of their candidacy, so we urge you not to exclude yourself prematurely and to submit an application if you're interested in this work. We think AI systems like the ones we're building have enormous social and ethical implications. We think this makes representation even more important, and we strive to include a range of diverse perspectives on our team.
How We're Different
We believe that the highest-impact AI research will be big science. At Anthropic we work as a single cohesive team on just a few large-scale research efforts.
About the Company
We're an AI research company that builds reliable, interpretable, and steerable AI systems. Our first product is Claude, an AI assistant for tasks at any scale.
Our research interests span multiple areas including natural language, human feedback, scaling laws, reinforcement learning, code generation, and interpretability.