cover image
GoFundMe

Adversarial AI Engineer

Hybrid

San francisco, United states

$ 271,000 /year

Junior

Full Time

20-09-2025

Share this job:

Skills

Leadership Python Penetration Testing Incident Response Encryption CI/CD Monitoring Security Testing Research Training Architecture Security Architecture Machine Learning PyTorch TensorFlow Programming Organization Azure AWS GCP Data Science python programming

Job Specifications

Want to help us help others? We're hiring!

GoFundMe is the world's most powerful community for good, dedicated to helping people help each other. By uniting individuals and nonprofits in one place, GoFundMe makes it easy and safe for people to ask for help and support causes--for themselves and each other. Together, our community has raised more than $40 billion since 2010. Join us!

Every day, millions turn to us in their most vulnerable moments, and our technology must be safe, resilient, and trustworthy. We're looking for an Adversarial AI Engineer who combines offensive security expertise with machine learning depth. You will execute red team testing against our AI systems, build automated adversarial evaluation pipelines, and deploy defense mechanisms to keep our AI safe. Beyond technical execution, you'll influence governance, partner with cross-functional stakeholders, and establish GoFundMe as a thought leader in AI security.

This role is foundational. You'll lead red team operations and strengthen our AI/ML systems that power fraud detection, content moderation, and Trust & Safety at scale. You will also design adversarial testing frameworks, deploy real-time defenses, and set governance for secure AI deployment across GoFundMe's platform--protecting billions in charitable giving from evolving attack vectors.

Candidates considered for this role will be located in the San Francisco, Bay Area. There will be an in-office requirement of 3x a week.

The Job

Adversarial Testing & Red Teaming
Execute adversarial testing of LLMs, Agentic AI systems, recommendation models, and fraud detection tools using techniques such as prompt injection, jailbreaking, data poisoning, model inversion, and membership inference attacks.
Develop synthetic attack datasets tailored to fundraising and trust & safety scenarios.
Build Security Frameworks & Defenses
Develop automated adversarial testing pipelines integrated into CI/CD, create reusable robustness evaluation libraries, and generate synthetic attack datasets for fundraising scenarios.
Build reusable robustness evaluation libraries.
Deploy real-time detection for prompt injection and model evasion, implement input validation, output filtering, adversarial training, and differential privacy mechanisms.
Cross-Functional Security Leadership
Partner with Trust & Safety on fraud countermeasures.
Collaborate with Product Security on AI threat models and governance.
Work with Data Science to mitigate algorithmic bias and ensure robust defense.
Governance & Policy
Establish AI security policies, training, and deployment review processes aligned with NIST AI RMF.
Build monitoring and incident response systems for AI security.
Research & Innovation
Stay current with emerging attack vectors and defense mechanisms.
Contribute to open-source adversarial tools.
Publish and speak externally to advance GoFundMe's leadership in AI security.

You

6-8 years in cybersecurity with a focus on AI/ML security or adversarial ML.
2+ years specialized LLM security experience (prompt injection, jailbreaking, adversarial prompt crafting).
Proven red team / penetration testing background on AI systems.
Strong Python programming with ML frameworks (TensorFlow, PyTorch, Hugging Face).
Deep understanding of ML fundamentals, Neural Networks, transformers (GPT, LLaMA, Claude, BERT) and known vulnerabilities.
Experience testing Agentic AI security testing including agent frameworks (LangGraph, AutoGen, CrewAI, Google ADK, Pydantic AI).
Skilled in adversarial attack methods: data poisoning, model evasion, membership inference, model extraction.
Knowledge of defense mechanisms: adversarial training, input sanitization, differential privacy, robustness certification.
Hands-on adversarial attack experience: data poisoning, model evasion, membership inference, model extraction.
Familiarity with OWASP Top 10 for LLMs, MITRE ATLAS, NIST AI RMF.
Experience with threat modeling, security architecture, and cloud controls (AWS, GCP, Azure).

Preferred

Multimodal AI security experience (vision-language, audio).
Background in financial services, fintech, or sensitive transaction platforms.
AI compliance, audit, and regulatory experience.
Published AI security research or open-source contributions.
Trust & Safety or fraud detection systems experience.
Privacy-preserving ML techniques (federated learning, homomorphic encryption).

Why you'll love it here

Make an Impact: Be part of a mission-driven organization making a positive difference in millions of lives every year.
Innovative Environment: Work with a diverse, passionate, and talented team in a fast-paced, forward-thinking atmosphere.
Collaborative Team: Join a fun and collaborative team that works hard and celebrates success together.
Competitive Benefits: Enjoy competitive pay and comprehensive healthcare benefits.
Holistic Support: Enjoy financial assistance for things like hybrid work, family planning, along with generous parental leave, flexible time-

About the Company

There are a billion good intentions tucked inside each and every one of us. At GoFundMe, we believe that the impulse to help a person, fix a neighborhood, or change a nation should never be ignored. In fact, it should be shared with the entire world. That's why we make it easy to inspire the world and turn your compassion into action. By giving people the tools they need to capture and share their story far and wide, we have built a community of more than 200 million donors and helped organizers raise over $15 billion for t... Know more