**Company:** Hinge
**Job Title:** AI Product Manager, Trust & Safety
**Job Description:**
**Role Summary:**
Own end‑to‑end design and delivery of AI‑driven content moderation and bad‑actor detection systems. Work closely with engineering, design, research, legal, and operations to translate policy into scalable AI workflows, set success metrics, and continuously iterate based on analytics and incident data.
**Expectations:**
- Define and launch real‑time moderation pipelines using LLMs, prompt‑engineering, and vendor tools (e.g., Langfuse).
- Translate Trust & Safety policy into automated workflows covering hate speech, self‑harm, fraud, etc.
- Drive a full product lifecycle: discovery, technical requirement definition, prototype review, launch, and iterative improvement.
- Set and recalibrate KPIs (throughput, false‑positive/negative rates, time‑to‑resolution, user trust scores).
- Split time between AI roadmap planning (30%) and hands‑on strategy and implementation (70%).
- Lead rapid experimentation, A/B testing, error analyses, and post‑mortems to improve safety.
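The moderation pipeline described in the expectations above can be sketched in Python. This is a minimal illustration under stated assumptions, not Hinge's actual stack: the LLM call is replaced by a trivial keyword check, and the `ModerationResult` schema, `POLICY_LABELS`, and `classify_message` names are all hypothetical stand-ins for whatever structured-output contract a real pipeline would use.

```python
from dataclasses import dataclass

# Hypothetical policy labels a real pipeline would cover.
POLICY_LABELS = ("hate_speech", "self_harm", "fraud", "safe")

@dataclass
class ModerationResult:
    """Structured output a production LLM would be prompted to return."""
    label: str
    confidence: float
    escalate: bool  # route to a human reviewer?

def classify_message(text: str, escalation_threshold: float = 0.8) -> ModerationResult:
    """Stand-in for an LLM classification call.

    A real pipeline would send `text` to a model constrained to a
    structured-output schema; here a keyword check plays that role so the
    flow (classify -> threshold -> escalate) is runnable end to end.
    """
    lowered = text.lower()
    if "wire me money" in lowered:
        label, confidence = "fraud", 0.9
    else:
        label, confidence = "safe", 0.99
    # Low-confidence unsafe verdicts go to human review; high-confidence
    # ones can be auto-actioned.
    escalate = label != "safe" and confidence < escalation_threshold
    return ModerationResult(label=label, confidence=confidence, escalate=escalate)
```

In a real deployment the classification step would be traced (e.g., via a platform like Langfuse) so that error analysis and prompt iteration can run against logged inputs and structured verdicts.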
**Key Responsibilities:**
- Partner with AI engineers to design, prototype, and productionise moderation and detection systems.
- Develop automated policy workflows and integrate them into the platform.
- Guide engineering on model tuning, prompt engineering, and threshold decisions (e.g., selfie/ID verification, reporting workflows).
- Establish, monitor, and update safety metrics across releases.
- Coordinate cross‑functional alignment with Design, Research, Legal, Compliance, and Ops.
- Own experiments, prompt engineering pipelines, and continuous safety improvements.
- Document requirements, facilitate meetings, and produce clear, actionable sprint plans.
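The "threshold decisions" responsibility above can be made concrete with a small sketch: given model scores and ground-truth labels from a validation set, choose the lowest cut-off that keeps the false-positive rate within budget. The function name and the single-metric framing are illustrative assumptions; a real decision would weigh several metrics at once.

```python
def pick_threshold(scores, labels, max_fpr=0.01):
    """Return the lowest score threshold whose false-positive rate on a
    validation set is at or below `max_fpr`.

    scores: model confidence that an item violates policy.
    labels: ground truth booleans, True = actually violating.
    """
    negatives = [s for s, y in zip(scores, labels) if not y]
    # Try each observed score as a candidate cut-off, lowest first, so the
    # chosen threshold flags as much as the FPR budget allows.
    for t in sorted(set(scores)):
        false_positives = sum(1 for s in negatives if s >= t)
        fpr = false_positives / len(negatives) if negatives else 0.0
        if fpr <= max_fpr:
            return t
    return 1.0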
**Required Skills:**
- 3+ years product‑management experience in fast‑moving or startup settings.
- Deep technical proficiency with LLMs, prompt engineering, Chain‑of‑Thought, Retrieval Augmented Generation (RAG), and structured outputs.
- Hands‑on Python programming for rapid prototyping.
- Experience with OpenAI or similar APIs; Langfuse or equivalent tracing/error‑analysis platforms; and no/low‑code tools (e.g., n8n).
- Strong ability to translate policy and compliance requirements into engineer‑friendly technical specs.
- Data‑driven mindset: experience setting and iterating metrics, A/B testing, and post‑mortem analysis.
- Excellent communication, documentation, and facilitation skills.
- Comfortable operating in ambiguous environments, managing evolving AI threats and regulatory changes.
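The metrics work in the skills list above (false‑positive/negative rates, post‑mortem analysis) reduces to a small computation. A minimal sketch, assuming flagged/violating decisions are recorded as parallel boolean lists; the function name and return shape are hypothetical.

```python
def safety_kpis(predicted, actual):
    """Compute false-positive and false-negative rates for a moderation run.

    predicted: booleans, True = the system flagged the item.
    actual:    booleans, True = the item actually violated policy.
    """
    fp = sum(1 for p, a in zip(predicted, actual) if p and not a)
    fn = sum(1 for p, a in zip(predicted, actual) if a and not p)
    negatives = sum(1 for a in actual if not a)
    positives = sum(1 for a in actual if a)
    return {
        # Share of benign items wrongly flagged (hurts user trust).
        "false_positive_rate": fp / negatives if negatives else 0.0,
        # Share of violations missed (hurts user safety).
        "false_negative_rate": fn / positives if positives else 0.0,
    }
```

In an A/B test, these rates would be computed per variant and compared before recalibrating thresholds or prompts.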
**Required Education & Certifications:**
- Bachelor’s degree in Computer Science, Engineering, or related field (or equivalent work experience).
- No certifications are mandatory; domain knowledge in AI safety or compliance, or related certifications (e.g., Certified Ethical Emerging Technologist), is a plus.