Anaplan

Associate AI QA Engineer

On-site

Manchester, United Kingdom

Full Time

17-12-2025

Skills

Communication, Python, JavaScript, Jira, CI/CD, Monitoring, Testing, Quality Assurance, Selenium, Test Automation, Security Testing, Performance Testing, Problem-solving, Decision-making, Attention to Detail, Regression Testing, Software Testing, API Testing, Postman, CI/CD Pipelines

Job Specifications

At Anaplan, we are a team of innovators focused on optimizing business decision-making through our leading AI-infused scenario planning and analysis platform, so our customers can outpace their competition and the market.

What unites Anaplanners across teams and geographies is our collective commitment to our customers’ success and to our Winning Culture.

Our customers rank among the who’s who in the Fortune 50. Coca-Cola, LinkedIn, Adobe, LVMH and Bayer are just a few of the 2,400+ global companies that rely on our best-in-class platform.

Our Winning Culture is the engine that drives our teams of innovators. We champion diversity of thought and ideas, we behave like leaders regardless of title, we are committed to achieving ambitious goals, and we love celebrating our wins – big and small.

Supported by operating principles of being strategy-led, values-based and disciplined in execution, you’ll be inspired, connected, developed and rewarded here. Everything that makes you unique is welcome; join us and let’s build what’s next - together!

We're pioneering a new role focused exclusively on quality assurance for GenAI systems. As our AI QA Engineer, you'll develop testing strategies, evaluation frameworks, and quality metrics specifically designed for LLM-powered applications. This role requires a unique blend of QA expertise, understanding of GenAI behaviour, and automation skills to ensure our AI features are reliable, accurate, and trustworthy.

Your Impact

Design and implement comprehensive testing strategies for GenAI features, including conversational AI, agentic systems, and LLM-powered workflows
Develop automated test suites for prompt testing, including regression tests that detect unintended changes in model behaviour
Create evaluation frameworks to measure GenAI quality across multiple dimensions (accuracy, relevance, safety, consistency, latency)
Build and maintain test datasets and golden examples that represent diverse user scenarios and edge cases
Implement monitoring and alerting systems to detect quality degradation in production GenAI features
Perform adversarial testing to identify potential failures, hallucinations, biases, or security vulnerabilities in AI systems
Collaborate with engineers to define acceptance criteria and quality gates for AI feature releases
Develop tools and frameworks that make it easy for engineers to test their GenAI implementations
Conduct user acceptance testing and gather feedback on AI feature performance from internal users
Document testing procedures, known issues, and quality metrics in clear, accessible formats
Partner with Product and Design teams to ensure AI features meet user experience standards
Stay current with GenAI testing methodologies, tools, and industry best practices

Your Qualifications

QA or test engineering experience, preferably with AI/ML systems
Strong understanding of GenAI technologies including LLMs, prompt engineering, and AI application patterns
Experience with test automation frameworks and scripting (Python, JavaScript, Selenium, Pytest)
Knowledge of software testing methodologies (functional, integration, regression, performance, security testing)
Ability to design test cases and evaluation criteria for non-deterministic systems
Strong analytical and problem-solving skills with attention to detail
Experience with API testing tools (Postman, REST Assured) and backend testing
Familiarity with CI/CD pipelines and automated testing integration
Excellent communication skills for documenting issues and collaboration

Preferred Qualifications

Experience testing conversational AI, chatbots, or agentic systems
Knowledge of ML model evaluation metrics and techniques
Familiarity with LLM evaluation frameworks (LangSmith, PromptFoo, Ragas)
Experience with performance testing and load testing AI APIs
Understanding of responsible AI principles, including fairness, transparency, and safety testing
Background in enterprise software or SaaS QA
Experience with test management tools (TestRail, Zephyr, Jira)
Knowledge of security testing methodologies for AI systems
Scripting experience with Python, including working with LLM APIs

What Makes This Role Exciting

Define QA practices for GenAI applications
Work on cutting-edge AI technologies and help ensure they're reliable and trustworthy
Shape quality standards that will impact millions of enterprise users
Collaborate closely with engineers, data scientists, and product teams
Grow expertise in a highly specialized and increasingly important domain
Influence the entire AI product development lifecycle from design to release
Join a team that values quality as a first-class concern, not an afterthought

Our Commitment to Diversity, Equity, Inclusion and Belonging (DEIB)

We believe attracting and retaining the best talent and fostering an inclusive culture strengthens our business. DEIB improves our workforce, enhances trust with our partners and customers, and drives business success.

About the Company

Anaplan is the only scenario planning and analysis platform designed to optimize decision-making in today’s complex business environment so that enterprises can outpace their competition and the market. By building connections and collaboration across organizational silos, our platform intelligently surfaces key insights — so businesses can make the right decisions, right now.