cover image
Crossing Hurdles

Crossing Hurdles

www.crossinghurdles.com

150 Jobs

28 Employees

About the Company

At Crossing Hurdles, we specialise in customised recruitment and staffing solutions designed to drive success for businesses and professionals. Our focus is on connecting organisations with top-tier talent by sourcing, screening, and presenting only the top 1% of candidates across a wide range of industries. We work closely with clients to understand their unique needs, ensuring that we find candidates who not only fit the role but also align with their organizational culture. Over the past few quarters, we've successfully partnered with leading companies such as Angel One, Ixigo, Turing, Cars24, Veera, ABP Network, Battery Smart, Zavya, and Twin Engineers. Our expertise spans various sectors, including Tech, Product, Sales, Customer Support, Growth, Finance, and Marketing. At Crossing Hurdles, our mission is to help organizations thrive by matching them with exceptional talent while simultaneously enabling candidates to find opportunities that foster long-term career growth and development.

Listed Jobs

Company background Company brand
Company Name
Crossing Hurdles
Job Title
AI Penetration Tester | $111/hr Remote
Job Description
**Job Title** AI Penetration Tester **Role Summary** Conduct advanced adversarial testing of AI models and agents, including jailbreak creation, prompt injection, misuse scenario development, and systemic risk assessment. Produce high‑quality annotated data, reproducible reports, and actionable threat intelligence across multiple client projects. **Expectations** * Deliver consistent, rigorous red‑team assessments on an hourly contract basis. * Rapidly absorb new AI technologies and adapt testing frameworks to evolving deployment contexts. * Communicate findings clearly to technical stakeholders through comprehensive documentation and datasets. **Key Responsibilities** 1. Red‑team AI systems by designing and executing jailbreaks, prompt injections, RLHF/DPO attacks, and model extraction exploits. 2. Generate annotated datasets that capture AI failures, classify vulnerabilities, and identify systemic risks. 3. Apply structured taxonomies, benchmarks, and playbooks to maintain consistency in testing across projects. 4. Document test outcomes and create reproducible reports, datasets, and attack case studies. 5. Support diverse projects, including LLM jailbreak testing and socio‑technical abuse scenarios, on a flexible, asynchronous schedule. **Required Skills** * Prior red‑team or adversarial ML experience (AI, cybersecurity, or socio‑technical probing). * Deep understanding of jailbreak datasets, prompt injection techniques, RLHF/DPO attacks, and model extraction methods. * Proficiency in penetration testing, exploit development, reverse engineering, and cybersecurity fundamentals. * Knowledge of socio‑technical risk areas: harassment, disinformation, abuse analysis. * Creative, psychology‑based probing (acting, writing, unconventional adversarial methods). **Required Education & Certifications** * Bachelor’s degree or higher in Computer Science, Cybersecurity, Artificial Intelligence, or related field (preferred). * Relevant certifications such as Certified Ethical Hacker (CEH), Offensive Security Certified Professional (OSCP), or equivalent are advantageous.
Canada
Remote
30-10-2025
Company background Company brand
Company Name
Crossing Hurdles
Job Title
AI Security Researcher | $111/hr Remote
Job Description
**Job Title** AI Security Researcher – Adversarial AI Testing (Red‑Team) **Role Summary** Hourly contract position focusing on adversarial testing of large language models and AI agents. Responsibilities include creating jailbreaks, prompt injections, RLHF/DPO attacks, and model‑extraction scenarios; generating and annotating human‑curated failure data; applying structured taxonomies for consistency; and producing reproducible reports, datasets, and attack case documentation. **Expectations** - Deliver high‑quality annotated datasets and vulnerability classifications. - Produce comprehensive, reproducible attack reports and datasets. - Support multiple LLM and socio‑technical abuse testing projects on an asynchronous schedule. - Adapt quickly to evolving AI risk landscapes and testing methodologies. **Key Responsibilities** - Conduct AI red‑team exercises, fabricating jailbreaks, prompt injections, and extraction techniques. - Generate human data on AI failures, classify vulnerabilities, and flag systemic risks. - Apply structured testing frameworks using taxonomies, benchmarks, and playbooks. - Document findings in detailed reports, datasets, and documented attack cases. - Flexibly support multiple client projects, including LLM jailbreaks and socio‑technical abuse testing. **Required Skills** - Prior AI red‑team or adversarial machine learning experience (jailbreak datasets, prompt injection, RLHF/DPO). - Cybersecurity proficiency: penetration testing, exploit development, reverse engineering. - Experience with socio‑technical risk domains (harassment, disinformation, abuse analysis). - Creative probing techniques (psychology, acting, writing) for unconventional adversarial methods. - Rapid learning ability and strong AI background. **Required Education & Certifications** - Bachelor’s degree in Computer Science, Cybersecurity, AI/ML, or related field (or equivalent experience). - Certifications such as CEH, OSCP, or related AI security credentials preferred but not mandatory.
United states
Remote
30-10-2025
Company background Company brand
Company Name
Crossing Hurdles
Job Title
AI Security Professional | $111/hr Remote
Job Description
Job title: AI Security Professional – AI Red‑Teamer (Adversarial AI Testing) Role Summary: Remote, hourly contract specialist who red‑teams AI models and agents to uncover and document vulnerabilities. Works 10–40 hours/week with flexible, asynchronous scheduling. Expectations: - Deliver high‑quality findings on AI jailbreaks, prompt injection, misuse scenarios, and exploit cases. - Produce actionable reports, datasets, and reusable attack blueprints for multiple client projects. - Adapt to varying scopes (LLM jailbreaks, socio‑technical abuse, RLHF/DPO attacks). Key Responsibilities: 1. Design and execute red‑team attacks against AI models and agents. 2. Annotate AI failures, classify vulnerabilities, and flag systemic risks. 3. Apply structured taxonomies, benchmarks, and playbooks for consistent testing. 4. Document results in reproducible reports and datasets. 5. Support multiple concurrent projects across diverse customers. Required Skills: - Proven red‑team experience (AI adversarial work, cybersecurity penetration testing, or socio‑technical probing). - Deep knowledge of adversarial machine learning: jailbreak datasets, prompt injection, RLHF/DPO attacks, model extraction. - Cybersecurity fundamentals: penetration testing, exploit development, reverse engineering. - Familiarity with socio‑technical risk domains (harassment, disinformation, abuse). - Creative probing techniques (psychology, acting, writing) to craft unconventional adversarial methods. Required Education & Certifications: No explicit educational requirements stated; a technical background in AI, machine learning, cybersecurity, or related fields is essential. Certifications such as OSCP, CEH, or equivalent are advantageous but not mandatory.
United states
Remote
30-10-2025
Company background Company brand
Company Name
Crossing Hurdles
Job Title
Technical Reviewer - RL Environment Terminal Benchmarking | $100/hr Remote
Job Description
Job Title: Technical Reviewer – RL Environment Terminal Benchmarking Role Summary: Deliver expert evaluation of reinforcement learning environments and benchmark pipelines to ensure correctness, reproducibility, fairness, and alignment with agentic AI research goals. Provide detailed technical feedback on code, documentation, and methodology while collaborating with researchers and engineers. Expectations: - 10–40 hours per week, hourly contractor, remote. - Hourly rate $80–$100. - 20‑minute application process and AI interview. Key Responsibilities: - Review RL environment designs and terminal conditions for validity and research alignment. - Assess benchmarking pipelines for fairness, reproducibility, and accuracy across tasks. - Critically evaluate Python codebases (PyTorch/TensorFlow preferred) and accompanying documentation for experimental rigor. - Validate reproducibility across seeds, runs, and hardware configurations. - Collaborate with researchers and engineers to refine evaluation metrics and methodologies. - Document findings, recommend improvements, and maintain clear records of feedback. Required Skills: - Deep knowledge of reinforcement learning, computer science, or applied AI research. - Practical experience with RL environments and benchmark design. - Strong proficiency in reading and reviewing Python code; experience with PyTorch/TensorFlow is advantageous. - Excellent critical‑thinking, problem‑solving, and attention‑to‑detail abilities. - Commitment to experimental reproducibility, fairness, and standardization in agentic AI. Required Education & Certifications: - Bachelor’s degree in Computer Science, Engineering, or related field, or equivalent experience. - Machine learning or reinforcement learning certifications are a plus.
United states
Remote
03-11-2025