**Company Name:** w3r Consulting
**Job Title:** Principal Information Security Engineer – AI
**Job Description:**
**Role Summary:**
Lead the design, implementation, and oversight of security measures for AI systems, ensuring resilience against threats, alignment with emerging regulations, and integration of secure practices throughout the AI lifecycle.
**Expectations:**
- Architect robust security frameworks for AI infrastructure.
- Conduct thorough vulnerability assessments and penetration testing on AI models.
- Drive secure AI development by integrating best practices into training, validation, and deployment.
- Monitor and apply industry standards and regulations (e.g., NIST, ISO, GDPR), as well as emerging AI-specific security regulations.
- Lead research, experimentation, and proof‑of‑concept projects to innovate security solutions.
- Mentor and train cross‑functional teams on AI security principles.
**Key Responsibilities:**
1. **Security Architecture Design** – Build secure frameworks, codify secure coding practices, and enforce design principles for AI systems.
2. **Vulnerability Assessment** – Deploy and maintain penetration testing tools; identify & remediate security gaps in AI models and underlying infrastructure.
3. **Secure AI Development** – Collaborate with data scientists and software engineers to embed security into the AI development lifecycle, covering model training, validation, and deployment.
4. **Compliance & Standards** – Track and implement emerging AI security standards, regulations, and best‑practice guidelines (NIST, ISO, GDPR, etc.).
5. **Research & Innovation** – Conduct research on AI security threats; develop and prototype new security solutions.
6. **Documentation & Reporting** – Produce SOPs, protocols, and detailed security reports with actionable recommendations.
7. **Advisory & Support** – Serve as subject‑matter expert, addressing security queries and advising on best practices.
8. **Technical Training & Mentorship** – Deliver training sessions and mentor team members on AI security.
9. **Experimentation & POCs** – Design and run experiments and proofs of concept to evaluate new threats and validate mitigation techniques; lead R&D initiatives.
**Required Skills:**
- Advanced knowledge of AI/ML technologies, algorithms, and model architectures.
- Expertise in secure coding, encryption, access controls, and secure system design.
- Proficiency with vulnerability assessment and penetration testing tools (e.g., Metasploit, Burp Suite, custom AI‑specific tools).
- Strong analytical and problem‑solving abilities; proven track record in conducting risk assessments.
- Excellent written and verbal communication skills; able to translate technical findings for non‑technical stakeholders.
- Leadership and mentorship capability; experience guiding cross‑functional teams.
**Required Education & Certifications:**
- Bachelor’s or Master’s degree in Computer Science, Information Security, or a related field.
- Minimum of 8–10 years of experience in information security with a strong focus on AI security.
- Certifications such as CISSP, CEH, OSCP, or equivalent are highly desirable.