AI Penetration Tester - Remote
Job Title: AI Penetration Tester
Location: Remote
Position Type: Contract
Position Summary / Purpose
The AI Penetration Tester will be responsible for executing advanced offensive security assessments focused on systems incorporating Artificial Intelligence and Machine Learning. This role contributes to the organization's security posture by identifying vulnerabilities in AI-powered applications, models, and architectures, and by advising on secure AI development and deployment practices. The position works closely with engineering, security, and red team functions to ensure AI solutions are resilient against both traditional and AI-specific threats.
Key Responsibilities
• Execute AI-focused penetration testing engagements, including manual testing of AI/ML-enabled systems, objective-based testing of AI-driven features, and assessment of both traditional and AI-centric attack surfaces.
• Perform threat modeling for AI-powered software systems, evaluate AI-related business logic, and conduct architecture and design reviews.
• Identify and exploit adversarial machine learning vectors, prompt-based vulnerabilities, model manipulation risks, and other AI-specific security weaknesses.
• Develop, enhance, and leverage AI-driven tools and methodologies for offensive security activities such as reconnaissance, exploitation, fuzzing, and adversarial ML testing across web applications, APIs, and mobile platforms.
• Present penetration testing findings clearly to both technical and non-technical stakeholders, including conducting live demonstrations of AI vulnerabilities when required.
• Collaborate with engineering, development, and security teams to communicate findings, lead remediation discussions, and advise on secure AI model development, training, and deployment best practices.
• Research emerging AI attack techniques, evaluate their real-world impact, and provide actionable recommendations to strengthen organizational AI defenses.
• Partner with internal Red Teams, SOC analysts, and AI security researchers to share insights, refine AI red teaming methodologies, and integrate new adversarial ML techniques and proven exploitation tactics.
• Independently manage AI penetration testing engagements end-to-end, from planning and execution through reporting, with minimal supervision.
Qualifications
Required Experience & Skills
• Minimum of 3 years of recent penetration testing experience focused on APIs, web applications, and mobile applications.
• Hands-on experience or strong exposure to AI model testing, AI security, or adversarial machine learning.
• Proven background in AI red teaming and adversarial attack development, including prompt-injection attacks, LLM vulnerability analysis, and model evasion techniques.
• Proficiency with penetration testing tools such as Burp Suite Pro, Netsparker, Checkmarx, and familiarity with AI/ML frameworks and platforms such as TensorFlow, PyTorch, LLM APIs, and LangChain.
• Strong written and verbal communication skills, with the ability to clearly explain AI-related security risks and remediation strategies to both technical and non-technical audiences.
Certifications & Education
• One or more recognized ethical hacking certifications (e.g., GWAPT, CREST, OSWE, OSWA) preferred.
• Certifications or formal training in AI security or adversarial ML techniques are highly desirable.
• Bachelor's degree from an accredited college or university, or equivalent relevant industry experience.
Apply to this job