AI Red-Teaming and Security Master Class
Learn AI Security from the Creator of HackAPrompt, the largest AI Security competition ever held, backed by OpenAI.
Our AI Systems Are Vulnerable... Learn how to Secure Them!
About the Course
About Your Instructor
Expert Guest Instructors
- Pliny the Prompter: The most renowned AI Jailbreaker, who has successfully jailbroken every major AI model—including OpenAI's o1, which hasn't even been made public! Pliny also jailbroke an AI agent to autonomously sign into Gmail, code ransomware, compress it into a zip file, write a phishing email, attach the payload, and successfully deliver it to a target
- Johann Rehberger: Led the creation of a Red Team in Microsoft Azure as a Principal Security Engineering Manager and built Uber's Red Team. Johann discovered attack vectors like ASCII Smuggling and AI-powered C2 (Command and Control) attacks. He has also found Bug Bounties in OpenAI's ChatGPT, Microsoft Copilot, GitHub Copilot Chat, Anthropic Claude, and Google Bard/Gemini. Johann will be sharing unreleased research that he hasn't yet published on his blog, embracethered.com.
- Joseph Thacker: Principal AI Engineer at AppOmni, leading AI research on agentic functionality and retrieval systems. A security researcher specializing in application security and AI, Joseph has submitted over 1,000 vulnerabilities across HackerOne and Bugcrowd. He hacked into Google Bard at their LLM Bug Bounty event and took 1st place in the competition.
- Akshat Parikh: Former AI security researcher at a startup backed by OpenAI and DeepMind researchers. Ranked Top 21 in JP Morgan's Bug Bounty Hall of Fame and Top 250 in Google's Bug Bounty Hall of Fame—all by the age of 16.
- Richard Lundeen: Principal Software Engineering Lead for Microsoft's AI Red Team and maintainer of Microsoft PyRit. He leads an interdisciplinary team of red teamers, ML researchers, and developers focused on securing AI systems.
- Sandy Dunn: A seasoned CISO with 20+ years of experience in healthcare. Project lead for the OWASP Top 10 Risks for LLM Applications Cybersecurity and Governance
Limited-Time Offer
Money-Back Guarantee
Who Should Attend
Security Professionals
Enhance your skill set with AI-specific security knowledge.
AI Engineers
Learn to build secure AI systems and protect against threats.
Red Team Members
Add AI security testing to your capabilities.
Business Leaders
Understand the security implications of AI adoption.
What You'll Learn
The fundamentals of AI security, from prompt injection to model extraction
Real-world examples and exercises from HackAPrompt competition
Practical techniques to secure AI systems against emerging threats
Industry best practices for implementing AI security measures
Meet Your Instructor
Sander Schulhoff
Why Choose This Masterclass
Comprehensive Curriculum
Learn the fundamentals of AI security, from prompt injection to model extraction
Hands-on Practice
Real-world examples and exercises from HackAPrompt competition
Industry Recognition
Earn a certificate backed by leading AI companies
Ready to Secure Your AI Systems?
Join our AI Security Masterclass and learn the skills you need to protect your organization.
Enroll Today