In 2023, I partnered with OpenAI, ScaleAI, & Hugging Face to launch HackAPrompt—the 1st & Largest Generative AI Red Teaming Competition ever held. Over 3,300 AI hackers competed to bypass model guardrails using prompt injections—the #1 Security Risk in LLMs.
We collected the Largest Dataset of Prompt Injection attacks, which has been used by every major Frontier AI Lab, including OpenAI, which used it to improve their models' resistance to Prompt Injection Attacks by up to 46%.
To date, I've delivered workshops on AI Red Teaming & Prompting at OpenAI, Microsoft, Deloitte, & Stanford University. And because I love to teach... I created this course to teach you everything I know about AI Red Teaming!
About the Course
This 10-day program is the #1 AI Security crash course for Cybersecurity Professionals, AI Trust & Safety leads, AI product managers, and engineers who want to master AI Red Teaming and secure AI systems against real-world threats.
You'll gain hands-on experience identifying vulnerabilities in Generative AI systems, including prompt injections, jailbreaks, and adversarial attacks. Using the HackAPrompt playground, you'll practice both attacking and defending AI systems—learning how to break them and how to secure them.
This course is practical, not just theoretical. You'll work on real-world projects, analyzing live AI systems for vulnerabilities. These projects prepare you for the AI Red Teaming Certified Professional (AIRTP+) Exam, a 24+ hour assessment that validates your AI security expertise.
Our last cohort included 150 professionals from Microsoft, Google, Capital One, IBM, ServiceNow, and Walmart. Graduates passed the certification exam and are now AIRTP+ certified, applying their skills to secure AI systems worldwide.
About Your Instructor
I'm Sander Schulhoff, the Founder of Learn Prompting & HackAPrompt. In October 2022, I published the 1st Prompt Engineering Guide on the Internet, two months before ChatGPT was released. Since then, my courses have trained over 3 million people, and I'm one of only two people (the other being Andrew Ng) to partner with OpenAI on a ChatGPT course. I've led Prompt Engineering workshops at OpenAI, Microsoft, Stanford, Deloitte, and Dropbox.
I'm an award-winning Generative AI researcher from the University of Maryland and the youngest recipient of the Best Paper Award at EMNLP 2023, the leading NLP Conference, selected out of 20,000 submitted research papers. My research paper on HackAPrompt, Ignore This Title and HackAPrompt, has been cited by OpenAI in three major research papers: Instruction Hierarchy, Automated Red Teaming, and Adversarial Robustness.
I created HackAPrompt, the first and largest Generative AI Red Teaming competition. Most recently, I led a team from OpenAI, Microsoft, Google, and Stanford on The Prompt Report—the most comprehensive study on Prompt Engineering to date. This 76-page survey analyzed over 1,500 academic papers, evaluating the effectiveness of prompting techniques, AI agents, and Generative AI applications.
Limited-Time Offer
Enroll now and get complimentary access to Learn Prompting Plus and our AI Red Teaming Certification Exam (a $717 value). You'll get access to over 15 comprehensive courses, including this masterclass and additional courses in Prompt Engineering, Prompt Hacking, & AI/ML Red Teaming, plus a voucher for our AI Red Teaming Professional Certificate Exam (AIRTP+).
Money-Back Guarantee
We genuinely want this course to be transformative for you. You can receive a full refund within 14 days after the course ends, provided you meet the completion criteria in our refund policy. We're confident in the value we provide and stand by our promise to help you level up your AI security expertise.
Interested in an enterprise license so your whole team or company can take the course? Please reach out directly to [email protected].