Designed For Advanced Professionals

AI Red-Teaming and Security Master Class

Learn AI Security from the Creator of HackAPrompt, the largest AI Security competition ever held, backed by OpenAI.

The fundamentals of AI security, from prompt injection to model extraction
Real-world examples and exercises from HackAPrompt competition
Practical techniques to secure AI systems against emerging threats
Industry best practices for implementing AI security measures

Our AI Systems Are Vulnerable... Learn how to Secure Them!

In 2023, I partnered with OpenAI, ScaleAI, & Hugging Face to launch HackAPrompt—the 1st & Largest Generative AI Red Teaming Competition ever held. Over 3,300 AI hackers competed to bypass model guardrails using prompt injections—the #1 Security Risk in LLMs.
We collected the Largest Dataset of Prompt Injection attacks, which has been used by every major Frontier AI Lab, including OpenAI, who used it to improve their models' resistance to Prompt Injection Attacks by up to 46%.
Today, I've delivered workshops on AI Red Teaming & Prompting at OpenAI, Microsoft, Deloitte, & Stanford University. And because I love to teach... I created this course to teach you everything I know about AI Red Teaming!

About the Course

This 6-week Masterclass is the #1 AI Security course for Cybersecurity Professionals, AI Trust & Safety leads, AI product managers, and engineers who want to master AI Red Teaming and secure AI systems against real-world threats. You'll gain hands-on experience identifying vulnerabilities in Generative AI systems, including prompt injections, jailbreaks, and adversarial attacks. Using the HackAPrompt playground, you'll practice both attacking and defending AI systems—learning how to break them and how to secure them. This course is practical, not just theoretical. You'll work on real-world projects, analyzing live AI systems for vulnerabilities. These projects prepare you for the AI Red Teaming Certified Professional (AIRTP+) Exam, a 24+ hour assessment that validates your AI security expertise. Our last cohort included 150 professionals from Microsoft, Google, Capital One, IBM, ServiceNow, and Walmart. Graduates passed the certification exam and are now AIRTP+ certified, applying their skills to secure AI systems worldwide.

About Your Instructor

I'm Sander Schulhoff, the Founder of Learn Prompting & HackAPrompt. In October 2022, I published the 1st Prompt Engineering Guide on the Internet—two months before ChatGPT was released. Since then, my courses have trained over 3 million people, and I'm one of two people (other than Andrew Ng) to partner with OpenAI on a ChatGPT course. I've led Prompt Engineering workshops at OpenAI, Microsoft, Stanford, Deloitte, and Dropbox. I'm an award-winning Generative AI researcher from the University of Maryland and the youngest recipient of the Best Paper Award at EMNLP 2023, the leading NLP Conference, selected out of 20,000 submitted research papers. My research paper on HackAPrompt, Ignore This Title and HackAPrompt, has been by cited by OpenAI in three major research papers: Instruction Hierarchy, Automated Red Teaming, and Adversarial Robustness papers. I created HackAPrompt, the first and largest Generative AI Red Teaming competition. Most recently, I led a team from OpenAI, Microsoft, Google, and Stanford on The Prompt Report—the most comprehensive study on Prompt Engineering to date. This 76-page survey analyzed over 1,500 academic papers, evaluating the effectiveness of prompting techniques, AI agents, and Generative AI applications.

Expert Guest Instructors

  • Pliny the Prompter: The most renowned AI Jailbreaker, who has successfully jailbroken every major AI model—including OpenAI's o1, which hasn't even been made public! Pliny also jailbroke an AI agent to autonomously sign into Gmail, code ransomware, compress it into a zip file, write a phishing email, attach the payload, and successfully deliver it to a target
  • Johann Rehberger: Led the creation of a Red Team in Microsoft Azure as a Principal Security Engineering Manager and built Uber's Red Team. Johann discovered attack vectors like ASCII Smuggling and AI-powered C2 (Command and Control) attacks. He has also found Bug Bounties in OpenAI's ChatGPT, Microsoft Copilot, GitHub Copilot Chat, Anthropic Claude, and Google Bard/Gemini. Johann will be sharing unreleased research that he hasn't yet published on his blog, embracethered.com.
  • Joseph Thacker: Principal AI Engineer at AppOmni, leading AI research on agentic functionality and retrieval systems. A security researcher specializing in application security and AI, Joseph has submitted over 1,000 vulnerabilities across HackerOne and Bugcrowd. He hacked into Google Bard at their LLM Bug Bounty event and took 1st place in the competition.
  • Akshat Parikh: Former AI security researcher at a startup backed by OpenAI and DeepMind researchers. Ranked Top 21 in JP Morgan's Bug Bounty Hall of Fame and Top 250 in Google's Bug Bounty Hall of Fame—all by the age of 16.
  • Richard Lundeen: Principal Software Engineering Lead for Microsoft's AI Red Team and maintainer of Microsoft PyRit. He leads an interdisciplinary team of red teamers, ML researchers, and developers focused on securing AI systems.
  • Sandy Dunn: A seasoned CISO with 20+ years of experience in healthcare. Project lead for the OWASP Top 10 Risks for LLM Applications Cybersecurity and Governance

Limited-Time Offer

Enroll now and get complimentary access to Learn Prompting Plus and our AI Red Teaming Certification Exam (a $717 value). You'll get access to over 15 comprehensive courses—including this masterclass and additional courses in Prompt Engineering, Prompt Hacking, & AI/ML Red-Teaming, and a voucher for our AI Red Teaming Professional Certificate Exam (AIRTP+).

Money-Back Guarantee

We genuinely want this course to be transformative for you. You can receive a full refund within 14 days after the course ends, provided you meet the completion criteria in our refund policy. We're confident in the value we provide and stand by our promise to help you level up your AI security expertise.
Interested in an enterprise license so your whole team or company can take the course? Please reach out directly to [email protected]

Who Should Attend

Security Professionals

Enhance your skill set with AI-specific security knowledge.

AI Engineers

Learn to build secure AI systems and protect against threats.

Red Team Members

Add AI security testing to your capabilities.

Business Leaders

Understand the security implications of AI adoption.

What You'll Learn

The fundamentals of AI security, from prompt injection to model extraction

Real-world examples and exercises from HackAPrompt competition

Practical techniques to secure AI systems against emerging threats

Industry best practices for implementing AI security measures

Meet Your Instructor

Sander Schulhoff

Sander Schulhoff

Founder & CEO, Learn Prompting
Sander Schulhoff is the Founder of Learn Prompting and an ML Researcher at the University of Maryland. He created the first open-source Prompt Engineering guide, which reached 3 million people and taught them to use tools like ChatGPT. Sander also led a team behind Prompt Report, the most comprehensive study of prompting ever done. This 76-page survey, co-authored with researchers from the University of Maryland, OpenAI, Microsoft, Google, Princeton, Stanford, and other leading institutions, analyzed 1,500+ academic papers and covered 200+ prompting techniques.

Why Choose This Masterclass

Comprehensive Curriculum

Learn the fundamentals of AI security, from prompt injection to model extraction

Hands-on Practice

Real-world examples and exercises from HackAPrompt competition

Industry Recognition

Earn a certificate backed by leading AI companies

Ready to Secure Your AI Systems?

Join our AI Security Masterclass and learn the skills you need to protect your organization.

Enroll Today

© 2025 Learn Prompting. All rights reserved.