Introduction to Prompt Hacking
•
2 Days
Learn about the basics of Prompt Hacking, one of the biggest vulnerabilities in Large Language Models (LLMs), and Prompt Defense techniques.
Taught by
Course Overview
Learn about the basics of Prompt Hacking, one of the biggest vulnerabilities in Large Language Models (LLMs), and Prompt Defense techniques.
What will you learn?
Prompt Hacking
How to Prompt Inject and Jailbreak Large Language Models.
Skills you will gain
Meet Your Instructors
Sander Schulhoff
Founder & CEO, Learn Prompting
As a researcher, he has authored multiple award-winning papers alongside experts from OpenAI, Microsoft, ScaleAI, the Federal Reserve, HuggingFace, and others. His paper on HackAPrompt was awarded Best Paper at EMNLP 2023 (selected out of 20,000 submitted papers), cited by OpenAI in their paper on mitigating prompt injections, and used by teams at OpenAI, Amazon, and ArthurAI. His most recent research paper, The Prompt Report, co-authored with researchers from OpenAI and Microsoft, has quickly become the latest foundational prompting paper in the field, just weeks after its launch.
Fady Yanni
Co-founder & COO at Learn Prompting
Fady Yanni is the Founder & COO of Learn Prompting, the leading Prompt Engineering resource which has taught over 1 million people how to effectively communicate with AI. Previously, he was the Head of Fundraising at the Farama Foundation, the open-source maintainers of every major Reinforcement Learning library, including OpenAI's flagship project, Gym.
Course Syllabus
Introduction
What is Prompt Hacking?
What is the difference between Prompt Hacking and Jailbreaking?
Introduction to Prompt Injection Attacks
What is Prompt Injection?
Potential Threats
How we get Prompt Injected
Preventing Injections in LLMs
Not Trusting User Input
Post-prompting and the Sandwich Defense
Few-Shot Prompting Defense
Non-Prompt-based Techniques
Other Prompt Hacking Concepts
Prompt Leaking
Jailbreaking