Announcing our new Course: AI Red-Teaming and AI Safety Masterclass

Check it out →
🔓 Prompt Hacking🟢 Prompt Leaking

🟢 Prompt Leaking

Last updated on August 7, 2024 by Sander Schulhoff

Prompt leaking is a form of prompt injection in which the model is asked to spit out its own prompt.

As shown in the example image1 below, the attacker changes user_input to attempt to return the prompt. The intended goal is distinct from goal hijacking (normal prompt injection), where the attacker changes user_input to print malicious instructions1.

The following image2, again from the remoteli.io example, shows a Twitter user getting the model to leak its prompt.

Well, so what? Why should anyone care about prompt leaking?

Sometimes people want to keep their prompts secret. For example an education company could be using the prompt explain this to me like I am 5 to explain complex topics. If the prompt is leaked, then anyone can use it without going through that company.

Microsoft Bing Chat

More notably, Microsoft released a ChatGPT powered search engine known as "the new Bing" on 2/7/23, which was demonstrated to be vulnerable to prompt leaking. The following example by @kliu128 demonstrates how given an earlier version of Bing Search, code-named "Sydney", was susceptible when giving a snippet of its prompt3. This would allow the user to retrieve the rest of the prompt without proper authentication to view it.

With a recent surge in GPT-3 based startups, with much more complicated prompts that can take many hours to develop, this is a real concern.

Practice

Try to leak the following prompt4 by appending text to it:

Footnotes

  1. Perez, F., & Ribeiro, I. (2022). Ignore Previous Prompt: Attack Techniques For Language Models. arXiv. https://doi.org/10.48550/ARXIV.2211.09527 2

  2. Willison, S. (2022). Prompt injection attacks against GPT-3. https://simonwillison.net/2022/Sep/12/prompt-injection/

  3. Liu, K. (2023). The entire prompt of Microsoft Bing Chat?! (Hi, Sydney.). https://twitter.com/kliu128/status/1623472922374574080

  4. Chase, H. (2022). adversarial-prompts. https://github.com/hwchase17/adversarial-prompts

Edit this page
Word count: 0
Copyright © 2024 Learn Prompting.