Prompt Engineering Guide
πŸ˜ƒ Basics
πŸ’Ό Applications
πŸ§™β€β™‚οΈ Intermediate
🧠 Advanced
Special Topics
🌱 New Techniques
πŸ€– Agents
βš–οΈ Reliability
πŸ–ΌοΈ Image Prompting
πŸ”“ Prompt Hacking
πŸ”¨ Tooling
πŸ’ͺ Prompt Tuning
πŸ—‚οΈ RAG
🎲 Miscellaneous
Models
πŸ“ Language Models
Resources
πŸ“™ Vocabulary Resource
πŸ“š Bibliography
πŸ“¦ Prompted Products
πŸ›Έ Additional Resources
πŸ”₯ Hot Topics
✨ Credits
πŸ”“ Prompt Hacking🟒 Prompt Leaking

Prompt Leaking

🟒 This article is rated easy
Reading Time: 2 minutes

Last updated on August 7, 2024

Takeaways
  • Definition: Prompt leaking is when a model reveals its internal instructions, generally due to users manipulating inputs to extract the original prompt.
  • Risks: Leaking prompts can expose sensitive information and intellectual property, undermining confidentiality and business integrity.

What is Prompt Leaking?

Prompt leaking is a form of prompt injection in which the model is asked to spit out its own prompt.

As shown in the example image below, the attacker changes user_input to attempt to return the prompt. The intended goal is distinct from goal hijacking (normal prompt injection), where the attacker changes user_input to print malicious instructions.

The following image, again from the remoteli.io example, shows a Twitter user getting the model to leak its prompt.

Well, so what? Why should anyone care about prompt leaking?

Sometimes people want to keep their prompts secret. For example, an education company could be using the prompt Explain this to me like I am 5 to explain complex topics. If the prompt is leaked, then anyone can use it without going through that company.

A Real-World Example of Prompt Leaking: Microsoft Bing Chat

More notably, Microsoft released a ChatGPT-powered search engine known as "the new Bing" on 2/7/23, which was demonstrated to be vulnerable to prompt leaking. The following example by @kliu128 demonstrates how given an earlier version of Bing Search, code-named "Sydney", was susceptible when giving a snippet of its prompt. This would allow the user to retrieve the rest of the prompt without proper authentication to view it.

With a recent surge in GPT-3 based startups, with much more complicated prompts that can take many hours to develop, this is a real concern.

Practice

Try to leak the following prompt by appending text to it:

Conclusion

Prompt leaking is an important concept to understand because the risk of unintentionally exposing sensitive prompts reveals a critical vulnerability in AI-based systems. As more and more businesses rely on language model features, addressing prompt leaking will be crucial to protecting confidential intellectual property.

FAQ

Why is it important to prevent prompt leaking?

Having an LLM reveal its original prompt is crucial in the case that these developer instructions should be kept confidential. Prompt leaking can undermine efforts to create unique prompts and can potentially expose a business's intellectual property.

What is a real-world example of prompt leaking?

As demonstrated in the article, prompt leaking can be dangerous to real companies including Microsoft, whose early Bing chatbot was susceptible to clever inputs that pushed it to unveil its original instructions.

Sander Schulhoff

Sander Schulhoff is the Founder of Learn Prompting and an ML Researcher at the University of Maryland. He created the first open-source Prompt Engineering guide, reaching 3M+ people and teaching them to use tools like ChatGPT. Sander also led a team behind Prompt Report, the most comprehensive study of prompting ever done, co-authored with researchers from the University of Maryland, OpenAI, Microsoft, Google, Princeton, Stanford, and other leading institutions. This 76-page survey analyzed 1,500+ academic papers and covered 200+ prompting techniques.

Footnotes

  1. Perez, F., & Ribeiro, I. (2022). Ignore Previous Prompt: Attack Techniques For Language Models. arXiv. https://doi.org/10.48550/ARXIV.2211.09527 ↩ ↩2

  2. Willison, S. (2022). Prompt injection attacks against GPT-3. https://simonwillison.net/2022/Sep/12/prompt-injection/ ↩

  3. Liu, K. (2023). The entire prompt of Microsoft Bing Chat?! (Hi, Sydney.). https://twitter.com/kliu128/status/1623472922374574080 ↩

  4. Chase, H. (2022). adversarial-prompts. https://github.com/hwchase17/adversarial-prompts ↩

Copyright Β© 2024 Learn Prompting.