Prompt leaking is a form of prompt injection in which the model is asked to spit out its own prompt.
As shown in the example image1 below, the attacker changes user_input
to attempt to return the prompt. The intended goal is distinct from goal hijacking (normal prompt injection), where the attacker changes user_input
to print malicious instructions1.
The following image2, again from the remoteli.io
example, shows
a Twitter user getting the model to leak its prompt.
Well, so what? Why should anyone care about prompt leaking?
Sometimes people want to keep their prompts secret. For example an education company
could be using the prompt explain this to me like I am 5
to explain
complex topics. If the prompt is leaked, then anyone can use it without going
through that company.
More notably, Microsoft released a ChatGPT powered search engine known as "the new Bing" on 2/7/23, which was demonstrated to be vulnerable to prompt leaking. The following example by @kliu128 demonstrates how given an earlier version of Bing Search, code-named "Sydney", was susceptible when giving a snippet of its prompt3. This would allow the user to retrieve the rest of the prompt without proper authentication to view it.
With a recent surge in GPT-3 based startups, with much more complicated prompts that can take many hours to develop, this is a real concern.
Try to leak the following prompt4 by appending text to it:
Prompt leaking is an important concept to understand because the risk of uninentionally exposing sensitive prompts reveals a critical vulnerability in AI-based systems. As more and more businesses rely on language model features, addressing prompt leaking will be crucial to protecting confidential intellectual property.
Having an LLM reveal its original prompt is crucial in the case that these developer instructions should be kept confidential. Prompt leaking can undermine efforts to create unique prompts and can potentially expose a business's intellectual property.
As demonstrated in the article, prompt leaking can be dangerous to real companies including Microsoft, whose early Bing chatbot was susceptible to clever inputs that pushed it to unveil its original instructions.
Perez, F., & Ribeiro, I. (2022). Ignore Previous Prompt: Attack Techniques For Language Models. arXiv. https://doi.org/10.48550/ARXIV.2211.09527 ↩ ↩2
Willison, S. (2022). Prompt injection attacks against GPT-3. https://simonwillison.net/2022/Sep/12/prompt-injection/ ↩
Liu, K. (2023). The entire prompt of Microsoft Bing Chat?! (Hi, Sydney.). https://twitter.com/kliu128/status/1623472922374574080 ↩
Chase, H. (2022). adversarial-prompts. https://github.com/hwchase17/adversarial-prompts ↩
Sign up and get the latest AI news, prompts, and tools.
Join 30,000+ readers from companies like OpenAI, Microsoft, Google, Meta and more!