
Interpretable Soft Prompts

Reading Time: 2 minutes
Last updated on August 7, 2024

Sander Schulhoff

Soft prompts are sequences of vectors that don't correspond to any actual tokens in the vocabulary, which makes them difficult to interpret. We can still attempt an interpretation by mapping each vector to its closest token in the vocabulary. Unfortunately, projected soft prompts are often wayward: they can solve tasks well, yet they project to arbitrary tokens in the vocabulary.
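To make "mapping vectors to the closest tokens" concrete, here is a minimal sketch of that projection step. The vocabulary, embedding matrix, and soft-prompt vectors below are all made up for illustration; in practice the embeddings would come from the model itself and the soft prompt from prompt tuning.

```python
import numpy as np

# Hypothetical toy vocabulary and embedding matrix (vocab_size, dim).
vocab = ["you", "are", "a", "mathematician", "bus", "thing", "solve"]
rng = np.random.default_rng(0)
embeddings = rng.normal(size=(len(vocab), 8))

def project_to_tokens(soft_prompt, embeddings, vocab):
    """Map each soft-prompt vector to its nearest vocabulary token
    by cosine similarity."""
    e = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    s = soft_prompt / np.linalg.norm(soft_prompt, axis=1, keepdims=True)
    sims = s @ e.T                 # (prompt_len, vocab_size) similarities
    nearest = sims.argmax(axis=1)  # closest token index per vector
    return [vocab[i] for i in nearest]

# A "soft prompt" of 3 learned vectors (random here, standing in for tuned ones).
soft_prompt = rng.normal(size=(3, 8))
print(project_to_tokens(soft_prompt, embeddings, vocab))
```

Nothing constrains the nearest tokens to form coherent text, which is exactly why the projected prompts discussed below can come out nonsensical.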

For example, if we are training on math questions like GSM8K, we might start with the prompt "You are a mathematician. Solve this question:". If we perform prompt tuning on it and then project the result back into token space, we might be left with something nonsensical like "A bus is a bus. Do thing here:". Often, the soft prompt that maps to this nonsensical prompt provides better performance on the task!

The Waywardness Hypothesis

Khashabi et al. propose this striking hypothesis: given a task, for any discrete target prompt, there exists a continuous prompt that projects to that target prompt while still performing well on the task.

This means that given 1000 different tasks, there exist 1000 different performant soft prompts (one for each task) which map to the same discrete prompt.

Interpretability Risks

They use the Waywardness Hypothesis to highlight a number of risks which arise when interpreting soft prompts. In particular, a soft prompt can be projected to a discrete prompt which gives a misleading intent.

Consider a soft prompt for ranking resumes. Projected into token space, it might read "You hiring manager. Rank good resumes:". This seems decent, if a bit ungrammatical. However, the token good might project similarly to the token white, meaning implicit bias could exist in the prompt. Using a slightly different projection method, we could instead end up with "You hiring manager. Rank white resumes:". This is obviously quite different and could have significant implications.
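The sensitivity to the projection method can be shown with a toy example. The vectors and token labels below are hand-picked assumptions purely for illustration: the same soft-prompt vector projects to a different token depending on whether we measure Euclidean distance or cosine similarity.

```python
import numpy as np

# Hypothetical 2-D embeddings, chosen so the two metrics disagree.
vocab = ["good", "white"]
embeddings = np.array([
    [1.0, 1.0],   # "good": closer in direction to v
    [3.0, 2.0],   # "white": closer in position to v
])
v = np.array([2.9, 2.5])  # one soft-prompt vector

# Euclidean projection: pick the token whose embedding is nearest in space.
euclidean = vocab[np.argmin(np.linalg.norm(embeddings - v, axis=1))]

# Cosine projection: pick the token whose embedding points the same way.
cos = (embeddings @ v) / (np.linalg.norm(embeddings, axis=1) * np.linalg.norm(v))
cosine = vocab[np.argmax(cos)]

print(euclidean, cosine)  # the two projections pick different tokens
```

A reader inspecting only one projection would see only one of these prompts, which is the interpretability risk at issue.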

As with interpreting a regular discrete prompt, we should be extremely conscious of the biases that might be present in the prompt. We must be especially careful with soft prompts, as they are more difficult to interpret.


Footnotes

  1. Khashabi, D., Lyu, S., Min, S., Qin, L., Richardson, K., Welleck, S., Hajishirzi, H., Khot, T., Sabharwal, A., Singh, S., & Choi, Y. (2021). Prompt Waywardness: The Curious Case of Discretized Interpretation of Continuous Prompts.

  2. Cobbe, K., Kosaraju, V., Bavarian, M., Chen, M., Jun, H., Kaiser, L., Plappert, M., Tworek, J., Hilton, J., Nakano, R., Hesse, C., & Schulman, J. (2021). Training Verifiers to Solve Math Word Problems.