
Soft Prompts

Reading Time: 2 minutes
Last updated on August 7, 2024

Sander Schulhoff

Prompt tuning, an alternative to model fine-tuning, freezes the model weights and updates only the parameters of the prompt. The resulting prompt is called a 'soft prompt'.

Model Tuning vs Prompt Tuning (Lester et al.)

The above image contrasts model tuning with prompt tuning. In model tuning, you finetune the same model on different tasks. This gives you a few different models, with which you can't necessarily batch inputs easily.

On the other hand, prompt tuning lets you use the same model for all tasks. You just need to append the proper prompts at inference time, which makes batching across different tasks easier. This is pretty much the same advantage that regular prompting has. Additionally, soft prompts trained for a single model across multiple tasks will often be of the same token length.
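The batching advantage can be illustrated with a short sketch. This is purely conceptual: the task names and soft-prompt placeholder tokens are hypothetical, and a real implementation would operate on embedding vectors rather than strings.

```python
# Hypothetical per-task soft prompts, each the same length so rows
# in a batch line up. Real soft prompts are trained vectors, not strings.
soft_prompts = {
    "sentiment": ["<soft_sent_0>", "<soft_sent_1>"],
    "translate": ["<soft_tr_0>", "<soft_tr_1>"],
}

# A single batch mixing two different tasks.
batch = [
    ("sentiment", ["Great", "movie", "!"]),
    ("translate", ["Hello", "world"]),
]

# Prepend each example's task-specific soft prompt; every row then goes
# through the SAME frozen model, so tasks can share one batch.
model_inputs = [soft_prompts[task] + tokens for task, tokens in batch]

for row in model_inputs:
    print(row)
```

With model tuning, each task would need its own copy of the model, so these two examples could not share a forward pass.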

How it works

To understand the basic logic behind soft prompting, let's think about how model inference works on a given prompt: What's 2+2?.

  1. It might be tokenized as What, 's, 2, +, 2, ?.

  2. Then, each token will be converted to a vector of values.

  3. These vectors of values can be treated as model parameters. The model can then be trained further, adjusting only the weights of these prompt vectors.

Note that as soon as we start updating these weights, the vectors of the tokens no longer correspond to actual embeddings from the vocabulary.
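The three steps above can be sketched in a few lines of pure Python. This is a toy illustration under stated assumptions: the embedding table, dimensions, and the "gradient" applied are all made up, and a real implementation would use a deep-learning framework with actual backpropagation.

```python
import random

random.seed(0)
EMBED_DIM = 4

# Stand-in for the model's frozen embedding table (step 2's lookup source).
vocab = {tok: [random.uniform(-1, 1) for _ in range(EMBED_DIM)]
         for tok in ["What", "'s", "2", "+", "?"]}

def embed(tokens):
    """Step 2: convert each token to its (frozen) vector of values."""
    return [vocab[t][:] for t in tokens]

# Step 3: the soft prompt is a sequence of trainable vectors prepended
# to the input embeddings. Only these are updated during prompt tuning.
NUM_SOFT_TOKENS = 3
soft_prompt = [[random.uniform(-1, 1) for _ in range(EMBED_DIM)]
               for _ in range(NUM_SOFT_TOKENS)]

tokens = ["What", "'s", "2", "+", "2", "?"]  # Step 1: tokenization
inputs = soft_prompt + embed(tokens)         # soft prompt + frozen embeddings

# One toy update step: pretend every soft-prompt weight received a
# gradient of 1.0. The vocabulary embeddings are left untouched.
LR = 0.1
for vec in soft_prompt:
    for i in range(EMBED_DIM):
        vec[i] -= LR * 1.0

print(len(inputs))
```

After the update, the soft-prompt vectors no longer match any row of the embedding table, which is exactly the point made above: they are free parameters, not real tokens.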

Results

Prompt tuning performs better with larger models, and larger models also require fewer soft-prompt tokens. Regardless of model size, using more than 20 tokens does not yield significant performance gains.

Sander Schulhoff

Sander Schulhoff is the Founder of Learn Prompting and an ML Researcher at the University of Maryland. He created the first open-source Prompt Engineering guide, reaching 3M+ people and teaching them to use tools like ChatGPT. Sander also led a team behind Prompt Report, the most comprehensive study of prompting ever done, co-authored with researchers from the University of Maryland, OpenAI, Microsoft, Google, Princeton, Stanford, and other leading institutions. This 76-page survey analyzed 1,500+ academic papers and covered 200+ prompting techniques.

Footnotes

  1. Lester, B., Al-Rfou, R., & Constant, N. (2021). The Power of Scale for Parameter-Efficient Prompt Tuning.

  2. Khashabi, D., Lyu, S., Min, S., Qin, L., Richardson, K., Welleck, S., Hajishirzi, H., Khot, T., Sabharwal, A., Singh, S., & Choi, Y. (2021). Prompt Waywardness: The Curious Case of Discretized Interpretation of Continuous Prompts.