Last updated on November 7, 2024
Large language models (LLMs) are trained on extensive text data, making them a rich source of information. However, the way you phrase a question (or prompt) can impact the response accuracy, even if the model "knows" the answer. In some cases, small changes in wording can make a big difference. For example:
- Prompt: `Obama is a __` → Completion: `good person`
- Prompt: `Obama is a __ by profession` → Completion: `politician`

Despite the prompts being similar, only the second one yields a factually useful response.
In this document, we'll talk about Prompt Paraphrasing, a technique for creating more varied, high-quality prompts for a specific LLM:
Prompt Paraphrasing is a technique used to generate multiple high-quality prompts that retrieve more accurate answers from the model. It takes an initial "seed" prompt and creates several semantically similar versions.
For instance, starting with the seed "x shares a border with y" might yield paraphrases such as "x borders y" or "x and y share a common border".
Prompt Paraphrasing uses the LLM itself to generate paraphrased prompts. This way, it leverages the language patterns and templates the model has "learned" best during training.
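A minimal sketch of this idea is to ask the model itself for rewrites of a seed prompt and parse them out of the reply. The helper below is illustrative: `llm` stands in for whatever completion call you use, and the numbered-list format is an assumption about how the model answers.

```python
# Sketch: using the LLM itself to paraphrase a seed prompt.
# `llm` is a hypothetical stand-in for any text-completion call.

def paraphrase(seed: str, n: int, llm) -> list[str]:
    """Ask the model for n semantically equivalent rewrites of `seed`."""
    instruction = (
        f"Rewrite the following question in {n} different ways, "
        f"one per line, numbered 1 to {n}, keeping the meaning identical:\n"
        f"{seed}"
    )
    reply = llm(instruction)
    rewrites = []
    for line in reply.splitlines():
        line = line.strip()
        # Keep only numbered lines like "1. ..." and drop the numbering.
        if line and line[0].isdigit() and "." in line:
            rewrites.append(line.split(".", 1)[1].strip())
    return rewrites

# Toy stand-in for a real model call, for demonstration only.
def fake_llm(prompt: str) -> str:
    return "1. Who leads Apple as CEO?\n2. Who currently runs Apple?"

print(paraphrase("Who is the CEO of Apple?", 2, fake_llm))
# → ['Who leads Apple as CEO?', 'Who currently runs Apple?']
```

In practice you would swap `fake_llm` for a real API call; the parsing step is what turns one free-form reply into a clean list of candidate prompts.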
A simple paraphrasing method involves back-translation. Here’s how it works:
If you want to find the CEO of a company, you might start with an initial prompt of the form "Who is the CEO of company_name?". An example prompt would be: Who is the CEO of Apple?
Then, you can generate multiple prompt candidates using translation. Say you want to generate two different prompt candidates.
Convert the following into Spanish: Who is the CEO of Apple?
¿Quién es el CEO de Apple?
Convert the following into French: Who is the CEO of Apple?
Qui est le PDG d'Apple ?
Convert the following sentence into English: ¿Quién es el CEO de Apple?
Who is the executive head of Apple?
Convert the following sentence into English: Qui est le PDG d'Apple ?
Name the current CEO of Apple.
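The translation round-trip above can be sketched as a small loop. This is a schematic only: `translate` is a hypothetical stand-in for a real translation call, and the lookup table below just replays the Spanish and French examples from this section.

```python
# Sketch of back-translation: round-trip a prompt through pivot languages.
# `translate` is a hypothetical stand-in for a real translation API.

def back_translate(prompt: str, pivots: list[str], translate) -> list[str]:
    """Round-trip `prompt` through each pivot language to collect paraphrases."""
    candidates = []
    for lang in pivots:
        forward = translate(prompt, target=lang)       # English -> pivot
        back = translate(forward, target="English")    # pivot -> English
        if back != prompt:                             # keep only genuine rewrites
            candidates.append(back)
    return candidates

# Toy translator that replays the examples above, for demonstration only.
def fake_translate(text: str, target: str) -> str:
    table = {
        ("Who is the CEO of Apple?", "Spanish"): "¿Quién es el CEO de Apple?",
        ("¿Quién es el CEO de Apple?", "English"): "Who is the executive head of Apple?",
        ("Who is the CEO of Apple?", "French"): "Qui est le PDG d'Apple ?",
        ("Qui est le PDG d'Apple ?", "English"): "Name the current CEO of Apple.",
    }
    return table[(text, target)]

print(back_translate("Who is the CEO of Apple?", ["Spanish", "French"], fake_translate))
# → ['Who is the executive head of Apple?', 'Name the current CEO of Apple.']
```

The `back != prompt` check matters: a round-trip that returns the original sentence verbatim adds no diversity to the candidate set.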
After paraphrasing, you now have a set of prompts:

- Who is the CEO of Apple? (original)
- Who is the executive head of Apple?
- Name the current CEO of Apple.
After generating several prompts, how do you select the best one? There are two primary approaches:

1. **Selection:** keep only the single prompt that achieves the highest accuracy on a held-out set of question–answer pairs with known answers.
2. **Ensembling:** query the model with the top-K prompts and combine their outputs, for example by averaging the answer probabilities across prompts.
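Both strategies can be sketched in a few lines. Everything below is illustrative: the dev-set accuracies and answer probabilities are made-up numbers, and simple top-k averaging is one plausible combination scheme, not the only one.

```python
# Sketch of prompt selection vs. ensembling.
# Accuracies and probabilities below are made-up, for demonstration only.

def select_best(prompts, dev_accuracy):
    """Option 1: keep the single prompt with the highest dev-set accuracy."""
    return max(prompts, key=lambda p: dev_accuracy[p])

def ensemble_answer(prompts, answer_probs, k, dev_accuracy):
    """Option 2: average answer probabilities over the top-k prompts."""
    top_k = sorted(prompts, key=lambda p: dev_accuracy[p], reverse=True)[:k]
    combined = {}
    for p in top_k:
        for answer, prob in answer_probs[p].items():
            combined[answer] = combined.get(answer, 0.0) + prob / k
    return max(combined, key=combined.get)

prompts = [
    "Who is the CEO of Apple?",
    "Who is the executive head of Apple?",
    "Name the current CEO of Apple.",
]
dev_accuracy = {prompts[0]: 0.60, prompts[1]: 0.55, prompts[2]: 0.70}
answer_probs = {
    prompts[0]: {"Tim Cook": 0.8, "Steve Jobs": 0.2},
    prompts[1]: {"Tim Cook": 0.6, "Steve Jobs": 0.4},
    prompts[2]: {"Tim Cook": 0.9, "Steve Jobs": 0.1},
}

print(select_best(prompts, dev_accuracy))
# → Name the current CEO of Apple.
print(ensemble_answer(prompts, answer_probs, 2, dev_accuracy))
# → Tim Cook
```

Ensembling is usually more robust than selection: a single prompt can be brittle on some subjects, while averaging over several prompts smooths out those failures.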
Studies show that paraphrasing enhances model performance. For example:
| Prompt | Model | Top1 | Top3 | Top5 |
|---|---|---|---|---|
| Manual | BERT-base | 22.8 | - | - |
| Paraphrased | BERT-base | 22.8 | 23.8 | 24.6 |
| Manual | BERT-large | 25.7 | - | - |
| Paraphrased | BERT-large | 25.9 | 27.8 | 28.3 |
Large language models contain vast amounts of knowledge, but phrasing matters. By using prompt paraphrasing, you can create prompts that maximize the retrieval of relevant information, improving model responses for specific factual queries.
Jiang, Z., Xu, F. F., Araki, J., & Neubig, G. (2019). How Can We Know What Language Models Know? https://arxiv.org/abs/1911.12543