Prompt Engineering consists of applying specific techniques when constructing a prompt with the goal of improving the quality of the response provided by Large Language Models (LLMs).
The creators of Rephrase and Respond (RaR) took this concept a step further. Instead of searching for a perfect prompt formula, they asked, "What if we let the model rephrase the question itself and then answer it?"
They focused on question-answering tasks and proposed the idea of incorporating a rephrasing request directly into the prompt. This can be as simple as adding one sentence: "Rephrase and expand the question, and respond."
{Your question}
Rephrase and expand the question, and respond.
It's a concise and effective technique that can be combined with other techniques and has demonstrated improvements on multiple benchmarks.
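In practice, one-step RaR is just string concatenation before the model call. Here's a minimal sketch using the OpenAI Python SDK; the model name is an illustrative assumption, not a recommendation from the paper:

```python
# One-step RaR: append the rephrasing instruction to the user's question.
# A minimal sketch assuming the OpenAI Python SDK; the model name is illustrative.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def rar_prompt(question: str) -> str:
    # The one-sentence RaR suffix from the paper.
    return f"{question}\nRephrase and expand the question, and respond."

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": rar_prompt(
        "Was Ludwig van Beethoven born in an even month?"
    )}],
)
print(response.choices[0].message.content)
```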
Let's take an example question where we ask a model whether a well-known person was born in an even or odd month. The ambiguity for the model here lies in how "even" and "odd" apply to months:
Was Ludwig van Beethoven born in an even month?
No, Ludwig van Beethoven was born on December 17, 1770. Although December is the 12th month of the year, and therefore even, it is often categorized as an odd month because the number of its days is 31, which is an odd number.
Let's try to ask the model to rephrase:
Was Ludwig van Beethoven born in an even month?
Rephrase and expand the question, and respond.
Rephrased Question: Was Ludwig van Beethoven born during a month with an even number, and if so, which month was it?
Response: Ludwig van Beethoven was born on December 17, 1770. December is the twelfth month of the year, which is an even number. Thus, Beethoven was indeed born in an even month.
Now, it answers correctly.
Apart from adding "Rephrase and expand the question, and respond" to the prompt, you can use other variations of the rephrasing instruction, for example asking the model to reword, restate, or elaborate on the question before answering.
All models can benefit from rephrasing questions, with more advanced models expected to gain a larger improvement.
In this example, we give the model the initial position of a coin and then describe the actions of several people who either flipped or did not flip it. Finally, we ask the model to answer "Yes" or "No" to the question "Is the coin still heads up?"
Let's compare the results we get from a simple prompt and a Rephrase and Respond (RaR) prompt. In this example, the correct answer is "Yes, the coin is still heads up."
A coin is heads up. Aluino flips the coin. Arthor flips the coin. Is the coin still heads up? Flip means reverse. Answer the Yes or No question.
No.
This is incorrect. Let's try to ask the model to rephrase:
A coin is heads up. Aluino flips the coin. Arthor flips the coin. Is the coin still heads up? Flip means reverse.
Rephrase and expand the question, and respond.
Answer the Yes or No question.
The coin started out being heads up. Then, Aluino flipped the coin, reversing its side. After that, Arthor also flipped the coin, reversing its side again. Is the coin facing heads up now? Considering the action of flipping the coin means to reverse its side, after two flips, the coin would indeed be back in its original position. So, yes, the coin is still heads up.
Correct!
Another way to use Rephrase and Respond is a two-step procedure that separates the rephrasing step from the responding step:
Step 1 (rephrase):
{question}
Given the above question, rephrase and expand it to help you do better answering. Maintain all information in the original question.
Step 2 (respond):
{question}
{rephrased_question}
Interestingly, we can use the same LLM for both steps or a different LLM for each. One promising setup is to let a stronger LLM rephrase the question and have a weaker LLM answer the rephrased version.
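Here's a sketch of the two-step procedure, again assuming the OpenAI Python SDK; the model names and helper function are illustrative assumptions:

```python
# Two-step RaR: one model rephrases, another (or the same) model responds.
# A minimal sketch assuming the OpenAI Python SDK; model names are illustrative.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

REPHRASE_SUFFIX = (
    "Given the above question, rephrase and expand it to help you do better "
    "answering. Maintain all information in the original question."
)

def ask(model: str, prompt: str) -> str:
    """Send a single-turn prompt and return the model's text reply."""
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

def two_step_rar(question: str, rephrase_model: str, respond_model: str) -> str:
    # Step 1: the (possibly stronger) model rephrases the question.
    rephrased = ask(rephrase_model, f"{question}\n{REPHRASE_SUFFIX}")
    # Step 2: the responding model answers the original plus rephrased question.
    return ask(respond_model, f"{question}\n{rephrased}")

answer = two_step_rar(
    "Was Ludwig van Beethoven born in an even month?",
    rephrase_model="gpt-4o",      # stronger model rephrases
    respond_model="gpt-4o-mini",  # weaker model responds
)
print(answer)
```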
Although Few-Shot Chain-of-Thought (CoT) is not among the Zero-Shot prompting techniques we study here, its use of few-shot examples has made it the most effective CoT technique.
That's why it's worth mentioning that Rephrase and Respond (RaR) can be combined with Few-Shot Chain-of-Thought (CoT) to achieve even better results.
To combine these techniques, again, just add this phrase to the end of the prompt: "Given the above question, rephrase and expand it to help you do better answering. Maintain all information in the original question."
Q: {question 1}
A: {example of the correct answer 1}
Q: {question 2}
A: {example of the correct answer 2}
...
Q: {question n}
Given the above question, rephrase and expand it to help you do better answering. Maintain all information in the original question.
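Programmatically, assembling this prompt is a matter of joining the few-shot examples and appending the RaR sentence to the final question. The helper below is a hypothetical illustration, not code from the paper:

```python
# Build a Few-Shot CoT prompt with the RaR suffix appended to the final question.
# A hypothetical helper for illustration; not from the RaR paper's codebase.
RAR_SUFFIX = (
    "Given the above question, rephrase and expand it to help you do better "
    "answering. Maintain all information in the original question."
)

def fewshot_cot_rar_prompt(examples: list[tuple[str, str]], question: str) -> str:
    # Each example is a (question, worked answer) pair rendered as Q:/A: lines.
    shots = "\n".join(f"Q: {q}\nA: {a}" for q, a in examples)
    return f"{shots}\nQ: {question}\n{RAR_SUFFIX}"

prompt = fewshot_cot_rar_prompt(
    examples=[(
        'Take the last letters of the words in "Elon Musk" and concatenate them.',
        'The last letter of "Elon" is "n". The last letter of "Musk" is "k". '
        'Concatenating them is "nk". The answer is nk.',
    )],
    question='Take the last letters of the words in '
             '"Annette Erin Marisol Esperanza" and concatenate them.',
)
print(prompt)
```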
In this example, the task centers on symbolic reasoning. We ask the LLM to concatenate the final letters of a given list of names. Here's how you can combine RaR and CoT for this task:
Q: Take the last letters of the words in "Elon Musk" and concatenate them.
A: The last letter of "Elon" is "n". The last letter of "Musk" is "k". Concatenating them is "nk". The answer is nk.
Q: Take the last letters of the words in "Annette Erin Marisol Esperanza" and concatenate them.
Given the above question, rephrase and expand it to help you do better answering. Maintain all information in the original question.
Rephrased and Expanded Question:
To find the concatenated result of the last letters of each word in the phrase "Annette Erin Marisol Esperanza," follow these steps:
1. Identify the last letter of each word in the phrase.
2. Concatenate these last letters in the order of the words.
So for the phrase "Annette Erin Marisol Esperanza":
The last letter of "Annette" is "e". The last letter of "Erin" is "n". The last letter of "Marisol" is "l". The last letter of "Esperanza" is "a".
Concatenate these last letters to get the final result. The answer is "enla".
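A nice property of symbolic tasks like this is that the answer can be verified with a few lines of Python:

```python
# Ground-truth check for the last-letter concatenation task.
phrase = "Annette Erin Marisol Esperanza"
print("".join(word[-1] for word in phrase.split()))  # -> enla
```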
Rephrase and Respond (RaR) is a compelling prompt engineering technique, but it does come with limitations: the rephrasing step adds tokens and latency (and, in the two-step variant, an extra model call), and the final answer depends on how faithfully the model rephrases the question, since a poor rephrasing can drift from the original intent.
RaR is a valuable tool for improving LLM outputs for question-answering tasks, especially when carefully implemented alongside other prompt strategies. By leveraging RaR, you can achieve clearer and more accurate responses.
Valeriia Kuka, Head of Content at Learn Prompting, is passionate about making AI and ML accessible. Valeriia previously grew a 60K+ follower AI-focused social media account, earning reposts from Stanford NLP, Amazon Research, Hugging Face, and AI researchers. She has also worked with AI/ML newsletters and global communities with 100K+ members and authored clear and concise explainers and historical articles.
Deng, Y., Zhang, W., Chen, Z., & Gu, Q. (2024). Rephrase and Respond: Let Large Language Models Ask Better Questions for Themselves. https://arxiv.org/abs/2311.04205
Allen-Zhu, Z., & Li, Y. (2024). Physics of Language Models: Part 3.2, Knowledge Manipulation. https://arxiv.org/abs/2309.14402
Wei, J., Wang, X., Schuurmans, D., Bosma, M., Ichter, B., Xia, F., Chi, E., Le, Q., & Zhou, D. (2023). Chain-of-Thought Prompting Elicits Reasoning in Large Language Models. https://arxiv.org/abs/2201.11903