🟦 Step-Back Prompting

Last updated on October 3, 2024

Overview of Step-Back Prompting

Information and Links

| Technique | Institution | Date of Publication | Paper |
|-----------|-------------|---------------------|-------|
| Step-Back Prompting | Google DeepMind | Oct 2023 | Take a Step Back: Evoking Reasoning via Abstraction in Large Language Models |

What is Step-Back Prompting?

Step-Back Prompting is a technique designed to improve the ability of Large Language Models (LLMs) to solve complex tasks by encouraging them to "step back" from the details and reason about abstract principles first. The model is prompted to derive high-level concepts and first principles before tackling the specifics of the question. This abstraction helps prevent errors in intermediate steps and leads to more accurate reasoning.

Step-Back Prompting improves performance on reasoning-intensive tasks such as STEM (science, technology, engineering, and math) problems, knowledge-based question answering (QA), and multi-hop reasoning. It has been tested on models such as PaLM-2L, GPT-4, and Llama2-70B, showing significant improvements over traditional methods like Chain-of-Thought (CoT) Prompting.

How Step-Back Prompting Works

The method involves two steps:

  1. Abstraction: The model is prompted to focus on a higher-level concept or principle related to the question.
  2. Reasoning: Once the high-level abstraction is retrieved, the model uses it to reason through the specifics of the original question.

For example:

  • Original Question: "What happens to the pressure of an ideal gas if the temperature is increased by a factor of 2 and the volume is increased by a factor of 8?"
  • Step-Back Abstraction: "What are the principles involved in this problem?" (Ideal Gas Law)
  • Final Answer: Using the Ideal Gas Law, the model can calculate the correct answer: the pressure decreases by a factor of 4 (a short derivation is shown below).
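
For reference, here is a short worked derivation of that result, written in LaTeX and assuming a fixed amount of gas (so n and R stay constant):

```latex
% Ideal Gas Law before and after the change (fixed n and R, amsmath required)
\begin{align*}
P_{\text{old}} V = nRT
  &\;\Rightarrow\; P_{\text{old}} = \frac{nRT}{V} \\
P_{\text{new}} (8V) = nR(2T)
  &\;\Rightarrow\; P_{\text{new}} = \frac{2nRT}{8V} = \frac{nRT}{4V} = \frac{P_{\text{old}}}{4}
\end{align*}
```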

How Step-Back Prompting Differs from Existing Techniques

  1. Chain-of-Thought (CoT) Prompting: CoT prompts the model to solve problems step-by-step in a linear sequence. While this helps break down complex problems, it can lead to mistakes when intermediate steps are faulty. Step-Back Prompting, on the other hand, first asks the model to abstract high-level concepts before addressing specifics, reducing errors caused by focusing too soon on details.

  2. Take a Deep Breath (TDB) Prompting: TDB prompts models to slow down and carefully process each step, but like CoT, it still works linearly. Step-Back Prompting is different because it prompts the model to first generalize the problem and think at a higher level before proceeding.

  3. Decomposition Prompting: Decomposition breaks a problem into smaller sub-problems. Step-Back Prompting differs by rephrasing the original question into a more general concept, which helps avoid narrow, low-level reasoning errors. It abstracts instead of decomposing.

How to Use Step-Back Prompting

Consider a question from physics:

Original Question


What happens to the pressure of an ideal gas when temperature increases by a factor of 2, and volume increases by a factor of 8?

Let's apply Step-Back Prompting to it by following these steps:

1. Abstraction Step

Instead of asking the model to answer the original question directly, we prompt it with a step-back question: a question derived from the original one at a higher level of abstraction.

Step-Back Question


What are the fundamental concepts or principles involved in this problem?

The step-back question is created manually by identifying the core concept behind the original question and prompting the model to retrieve the relevant facts or principles.

AI Output


The Ideal Gas Law: PV = nRT, where P is the pressure, V is the volume, n is the number of moles, R is the gas constant, and T is the temperature.
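
If you want to automate this step, the abstraction prompt can be issued programmatically. Below is a minimal Python sketch, not taken from the paper: `call_llm` is a hypothetical placeholder for whatever chat-completion client you use, and `STEP_BACK_TEMPLATE` is just one possible wording of the step-back question.

```python
# Step 1 of Step-Back Prompting: ask for the governing principles,
# not for the answer itself.
#
# `call_llm` is a hypothetical placeholder for whatever chat-completion
# API you use; replace its body with a real call before running.

def call_llm(prompt: str) -> str:
    raise NotImplementedError("Wire this up to your LLM provider.")

STEP_BACK_TEMPLATE = (
    "Here is a question:\n"
    "{question}\n\n"
    "What are the fundamental concepts or principles involved in this "
    "problem? State them; do not solve the problem yet."
)

def abstraction_step(question: str) -> str:
    """Return the high-level principles the model retrieves for `question`."""
    return call_llm(STEP_BACK_TEMPLATE.format(question=question))
```

Calling `abstraction_step` with the gas-law question should surface something like the Ideal Gas Law response shown in the AI output above.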

2. Reasoning Step

After retrieving the principle, guide the model to apply it to solve the original problem.

Reasoning Step


Use the Ideal Gas Law: PV = nRT to answer the question: What happens to the pressure of an ideal gas when temperature increases by a factor of 2, and volume increases by a factor of 8?

AI Output


Apply the Ideal Gas Law to find the new pressure.

If the temperature is increased by a factor of 2, then T becomes 2T. If the volume is increased by a factor of 8, then V becomes 8V.

Substituting these values into the Ideal Gas Law, we get: P(8V) = nR(2T)

Dividing both sides by 8V, we get: P = nR(2T) / 8V = nRT / 4V

Since the original pressure was nRT / V, the new pressure is one quarter of it: the pressure has decreased by a factor of 4.
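Continuing the sketch from the abstraction step, the reasoning step feeds the retrieved principles back to the model together with the original question. Again, `call_llm` is a hypothetical placeholder and `REASONING_TEMPLATE` is illustrative wording, not the exact prompt from the paper.

```python
from typing import Callable

# Step 2 of Step-Back Prompting: answer the original question using the
# principles retrieved in the abstraction step.
#
# `call_llm` is the same hypothetical placeholder as in the previous
# sketch; replace it with a real chat-completion call.

def call_llm(prompt: str) -> str:
    raise NotImplementedError("Wire this up to your LLM provider.")

REASONING_TEMPLATE = (
    "Principles:\n"
    "{principles}\n\n"
    "Using these principles, answer the following question step by step:\n"
    "{question}"
)

def reasoning_step(question: str, principles: str) -> str:
    """Apply the retrieved principles to the original question."""
    prompt = REASONING_TEMPLATE.format(principles=principles, question=question)
    return call_llm(prompt)

def step_back_prompting(question: str, abstraction_step: Callable[[str], str]) -> str:
    """Full two-stage pipeline: abstract first, then reason with the abstraction."""
    principles = abstraction_step(question)      # Step 1: retrieve high-level principles
    return reasoning_step(question, principles)  # Step 2: reason grounded in those principles
```

With the gas-law question, the first stage should surface the Ideal Gas Law and the second stage should produce a derivation like the AI output above.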

Results of Step-Back Prompting

Step-Back Prompting has demonstrated impressive results across various tasks, including STEM reasoning, knowledge-based question answering, and multi-hop reasoning tasks. Below are performance gains observed with Step-Back Prompting compared to baseline models and other prompting techniques:

| Task | Model | Baseline | Step-Back Improvement |
|------|-------|----------|-----------------------|
| MMLU (Physics) | PaLM-2L | 66.4% | +7% |
| MMLU (Chemistry) | PaLM-2L | 70.9% | +11% |
| TimeQA | PaLM-2L | 41.5% | +27% |
| MuSiQue | PaLM-2L | 35.5% | +7% |
| StrategyQA | PaLM-2L | 82.8% | +3.6% |

  • Substantial Performance Gains: Step-Back Prompting outperforms Chain-of-Thought and Take-a-Deep-Breath prompting methods, with improvements ranging from 7% to 27% depending on the task.
  • Error Reduction: Step-Back Prompting reduces reasoning errors by focusing first on high-level abstractions before diving into specifics.
  • Robustness: The method is model-agnostic, showing improvements on multiple LLMs (PaLM-2L, GPT-4, and Llama2-70B).

Conclusion

Step-Back Prompting is a powerful technique for improving the performance of Large Language Models on complex reasoning tasks: the model is first prompted for an abstraction before diving into detailed reasoning. The method has shown significant improvements across various domains, making it a valuable tool for enhancing LLM capabilities.

Valeriia Kuka

Valeriia Kuka, Head of Content at Learn Prompting, is passionate about making AI and ML accessible. Valeriia previously grew a 60K+ follower AI-focused social media account, earning reposts from Stanford NLP, Amazon Research, Hugging Face, and AI researchers. She has also worked with AI/ML newsletters and global communities with 100K+ members and authored clear and concise explainers and historical articles.

Footnotes

  1. Zheng, H. S., Mishra, S., Chen, X., Cheng, H.-T., Chi, E. H., Le, Q. V., & Zhou, D. (2024). Take a Step Back: Evoking Reasoning via Abstraction in Large Language Models. https://arxiv.org/abs/2310.06117
