🟦 Complexity-Based Prompting

Last updated on October 3, 2024 by Valeriia Kuka
Overview of Complexity-Based Prompting

Information and Links

| Technique | Institution | Date of Publication | Paper | Code |
| --- | --- | --- | --- | --- |
| Complexity-Based Prompting | University of Edinburgh, Allen Institute for AI | Jan 2023 | Complexity-Based Prompting for Multi-Step Reasoning | Code |

What is Complexity-Based Prompting?

Complexity-Based Prompting1 is a technique for improving multi-step reasoning in large language models (LLMs). The core idea is to use complex examples (those with more reasoning steps) as in-context demonstrations when prompting the model. The method improves performance on reasoning tasks such as math word problems and commonsense reasoning by acting on both the input prompt and the output selection.

How This Technique Differs from Existing Methods

Standard prompting techniques, such as Chain-of-Thought (CoT) prompting, have shown success in multi-step reasoning by asking LLMs to generate intermediate reasoning steps before providing a final answer. However, it wasn’t clear which types of examples make the best prompts. Complexity-Based Prompting shows that more complex examples, which require more reasoning steps, result in better model performance compared to simpler ones.

The method also applies to output selection: when multiple reasoning chains are generated, the model chooses the majority answer from the more complex reasoning chains. This process, called complexity-based consistency, further boosts accuracy.

  1. Chain-of-Thought (CoT) Prompting: CoT prompting improves multi-step reasoning by breaking down a problem into smaller steps, but complexity-based prompting goes further by selecting examples with more steps, improving overall accuracy.
  2. Self-Consistency: Self-consistency selects the most common answer from multiple reasoning chains, while complexity-based consistency focuses on the most complex and robust reasoning paths, leading to more accurate results.
  3. Annotation Efficiency: Unlike retrieval-based methods that require large-scale annotated datasets, complexity-based prompting can be applied with only a few-shot learning setup, making it more efficient.

Benefits

  • Richness of reasoning: Complex prompts provide a more detailed view of problem-solving, making the model capable of handling a wide range of reasoning tasks.
  • Avoiding shortcuts: Complex reasoning chains prevent the model from relying on shortcuts, which can lead to mistakes, especially in tasks requiring detailed logic.
  • Better generalization: Using complex prompts improves generalization, meaning the model performs better not only on difficult tasks but also on simpler tasks.

How Complexity-Based Prompting Works

  1. Selecting Complex Prompts: In complexity-based prompting, examples with longer reasoning chains (more steps) are chosen as input prompts. The intuition is that complex examples provide richer reasoning patterns, covering a wider range of reasoning skills. These prompts teach the model to handle both simple and complex reasoning cases.

  2. Complexity-Based Consistency: When generating reasoning chains for a new problem, the model produces multiple possible solutions. Instead of selecting the majority answer from all generated chains (as in self-consistency), complexity-based consistency focuses on selecting the majority answer from the most complex reasoning chains. This ensures that the most thorough reasoning processes influence the final decision.
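The consistency step above can be sketched in a few lines. This is a minimal illustration, not the authors' released code: it assumes each sampled reasoning chain is a plain string, measures complexity by counting non-empty lines (the paper counts newline-separated steps), and the function and variable names are hypothetical.

```python
from collections import Counter

def count_steps(chain: str) -> int:
    """Complexity = number of non-empty lines (reasoning steps) in a chain."""
    return sum(1 for line in chain.splitlines() if line.strip())

def complexity_vote(chains: list[str], answers: list[str], k: int = 3) -> str:
    """Majority-vote over the answers of the k most complex sampled chains."""
    # Pair each chain with its final answer, then rank by step count.
    ranked = sorted(zip(chains, answers), key=lambda p: count_steps(p[0]), reverse=True)
    top_answers = [ans for _, ans in ranked[:k]]
    return Counter(top_answers).most_common(1)[0][0]

# Toy example: five sampled chains; the three most complex agree on "8".
chains = [
    "Step 1: 2+2=4\nStep 2: 4+4=8\nStep 3: answer is 8",
    "Step 1: 2+2=4\nStep 2: 4+4=8\nStep 3: check\nStep 4: answer is 8",
    "The answer is 6",
    "Step 1: 2*4=8\nStep 2: answer is 8",
    "Step 1: guess\nanswer is 6",
]
answers = ["8", "8", "6", "8", "6"]
print(complexity_vote(chains, answers, k=3))  # → 8
```

Note the contrast with plain self-consistency, which would vote over all five answers; here only the three most step-rich chains get a vote.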

How to Use Complexity-Based Prompting

  1. Select Complex Prompts: When constructing a prompt, choose examples with a higher number of reasoning steps.
  2. Generate Multiple Outputs: Sample multiple reasoning paths from the model for each test question.
  3. Vote Among Complex Chains: Choose the final answer based on the majority from the most complex reasoning paths rather than all paths.
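Step 1 can be sketched the same way: given a pool of annotated candidate exemplars, keep the ones whose worked solutions have the most steps. The exemplar data and helper names below are hypothetical, and step counting by non-empty lines is one simple proxy for complexity.

```python
def count_steps(chain: str) -> int:
    """Complexity = number of non-empty lines in the worked solution."""
    return sum(1 for line in chain.splitlines() if line.strip())

def select_complex_prompts(exemplars: list[dict], n: int = 8) -> list[dict]:
    """Pick the n exemplars whose reasoning chains have the most steps."""
    return sorted(exemplars, key=lambda ex: count_steps(ex["chain"]), reverse=True)[:n]

# Hypothetical annotated pool: each exemplar has a question and a worked chain.
pool = [
    {"question": "Q1", "chain": "Step 1\nStep 2"},
    {"question": "Q2", "chain": "Step 1\nStep 2\nStep 3\nStep 4"},
    {"question": "Q3", "chain": "Step 1\nStep 2\nStep 3"},
]
prompt_examples = select_complex_prompts(pool, n=2)
print([ex["question"] for ex in prompt_examples])  # → ['Q2', 'Q3']
```

The selected exemplars would then be concatenated into the few-shot prompt before the test question, with steps 2 and 3 handled at inference time.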
Tip

The code for Complexity-Based Prompting is open-sourced and available for further research and implementation on GitHub.

Results of Complexity-Based Prompting

This method was tested on multiple benchmarks and significantly outperformed existing prompting techniques like CoT and self-consistency, setting new state-of-the-art results.

| Task | Previous SOTA | Complex Prompt (Codex) | + Voting Complex |
| --- | --- | --- | --- |
| GSM8K (Math) | 74.4% | 82.6% | 82.9% |
| MultiArith | 99.3% | 99.7% | 99.8% |
| MathQA | 37.4% | 47.3% | 60.0% |
| Date Understanding | 79.2% | 86.8% | N/A |
| Penguins | 78.1% | 80.8% | N/A |
  • Substantial Improvement: Complexity-based prompting improves performance by an average of +5.3% and up to +18% on benchmarks like MathQA.
  • Efficient: Achieves better results with only a handful of annotated exemplars, with no fine-tuning required.
  • Robust: Works consistently across various tasks, including math, temporal reasoning, and referential tasks.

Conclusion

Complexity-Based Prompting offers a straightforward yet highly effective method for improving multi-step reasoning in large language models. By selecting prompts and outputs based on the complexity of reasoning chains, this method significantly enhances accuracy across multiple benchmarks.

Footnotes

  1. Fu, Y., Peng, H., Sabharwal, A., Clark, P., & Khot, T. (2023). Complexity-Based Prompting for Multi-step Reasoning. In The Eleventh International Conference on Learning Representations. https://openreview.net/forum?id=yf1icZHC-l9

Copyright © 2024 Learn Prompting.