Complexity-Based Prompting


Last updated on October 3, 2024

Overview of Complexity-Based Prompting

Information and Links

| Technique | Institution | Date of Publication | Paper | Code |
|---|---|---|---|---|
| Complexity-Based Prompting | University of Edinburgh, Allen Institute for AI | Jan 2023 | Complexity-Based Prompting for Multi-Step Reasoning | FranxYao/Complexity-Based-Prompting |

What is Complexity-Based Prompting?

Complexity-Based Prompting is a technique for improving multi-step reasoning in Large Language Models (LLMs). The core idea is to use complex reasoning chains (those with more steps) as in-context examples when prompting the model. The method improves performance on reasoning tasks such as math word problems and commonsense reasoning by applying a complexity criterion at two points: which examples go into the input prompt, and which generated output is selected as the final answer.

How This Technique Differs from Existing Methods

Standard prompting techniques, such as Chain-of-Thought (CoT) Prompting, have shown success in multi-step reasoning by asking LLMs to generate intermediate reasoning steps before providing a final answer. However, it wasn’t clear which types of examples make the best prompts. Complexity-Based Prompting shows that more complex examples, which require more reasoning steps, result in better model performance compared to simpler ones.

The method also applies to output selection: when multiple reasoning chains are sampled, the final answer is taken as the majority answer among the most complex generated chains. This selection rule, called complexity-based consistency, further boosts accuracy.

  1. Chain-of-Thought (CoT) Prompting: CoT prompting improves multi-step reasoning by breaking down a problem into smaller steps, but complexity-based prompting goes further by selecting examples with more steps, improving overall accuracy.
  2. Self-Consistency: Self-Consistency selects the most common answer from multiple reasoning chains, while complexity-based consistency focuses on the most complex and robust reasoning paths, leading to more accurate results.
  3. Annotation Efficiency: Unlike retrieval-based methods that require large-scale annotated datasets, complexity-based prompting can be applied with only a Few-Shot learning setup, making it more efficient.

Benefits

  • Richness of reasoning: Complex prompts provide a more detailed view of problem-solving, making the model capable of handling a wide range of reasoning tasks.
  • Avoiding shortcuts: Complex reasoning chains prevent the model from relying on shortcuts, which can lead to mistakes, especially in tasks requiring detailed logic.
  • Better generalization: Using complex prompts improves generalization, meaning the model performs better not only on difficult tasks but also on simpler tasks.

How Complexity-Based Prompting Works

  1. Selecting Complex Prompts: In complexity-based prompting, examples with longer reasoning chains (more steps) are chosen as input prompts. The intuition is that complex examples provide richer reasoning patterns, covering a wider range of reasoning skills. These prompts teach the model to handle both simple and complex reasoning cases.

  2. Complexity-Based Consistency: When generating reasoning chains for a new problem, the model produces multiple possible solutions. Instead of selecting the majority answer from all generated chains (as in self-consistency), complexity-based consistency focuses on selecting the majority answer from the most complex reasoning chains. This ensures that the most thorough reasoning processes influence the final decision.
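
To make the selection rule concrete, here is a minimal sketch of complexity-based consistency in Python. It assumes each sampled chain is plain text with one reasoning step per line, so complexity is approximated by counting non-empty lines; the function names are illustrative, not taken from the official repository.

```python
from collections import Counter

def count_steps(chain: str) -> int:
    """Approximate a chain's complexity by its number of
    non-empty lines (one reasoning step per line)."""
    return sum(1 for line in chain.splitlines() if line.strip())

def complexity_based_vote(chains: list[str], answers: list[str], k: int = 5) -> str:
    """Majority vote restricted to the k most complex sampled chains.

    chains  -- reasoning chains sampled for one test question
    answers -- the final answer extracted from each chain
    k       -- how many of the most complex chains get a vote
    """
    # Rank chain indices by step count, most complex first.
    ranked = sorted(range(len(chains)),
                    key=lambda i: count_steps(chains[i]),
                    reverse=True)
    # Take the majority answer among the top-k complex chains only.
    votes = Counter(answers[i] for i in ranked[:k])
    return votes.most_common(1)[0][0]
```

Setting k equal to the number of sampled chains recovers plain self-consistency, so k controls how strongly the vote is biased toward the most thorough reasoning paths.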

How to Use Complexity-Based Prompting

  1. Select Complex Prompts: When constructing a prompt, choose examples with a higher number of reasoning steps.
  2. Generate Multiple Outputs: Sample multiple reasoning paths from the model for each test question.
  3. Vote Among Complex Chains: Choose the final answer based on the majority from the most complex reasoning paths rather than all paths.
Tip

The code for Complexity-Based Prompting is open-sourced and available for further research and implementation at FranxYao/Complexity-Based-Prompting.
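
Putting the three steps together, the sketch below shows one way the full pipeline could look, reusing count_steps and complexity_based_vote from the earlier sketch. Note that generate_chain is a hypothetical stand-in for your own LLM sampling call, and the answer-parsing convention is an assumption, not the repository's exact implementation.

```python
import re

def extract_answer(chain: str) -> str:
    """Parse the final answer from a chain that ends with
    'The answer is X.' (a common CoT convention; adapt as needed)."""
    match = re.search(r"answer is\s*(.+?)\.?\s*$", chain, re.IGNORECASE)
    return match.group(1).strip() if match else chain.strip().splitlines()[-1]

def answer_question(question: str, exemplars: list[str],
                    n_exemplars: int = 8, n_samples: int = 50, k: int = 5) -> str:
    # Step 1: build a few-shot prompt from the exemplars with the
    # most reasoning steps.
    chosen = sorted(exemplars, key=count_steps, reverse=True)[:n_exemplars]
    prompt = "\n\n".join(chosen) + f"\n\nQuestion: {question}\nAnswer:"
    # Step 2: sample multiple reasoning paths with temperature > 0.
    # generate_chain is a hypothetical LLM call, not a real API.
    chains = [generate_chain(prompt, temperature=0.7) for _ in range(n_samples)]
    answers = [extract_answer(c) for c in chains]
    # Step 3: vote only among the k most complex chains.
    return complexity_based_vote(chains, answers, k=k)
```

The sample counts and cutoff k here are illustrative defaults; in practice they are tunable hyperparameters.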

Results of Complexity-Based Prompting

This method was tested on multiple benchmarks and significantly outperformed existing prompting techniques like CoT and Self-Consistency, setting new state-of-the-art results.

| Task | Previous SOTA | Complex Prompt (Codex) | + Voting Complex |
|---|---|---|---|
| GSM8K (Math) | 74.4% | 82.6% | 82.9% |
| MultiArith | 99.3% | 99.7% | 99.8% |
| MathQA | 37.4% | 47.3% | 60.0% |
| Date Understanding | 79.2% | 86.8% | N/A |
| Penguins | 78.1% | 80.8% | N/A |

  • Substantial Improvement: Complexity-based prompting improves performance by an average of +5.3% and up to +18% on benchmarks like MathQA.
  • Efficient: Achieves these gains from a small set of few-shot examples, with no fine-tuning or large-scale annotation required.
  • Robust: Works consistently across various tasks, including math, temporal reasoning, and referential tasks.

Conclusion

Complexity-Based Prompting offers a straightforward yet highly effective method for improving multi-step reasoning in large language models. By selecting prompts and outputs based on the complexity of reasoning chains, this method significantly enhances accuracy across multiple benchmarks.

Valeriia Kuka

Valeriia Kuka, Head of Content at Learn Prompting, is passionate about making AI and ML accessible. Valeriia previously grew a 60K+ follower AI-focused social media account, earning reposts from Stanford NLP, Amazon Research, Hugging Face, and AI researchers. She has also worked with AI/ML newsletters and global communities with 100K+ members and authored clear and concise explainers and historical articles.

Footnotes

  1. Fu, Y., Peng, H., Sabharwal, A., Clark, P., & Khot, T. (2023). Complexity-Based Prompting for Multi-step Reasoning. In The Eleventh International Conference on Learning Representations. https://openreview.net/forum?id=yf1icZHC-l9
