Last updated on October 3, 2024
Technique | Institution | Date of Publication | Paper | Code |
---|---|---|---|---|
Complexity-Based Prompting | University of Edinburgh, Allen Institute for AI | Jan 2023 | Complexity-Based Prompting for Multi-Step Reasoning | FranxYao/Complexity-Based-Prompting |
Complexity-Based Prompting is a technique for improving multi-step reasoning in Large Language Models (LLMs). The core idea is to use complex reasoning chains (those with more steps) as in-context examples when prompting the model. The method improves performance on reasoning tasks such as math word problems and commonsense reasoning by acting on both input prompt selection and output answer selection.
Standard prompting techniques, such as Chain-of-Thought (CoT) Prompting, have shown success in multi-step reasoning by asking LLMs to generate intermediate reasoning steps before providing a final answer. However, it wasn't clear which types of examples make the best prompts. Complexity-Based Prompting shows that more complex examples, which require more reasoning steps, result in better model performance than simpler ones.
The method also applies to output selection: when multiple reasoning chains are generated, the model chooses the majority answer from the more complex reasoning chains. This process, called complexity-based consistency, further boosts accuracy.
Selecting Complex Prompts: In complexity-based prompting, examples with longer reasoning chains (more steps) are chosen as input prompts. The intuition is that complex examples provide richer reasoning patterns, covering a wider range of reasoning skills. These prompts teach the model to handle both simple and complex reasoning cases.
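As a concrete illustration, here is a minimal Python sketch of this selection step. It assumes each annotated example is a dict with `question`, `chain`, and `answer` fields, and it uses the number of newline-separated steps as the complexity measure (the proxy used in the paper); the function names are illustrative, not taken from the released code.

```python
def count_steps(chain: str) -> int:
    # Complexity proxy from the paper: the number of steps in a reasoning
    # chain, counted here as its non-empty, newline-separated lines.
    return sum(1 for line in chain.splitlines() if line.strip())

def select_complex_prompts(examples: list[dict], k: int = 8) -> list[dict]:
    # Keep the k annotated examples whose reasoning chains have the most
    # steps; these become the in-context demonstrations in the prompt.
    return sorted(examples, key=lambda ex: count_steps(ex["chain"]), reverse=True)[:k]
```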
Complexity-Based Consistency: When generating reasoning chains for a new problem, the model produces multiple possible solutions. Instead of selecting the majority answer from all generated chains (as in self-consistency), complexity-based consistency focuses on selecting the majority answer from the most complex reasoning chains. This ensures that the most thorough reasoning processes influence the final decision.
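A minimal sketch of this voting step, reusing `count_steps` from the snippet above. It assumes the reasoning chains and their parsed final answers have already been sampled from the model (e.g., via temperature sampling); `complexity_based_vote` and the `top_k=5` default are illustrative choices, not the released implementation.

```python
from collections import Counter

def complexity_based_vote(chains: list[str], answers: list[str], top_k: int = 5) -> str:
    # Rank sampled chains by step count, keep the top_k most complex ones,
    # then take a majority vote over only their final answers.
    ranked = sorted(zip(chains, answers), key=lambda ca: count_steps(ca[0]), reverse=True)
    top_answers = [answer for _, answer in ranked[:top_k]]
    return Counter(top_answers).most_common(1)[0][0]
```

Setting `top_k` equal to the number of sampled chains recovers plain self-consistency, which makes the complexity filter easy to ablate.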
The code for Complexity-Based Prompting is open-sourced and available for further research and implementation at FranxYao/Complexity-Based-Prompting.
This method was tested on multiple benchmarks and significantly outperformed existing prompting techniques like CoT and Self-Consistency, setting new state-of-the-art results.
Task | Previous SOTA | Complex Prompting (Codex) | + Complexity-Based Voting |
---|---|---|---|
GSM8K (Math) | 74.4% | 82.6% | 82.9% |
MultiArith | 99.3% | 99.7% | 99.8% |
MathQA | 37.4% | 47.3% | 60.0% |
Date Understanding | 79.2% | 86.8% | N/A |
Penguins | 78.1% | 80.8% | N/A |
Complexity-Based Prompting offers a straightforward yet highly effective method for improving multi-step reasoning in large language models. By selecting prompts and outputs based on the complexity of reasoning chains, this method significantly enhances accuracy across multiple benchmarks.
Fu, Y., Peng, H., Sabharwal, A., Clark, P., & Khot, T. (2023). Complexity-Based Prompting for Multi-step Reasoning. In The Eleventh International Conference on Learning Representations (ICLR). https://openreview.net/forum?id=yf1icZHC-l9