ECHO Enhances CoT Prompting: Self-Harmonized Chain of Thought (ECHO) improves traditional Chain-of-Thought (CoT) prompting by refining various reasoning paths into a cohesive approach, utilizing a dynamic self-harmonization process.
Process Overview: ECHO clusters similar questions, generates rationales using Zero-shot-CoT prompts, and iteratively unifies these demonstrations to create a consistent reasoning pattern, leading to improved accuracy in tasks like arithmetic and commonsense reasoning.
Performance Improvements: ECHO outperforms other prompting techniques, achieving higher accuracy in arithmetic, commonsense, and symbolic reasoning tasks, demonstrating its effectiveness in generating coherent and accurate responses.
How Does ECHO Differ From Chain-of-Thought (CoT) Prompting?
Traditional Chain-of-Thought (CoT) prompting allows LLMs to break down complex problems into intermediate steps, either by using simple prompts like "Let's think step by step" (Zero-Shot-CoT) or with human-crafted examples (Few-Shot-CoT). ECHO builds on this by improving how LLMs handle diverse solution paths, using an iterative process to harmonize these variations into a consistent and more accurate reasoning approach.
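To make the distinction concrete, here is a minimal sketch of the two baseline prompt styles. The `Q:`/`A:` formatting and function names are illustrative choices, not prescribed by the ECHO paper:

```python
def zero_shot_cot_prompt(question: str) -> str:
    """Zero-Shot-CoT: append the reasoning trigger phrase to the question."""
    return f"Q: {question}\nA: Let's think step by step."

def few_shot_cot_prompt(question: str, demos: list[tuple[str, str]]) -> str:
    """Few-Shot-CoT: prepend (question, worked rationale) demonstrations,
    then ask the target question with the same trigger."""
    blocks = [f"Q: {q}\nA: {rationale}" for q, rationale in demos]
    blocks.append(f"Q: {question}\nA: Let's think step by step.")
    return "\n\n".join(blocks)
```

In Few-Shot-CoT, the quality of the hand-written rationales dominates performance; ECHO's goal is to produce those demonstrations automatically.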
ECHO improves on traditional CoT methods by addressing two key limitations:
Diversity Issues in Auto-CoT: Auto-CoT clusters similar questions and generates reasoning paths, but sometimes these demonstrations can mislead the model if they are too similar or incorrect. ECHO mitigates this by refining multiple reasoning paths into a balanced and harmonized pattern.
Manual Effort in Few-Shot-CoT: Few-Shot-CoT requires human-crafted examples, which can be time-consuming. ECHO automates this process, reducing the reliance on manually created examples.
How it Works: Step-by-Step
ECHO's key innovation is its dynamic, self-harmonization process, where demonstrations are continuously refined through multiple iterations. The method involves:
Question Clustering: Questions are clustered by similarity using a method like Sentence-BERT, which groups similar questions together.
Demonstration Sampling: For each cluster, a representative question is chosen, and a reasoning path is generated using Zero-Shot-CoT.
Demonstration Unification: Rationales for each demonstration are iteratively refined using the other demonstrations as examples. This process continues over multiple iterations until a unified reasoning pattern is established.
This harmonization reduces errors and aligns different reasoning paths into a coherent framework.
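The three steps above can be sketched end to end. This is a simplified illustration: token-overlap clustering stands in for the paper's Sentence-BERT embeddings, and `call_llm` is a hypothetical hook for whatever model you use:

```python
from typing import Callable

def cluster_questions(questions: list[str], threshold: float = 0.5) -> list[list[str]]:
    """Greedy clustering by token overlap -- a stdlib stand-in for the
    paper's Sentence-BERT embeddings plus k-means."""
    clusters: list[list[str]] = []
    for q in questions:
        tokens = set(q.lower().split())
        for cluster in clusters:
            rep = set(cluster[0].lower().split())
            if len(tokens & rep) / len(tokens | rep) >= threshold:
                cluster.append(q)
                break
        else:
            clusters.append([q])
    return clusters

def echo_demonstrations(questions: list[str],
                        call_llm: Callable[[str], str],
                        iterations: int = 2) -> list[tuple[str, str]]:
    """Build ECHO demonstrations: pick one representative per cluster,
    generate a Zero-Shot-CoT rationale, then iteratively regenerate each
    rationale using the *other* demonstrations as in-context examples."""
    reps = [c[0] for c in cluster_questions(questions)]
    demos = [(q, call_llm(f"Q: {q}\nA: Let's think step by step.")) for q in reps]
    for _ in range(iterations):
        for i, (q, _) in enumerate(demos):
            others = "\n\n".join(f"Q: {oq}\nA: {r}"
                                 for j, (oq, r) in enumerate(demos) if j != i)
            demos[i] = (q, call_llm(f"{others}\n\nQ: {q}\nA: Let's think step by step."))
    return demos
```

Each pass of the inner loop rewrites one rationale conditioned on all the others, which is what gradually pulls the demonstrations toward a shared reasoning style.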
How to Use ECHO
ECHO can be applied to a wide range of reasoning tasks, including arithmetic, commonsense, and symbolic reasoning. Here's a simple template for how you might use it in an AI system:

1. Cluster questions based on similarity.
2. Generate a rationale for each representative question using a Zero-Shot-CoT prompt:

   Prompt:
   [Question from step 1]
   Let's think step by step.

3. Unify the demonstrations iteratively to optimize reasoning.
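Once the demonstrations have been unified, answering a new question is ordinary few-shot prompting with the harmonized examples. A minimal sketch, with illustrative `Q:`/`A:` formatting:

```python
def echo_inference_prompt(demos: list[tuple[str, str]], question: str) -> str:
    """Prepend the unified demonstrations to the target question, ending
    with the same Zero-Shot-CoT trigger used to generate the rationales."""
    parts = [f"Q: {q}\nA: {rationale}" for q, rationale in demos]
    parts.append(f"Q: {question}\nA: Let's think step by step.")
    return "\n\n".join(parts)
```

The resulting prompt is sent to the LLM as-is; because the demonstrations share one reasoning pattern, the model's answer tends to follow that pattern too.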
ECHO was tested on three major reasoning domains: arithmetic, commonsense, and symbolic reasoning. Below are the performance improvements ECHO achieved compared to other methods:
| Method        | Arithmetic | Commonsense | Symbolic | Overall |
|---------------|------------|-------------|----------|---------|
| Zero-Shot-CoT | 77.3%      | 61.4%       | 63.1%    | 71.3%   |
| Few-Shot-CoT  | 82.1%      | 69.7%       | 88.5%    | 80.9%   |
| Auto-CoT      | 80.8%      | 65.7%       | 87.8%    | 79.2%   |
| ECHO          | 83.1%      | 70.5%       | 90.3%    | 82.0%   |
ECHO demonstrates the best overall performance, especially in symbolic reasoning, where it outperforms all other methods. Its harmonized approach makes it more effective in generating consistent and correct reasoning across various problem types.
Sander Schulhoff
Sander Schulhoff is the Founder of Learn Prompting and an ML Researcher at the University of Maryland. He created the first open-source Prompt Engineering guide, reaching 3M+ people and teaching them to use tools like ChatGPT. Sander also led a team behind Prompt Report, the most comprehensive study of prompting ever done, co-authored with researchers from the University of Maryland, OpenAI, Microsoft, Google, Princeton, Stanford, and other leading institutions. This 76-page survey analyzed 1,500+ academic papers and covered 200+ prompting techniques.