
Self-Harmonized Chain of Thought (ECHO)

Last updated on October 1, 2024 by Valeriia Kuka

What is Self-Harmonized Chain of Thought (ECHO)?

Self-Harmonized Chain of Thought (ECHO)[^1] is an advanced technique that enhances Chain-of-Thought (CoT) prompting in large language models (LLMs) by refining multiple reasoning paths into a unified pattern.

How Does It Differ From Chain-of-Thought (CoT) Prompting?

Traditional Chain of Thought (CoT) prompting allows LLMs to break down complex problems into intermediate steps, either by using simple prompts like “Let’s think step by step” (Zero-shot-CoT) or with human-crafted examples (Few-shot-CoT). ECHO builds on this by improving how LLMs handle diverse solution paths, using an iterative process to harmonize these variations into a consistent and more accurate reasoning approach.
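To make the difference concrete, here is a minimal sketch of the two baseline prompt styles ECHO builds on; the questions and worked example are hypothetical:

```python
# Hypothetical example question; any reasoning task works the same way.
question = "A shop sells pens in packs of 12. How many packs are needed for 50 pens?"

# Zero-shot-CoT: a single trigger phrase elicits step-by-step reasoning.
zero_shot_cot = f"Q: {question}\nA: Let's think step by step."

# Few-shot-CoT: human-crafted worked examples precede the new question.
few_shot_cot = (
    "Q: A box holds 6 eggs. How many boxes are needed for 20 eggs?\n"
    "A: 20 / 6 = 3.33, so we round up to 4 boxes. The answer is 4.\n\n"
    f"Q: {question}\nA:"
)
```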

ECHO improves on traditional CoT methods by addressing two key limitations:

  • Diversity Issues in Auto-CoT: Auto-CoT clusters similar questions and generates reasoning paths, but sometimes these demonstrations can mislead the model if they are too similar or incorrect. ECHO mitigates this by refining multiple reasoning paths into a balanced and harmonized pattern.
  • Manual Effort in Few-shot-CoT: Few-shot-CoT requires human-crafted examples, which can be time-consuming. ECHO automates this process, reducing the reliance on manually created examples.

How it Works: Step-by-Step

ECHO’s key innovation is its dynamic, self-harmonization process, where demonstrations are continuously refined through multiple iterations. The method involves:

  • Question Clustering: Questions are clustered by similarity using a method like Sentence-BERT, which groups similar questions together.
  • Demonstration Sampling: For each cluster, a representative question is chosen, and a reasoning path is generated using Zero-shot-CoT.
  • Demonstration Unification: Rationales for each demonstration are iteratively refined using the other demonstrations as examples. This process continues over multiple iterations until a unified reasoning pattern is established.

This harmonization reduces errors and aligns different reasoning paths into a coherent framework.
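The sketch below shows how these three steps could fit together. It is a minimal illustration, not the authors' implementation: it assumes a Sentence-BERT encoder from `sentence-transformers`, scikit-learn's `KMeans` for clustering, and a placeholder `llm(prompt)` function standing in for whatever model API you call; the cluster count and iteration count are arbitrary defaults.

```python
import numpy as np
from sentence_transformers import SentenceTransformer
from sklearn.cluster import KMeans

def llm(prompt: str) -> str:
    """Placeholder for your LLM call (API request, local model, etc.)."""
    raise NotImplementedError

def build_echo_demonstrations(questions, k=8, iterations=3):
    # Step 1 -- Question clustering: embed the questions and group them
    # into k clusters of similar questions.
    encoder = SentenceTransformer("all-MiniLM-L6-v2")
    embeddings = encoder.encode(questions)
    labels = KMeans(n_clusters=k, n_init=10).fit_predict(embeddings)

    # Step 2 -- Demonstration sampling: pick one representative question per
    # cluster and generate an initial rationale with Zero-shot-CoT.
    # (Taking the first member is a simplification; a real implementation
    # would pick the question closest to the cluster center.)
    demos = []
    for c in range(k):
        idx = int(np.where(labels == c)[0][0])
        q = questions[idx]
        rationale = llm(f"Q: {q}\nA: Let's think step by step.")
        demos.append({"question": q, "rationale": rationale})

    # Step 3 -- Demonstration unification: regenerate each rationale while
    # conditioning on all the *other* demonstrations, so the reasoning
    # styles gradually converge to a shared, harmonized pattern.
    for _ in range(iterations):
        for i, demo in enumerate(demos):
            others = "\n\n".join(
                f"Q: {d['question']}\nA: {d['rationale']}"
                for j, d in enumerate(demos) if j != i
            )
            prompt = f"{others}\n\nQ: {demo['question']}\nA: Let's think step by step."
            demos[i]["rationale"] = llm(prompt)
    return demos
```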

How to Use ECHO

ECHO can be applied to a wide range of reasoning tasks, including arithmetic, commonsense, and symbolic reasoning. Here’s a simple template for how you might use it in an AI system:

  1. Clustering questions based on similarity.

  2. Generating rationales for each question using a Zero-shot-CoT prompt like the one below:

```
[Question from step 1]

Let's think step by step.
```

  3. Unifying the demonstrations iteratively to optimize reasoning.
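At inference time, the unified demonstrations are simply prepended to the new question. Here is a minimal sketch, reusing the hypothetical `demos` list and `llm` function from the pipeline sketch above:

```python
def answer_with_echo(demos, question: str) -> str:
    # Prepend the harmonized demonstrations, then ask the new question
    # with the same Zero-shot-CoT trigger.
    context = "\n\n".join(f"Q: {d['question']}\nA: {d['rationale']}" for d in demos)
    return llm(f"{context}\n\nQ: {question}\nA: Let's think step by step.")
```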
Tip: Open-source code for ECHO is available; see the paper[^1] for the link.

Results of the ECHO Technique

ECHO was tested on three major reasoning domains: arithmetic, commonsense, and symbolic reasoning. Below is the average accuracy ECHO achieved in each domain compared to other methods:

| Method        | Arithmetic | Commonsense | Symbolic | Overall |
| ------------- | ---------- | ----------- | -------- | ------- |
| Zero-Shot-CoT | 77.3%      | 61.4%       | 63.1%    | 71.3%   |
| Few-Shot-CoT  | 82.1%      | 69.7%       | 88.5%    | 80.9%   |
| Auto-CoT      | 80.8%      | 65.7%       | 87.8%    | 79.2%   |
| ECHO          | 83.1%      | 70.5%       | 90.3%    | 82.0%   |

ECHO demonstrates the best overall performance, especially in symbolic reasoning, where it outperforms all other methods. Its harmonized approach makes it more effective in generating consistent and correct reasoning across various problem types.

Footnotes

[^1]: Jin, Z., & Lu, W. (2024). Self-Harmonized Chain of Thought. https://arxiv.org/abs/2409.04057
