
Automatic Chain of Thought (Auto-CoT)


Last updated on October 3, 2024

Overview of Automatic Chain-of-Thought (Auto-CoT) Prompting

Information and Links

| Technique | Institution | Date of Publication | Paper | Code |
|-----------|-------------|---------------------|-------|------|
| Automatic Chain-of-Thought (Auto-CoT) Prompting | Amazon Science | Oct 2022 | Automatic Chain-of-Thought Prompting in Large Language Models | amazon-science/auto-cot |

What is Auto-CoT?

Automatic Chain-of-Thought (Auto-CoT) is a prompting technique designed to enhance the reasoning capabilities of Large Language Models (LLMs). It does this by automatically generating intermediate reasoning steps, a key element of Chain-of-Thought (CoT) prompting.

CoT involves manually creating task-specific demonstrations, where each demonstration includes a question, intermediate reasoning steps, and the final answer. While CoT generally outperforms standard prompting, it is time-consuming and requires hand-crafted examples for each task. Auto-CoT addresses this by leveraging LLMs to generate reasoning demonstrations automatically.

Note

Don't confuse Auto-CoT with Zero-Shot CoT. While Auto-CoT uses a procedure to generate reasoning chains for CoT prompting, Zero-Shot CoT provides no additional demonstrations and relies solely on the "Let's think step by step" prompt.
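To make the distinction concrete, here is a minimal sketch of the two prompt styles. The questions and the demonstration text are hypothetical examples, not taken from the paper:

```python
# Zero-Shot CoT: no demonstrations, only the reasoning trigger phrase.
question = "If a train travels 60 miles in 1.5 hours, what is its average speed?"
zero_shot_cot_prompt = f"Q: {question}\nA: Let's think step by step."

# Auto-CoT: the same query is preceded by automatically generated
# demonstrations, each with a question, reasoning chain, and final answer.
demonstrations = [
    "Q: A shop sells 3 apples for $1.20. How much do 9 apples cost?\n"
    "A: Let's think step by step. 9 apples is 3 times 3 apples, "
    "and 3 * $1.20 = $3.60. The answer is $3.60."
]
auto_cot_prompt = (
    "\n\n".join(demonstrations)
    + f"\n\nQ: {question}\nA: Let's think step by step."
)
```

Both prompts end with the same trigger phrase; the difference is that Auto-CoT supplies worked demonstrations before the query.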

How Auto-CoT Differs from Existing Techniques

  1. Auto-CoT vs. CoT: Unlike CoT, which relies on manually created demonstrations, Auto-CoT uses LLMs to generate them automatically, eliminating the need for human effort in designing task-specific examples.

  2. Auto-CoT vs. Zero-Shot CoT: Zero-Shot CoT encourages step-by-step reasoning but lacks the structure and diversity of curated demonstrations, which can lead to reasoning errors. Auto-CoT addresses this by automatically generating diverse, structured demonstrations, reducing the likelihood of such mistakes.

How Auto-CoT Works

Auto-CoT generates reasoning chains for CoT demonstrations in two key stages:

  1. Question Clustering: The system clusters the questions into groups using Sentence-BERT embeddings. Each group contains semantically similar questions.
  2. Demonstration Sampling: It selects a representative question from each cluster and generates a reasoning chain using Zero-Shot CoT. This creates diverse and effective demonstrations without manual intervention.

These automatically generated demonstrations are used for in-context learning, where the LLM uses the reasoning chains to solve new tasks step by step.

How to Use Auto-CoT

Auto-CoT involves generating CoT demonstrations without manual effort. Here’s how you can implement it:

Step 1. Clustering Questions

Auto-CoT uses Sentence-BERT to embed and cluster questions based on semantic similarity. The goal is to ensure the selected demonstrations cover a diverse range of reasoning patterns.
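The clustering stage can be sketched as follows. This is a simplified, self-contained toy: a bag-of-words counter stands in for Sentence-BERT embeddings (real use would call the `sentence-transformers` package), and a greedy similarity pass stands in for the k-means clustering used in the paper:

```python
import math
from collections import Counter

def embed(text):
    """Toy bag-of-words embedding: a stand-in for Sentence-BERT,
    which Auto-CoT actually uses."""
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[w] * b[w] for w in a)
    norm_a = math.sqrt(sum(v * v for v in a.values()))
    norm_b = math.sqrt(sum(v * v for v in b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

def cluster_questions(questions, threshold=0.4):
    """Greedy similarity grouping: a simplified stand-in for k-means
    over Sentence-BERT embeddings. The threshold is illustrative."""
    clusters = []
    for q in questions:
        v = embed(q)
        for cluster in clusters:
            if cosine(v, embed(cluster[0])) >= threshold:
                cluster.append(q)
                break
        else:
            clusters.append([q])
    return clusters

questions = [
    "What is 12 plus 30?",
    "What is 7 plus 5?",
    "Is a whale a mammal?",
    "Is a penguin a bird?",
]
# Arithmetic questions group together; commonsense questions form a second cluster.
clusters = cluster_questions(questions)
```

The point of this stage is coverage: by drawing one demonstration per cluster, Auto-CoT samples from distinct regions of the question space rather than many near-duplicates.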

Step 2. Generating Reasoning Chains

Once clusters are formed, Auto-CoT selects representative questions from each cluster and uses Zero-Shot CoT to generate reasoning chains for each. These chains are then used as demonstrations for the LLM to solve new tasks.

This process enables the LLM to reason step by step without human-designed demonstrations.
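The sampling stage can then be sketched like this. The `generate` function is a hypothetical stand-in for an LLM call (swap in your model or API client); it returns a canned chain here so the example runs offline. Picking the first question per cluster is a simplification of the paper's selection heuristics:

```python
def generate(prompt):
    """Hypothetical LLM call; replace with a real model or API client.
    Returns a canned reasoning chain so the sketch runs offline."""
    return "Let's think step by step. ... The answer is 42."

def build_demonstrations(clusters):
    demos = []
    for cluster in clusters:
        # The paper prefers short, simple representative questions;
        # taking the first question per cluster keeps this sketch minimal.
        representative = cluster[0]
        chain = generate(f"Q: {representative}\nA: Let's think step by step.")
        demos.append(f"Q: {representative}\nA: {chain}")
    return demos

def make_auto_cot_prompt(clusters, new_question):
    """Assemble the final few-shot prompt from the generated demonstrations."""
    demos = build_demonstrations(clusters)
    return "\n\n".join(demos) + f"\n\nQ: {new_question}\nA: Let's think step by step."

clusters = [["What is 7 plus 5?"], ["Is a whale a mammal?"]]
prompt = make_auto_cot_prompt(clusters, "What is 9 times 6?")
```

The assembled prompt is then sent to the LLM, which answers the new question by imitating the demonstrated reasoning chains.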

Tip

The code for Auto-CoT is open-sourced by Amazon Science and available for further research and implementation at amazon-science/auto-cot.

Results of Auto-CoT

Auto-CoT was tested on ten public benchmark datasets across arithmetic, commonsense, and symbolic reasoning tasks. The results demonstrate that Auto-CoT matches or exceeds the performance of manually crafted CoT demonstrations.

| Task | Zero-Shot CoT | CoT | Auto-CoT |
|------|---------------|-----|----------|
| Arithmetic Reasoning | 78.7% | 91.7% | 92.0% |
| Commonsense Reasoning | 64.6% | 73.5% | 74.4% |
| Symbolic Reasoning | 57.6% | 59.0% | 59.7% |

Conclusion

Auto-CoT is a powerful and scalable way to generate CoT demonstrations automatically without manual effort. It consistently matches or surpasses Chain-of-Thought, making it a highly effective approach for improving LLM reasoning capabilities across diverse tasks. The code is open-sourced and available for further research and implementation.

Valeriia Kuka

Valeriia Kuka, Head of Content at Learn Prompting, is passionate about making AI and ML accessible. Valeriia previously grew a 60K+ follower AI-focused social media account, earning reposts from Stanford NLP, Amazon Research, Hugging Face, and AI researchers. She has also worked with AI/ML newsletters and global communities with 100K+ members and authored clear and concise explainers and historical articles.

Footnotes

  1. Zhang, Z., Zhang, A., Li, M., & Smola, A. (2023). Automatic Chain of Thought Prompting in Large Language Models. In The Eleventh International Conference on Learning Representations. https://openreview.net/forum?id=5NTt8GFjUHkr

Copyright Β© 2024 Learn Prompting.