
🟦 Memory-of-Thought (MoT) Prompting


Last updated on October 3, 2024

Figure: Overview of Memory-of-Thought (MoT) Prompting

Information and Links

Technique: Memory-of-Thought (MoT) Prompting
Institution: Fudan University
Date of Publication: May 2023
Paper: MoT: Memory-of-Thought Enables ChatGPT to Self-Improve
Code: LeeSureman/MoT

What is Memory-of-Thought (MoT)?

Memory-of-Thought (MoT) is a novel framework designed to let Large Language Models (LLMs) like ChatGPT self-improve without requiring high-quality labeled datasets or computationally expensive fine-tuning. Inspired by human self-reflection and memory, MoT equips LLMs with the ability to pre-think, store, and recall past reasoning paths, enhancing their performance on various reasoning tasks.

How MoT Differs from Existing Techniques

This approach contrasts with traditional methods that rely heavily on annotated datasets and fine-tuning, both of which are costly and put self-improvement out of reach for many users. MoT instead leverages external memory to improve performance across various reasoning tasks, including arithmetic, commonsense, and factual reasoning.

How Does MoT Work?

The framework operates in two key stages:

  1. Pre-thinking: Before the test stage, the LLM thinks over an unlabeled dataset and saves the high-confidence reasoning paths (called thoughts) in an external memory system.

  2. Recalling: During the test stage, when the LLM encounters a new question, it retrieves the relevant thoughts from memory to aid its reasoning process.

This method allows the LLM to improve its reasoning capabilities without updating its parameters, making it more efficient and scalable.
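
To make the "external memory" concrete, here is a minimal sketch of what a single stored thought might look like. The field names are illustrative assumptions, not the exact schema used in the paper's released code.

```python
# One "thought" saved to external memory during pre-thinking.
# Field names are illustrative; the key idea is pairing an unlabeled question
# with the high-confidence reasoning path and answer the model produced for it.
thought = {
    "question": "Tom has 30 apples and gives 15 away. How many apples does he have left?",
    "reasoning": "Tom starts with 30 apples and gives away 15, so 30 - 15 = 15.",
    "answer": "15",
}
```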

Why It Works

MoT mimics human cognition by allowing the LLM to think, store, and recall. Just as humans remember past decisions to make better future ones, the LLM can rely on stored reasoning chains to enhance its current reasoning. This improves performance across a wide range of tasks by eliminating reliance on random or irrelevant examples, focusing instead on high-quality, relevant memories.

How to Use MoT

Step 1. Pre-thinking

In this stage, the LLM processes unlabeled examples and saves the most consistent reasoning paths (thoughts) as memory. This process involves the following steps:

  • The LLM generates multiple reasoning paths for each question.
  • A majority-vote system selects the most frequent (and thus consistent) answer and saves the corresponding reasoning chain as memory.
Prompt

Q1: [Question 1]
A1: [Answer 1]
Q2: [Question 2]
A2: [Answer 2]
...
Qn: [Sample question]
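
Below is a minimal Python sketch of the pre-thinking stage. It assumes you supply an `llm` callable that wraps your model of choice and returns a chain of thought ending in a line like `Answer: <value>`; the `few_shot_demos` string follows the Q1/A1 ... Qn template above. The function names, agreement threshold, and answer-extraction rule are illustrative assumptions, not the official LeeSureman/MoT implementation.

```python
from collections import Counter

def extract_answer(reasoning: str) -> str:
    """Pull the final answer from a reasoning path (assumes a trailing 'Answer: <value>' line)."""
    return reasoning.strip().splitlines()[-1].removeprefix("Answer:").strip()

def pre_think(llm, unlabeled_questions, few_shot_demos, n_paths=8, min_agreement=0.6):
    """Pre-thinking stage: sample several reasoning paths per unlabeled question,
    take the majority-vote answer, and store its reasoning path as a 'thought'."""
    memory = []
    for question in unlabeled_questions:
        prompt = f"{few_shot_demos}\nQn: {question}"
        paths = [llm(prompt) for _ in range(n_paths)]            # diverse samples (temperature > 0)
        answers = [extract_answer(p) for p in paths]
        top_answer, votes = Counter(answers).most_common(1)[0]   # most consistent answer
        if votes / n_paths >= min_agreement:                     # keep only high-confidence thoughts
            memory.append({
                "question": question,
                "reasoning": paths[answers.index(top_answer)],
                "answer": top_answer,
            })
    return memory
```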

Step 2. Recalling

When the LLM encounters a new test question:

  • It retrieves relevant thoughts from memory, based on the similarity between the current question and stored questions.
  • The LLM then selects the most useful of the retrieved thoughts and uses it to answer the test question.

Example of MoT in Action:

Test Question: Maddie has 24 apples. If she gives 12 to Mike, how many does she have left?

Memory Retrieval: A similar thought retrieved from memory involves someone giving away apples:
- "If Tom has 30 apples and gives 15 away, he has 30 - 15 = 15 apples left."

Answer: Using the thought from memory, Maddie has 24 - 12 = 12 apples left.

By recalling a similar scenario from its memory, the LLM quickly resolves the new question using an analogous reasoning process.
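
During recall, retrieval can be as simple as a similarity search over the stored questions. The sketch below uses TF-IDF cosine similarity from scikit-learn purely as an illustrative stand-in for the paper's retrieval component, and it returns the assembled prompt rather than calling a model; in the full method the LLM can additionally be asked to pick the single most useful retrieved thought before answering.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def recall(test_question, memory, top_k=2):
    """Recalling stage: find the stored thoughts whose questions are most similar
    to the test question and assemble them into a few-shot prompt."""
    vectorizer = TfidfVectorizer()
    mem_vecs = vectorizer.fit_transform([m["question"] for m in memory])
    query_vec = vectorizer.transform([test_question])
    scores = cosine_similarity(query_vec, mem_vecs).ravel()
    top = scores.argsort()[::-1][:top_k]          # indices of the most similar stored thoughts
    demos = "\n\n".join(
        f"Q: {memory[i]['question']}\nA: {memory[i]['reasoning']} Answer: {memory[i]['answer']}"
        for i in top
    )
    # Send this prompt to the LLM to produce the final answer.
    return f"{demos}\n\nQ: {test_question}\nA: Let's think step by step."

# Usage with the apple example above and a one-entry memory:
memory = [
    {"question": "Tom has 30 apples and gives 15 away. How many apples does he have left?",
     "reasoning": "Tom starts with 30 apples and gives away 15, so 30 - 15 = 15.",
     "answer": "15"},
]
print(recall("Maddie has 24 apples. If she gives 12 to Mike, how many does she have left?",
             memory, top_k=1))
```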

Tip

The code for MoT is open-sourced by Fudan University and available for further research and implementation at LeeSureman/MoT.

Results of MoT

MoT was tested on multiple reasoning benchmarks and showed consistent gains over few-shot chain-of-thought (CoT) prompting across tasks.

Task | Few-Shot CoT | MoT | Improvement
Arithmetic Reasoning | 49.7% | 54.1% | +4.4%
Commonsense Reasoning | 80.0% | 82.3% | +2.3%
Natural Language Inference | 67.7% | 71.5% | +3.8%
Factual Reasoning | 65.2% | 68.0% | +2.8%

Conclusion

Memory-of-Thought (MoT) enhances LLMs' reasoning capabilities by enabling them to learn from their own past experiences and leverage stored memories. MoT offers a cost-effective and efficient solution compared to traditional methods relying on extensive datasets and fine-tuning.

Valeriia Kuka

Valeriia Kuka, Head of Content at Learn Prompting, is passionate about making AI and ML accessible. Valeriia previously grew a 60K+ follower AI-focused social media account, earning reposts from Stanford NLP, Amazon Research, Hugging Face, and AI researchers. She has also worked with AI/ML newsletters and global communities with 100K+ members and authored clear and concise explainers and historical articles.

Footnotes

  1. Li, X., & Qiu, X. (2023). MoT: Memory-of-Thought Enables ChatGPT to Self-Improve. https://arxiv.org/abs/2305.05181

Copyright Β© 2024 Learn Prompting.