
Analogical Prompting


Last updated on October 3, 2024

Overview of Analogical Prompting

Information and Links

| Technique | Institution | Date of Publication | Paper |
|-----------|-------------|---------------------|-------|
| Analogical Prompting | Google DeepMind, Stanford University | Oct 2023 | Large Language Models as Analogical Reasoners |

What is Analogical Prompting?

Analogical Prompting is a technique that enhances the reasoning of Large Language Models (LLMs) by mimicking human analogical reasoning. Just as we draw connections between past experiences and current challenges, analogical prompting encourages LLMs to self-generate relevant examples before tackling a new problem.

For example, if you're asked to solve a complex math problem, you might think of similar problems you’ve solved before and apply that knowledge. Similarly, analogical prompting instructs LLMs to recall or generate examples and solutions that resemble the current task. The LLM then uses these examples to better solve the original problem.

How Does Analogical Prompting Work?

  1. Problem Statement: The LLM is presented with a problem.
  2. Generate Exemplars: It generates relevant problems and their solutions as exemplars.
  3. Solve Original Problem: The LLM uses these exemplars to solve the initial problem.

This method not only removes the need for manually labeled examples but also tailors the exemplars to the specific problem, making it more adaptive.
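
To make the flow concrete, here is a minimal sketch of the single-prompt version in Python, assuming the OpenAI Python SDK; the model name and template wording are illustrative choices, not prescribed by the paper:

```python
# Minimal sketch of analogical prompting (single call: the model
# self-generates exemplars, then solves the problem).
# Assumes the OpenAI Python SDK and OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()

ANALOGICAL_TEMPLATE = """{problem}

Instruction:
Relevant problems: Recall three relevant and distinct problems and their solutions.
Solve the initial problem:"""


def solve_with_analogical_prompting(problem: str) -> str:
    """One request: exemplar generation and solving happen in the same output."""
    prompt = ANALOGICAL_TEMPLATE.format(problem=problem)
    response = client.chat.completions.create(
        model="gpt-4o",  # hypothetical choice; any capable chat model works
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content


print(solve_with_analogical_prompting(
    "What is the area of the square with vertices at "
    "(-2, 2), (2, -2), (-2, -6), and (-6, -2)?"
))
```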

How Does Analogical Prompting Differ from Existing Techniques?

Analogical Prompting builds upon and improves earlier prompting techniques such as:

  • Zero-Shot Chain-of-Thought (CoT): LLMs are prompted to "think step by step," but this generic guidance can be too weak for complex tasks. Analogical prompting instead supplies concrete demonstrations, and generates them automatically.
  • Few-Shot Chain-of-Thought (CoT): LLMs are given hand-labeled exemplars to mimic, but acquiring those exemplars is labor-intensive, and they stay fixed across problems. Analogical Prompting generates exemplars customized to each problem, which improves adaptability. The side-by-side sketch after this list makes the contrast concrete.
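
Here is a rough side-by-side of the three prompt styles as plain Python strings; the wording is illustrative, not quoted from the papers:

```python
problem = "If a train travels 120 km in 2 hours, what is its average speed?"

# Zero-Shot CoT: a single generic reasoning trigger, no examples.
zero_shot_cot = f"{problem}\nLet's think step by step."

# Few-Shot CoT: hand-written exemplars prepended to every prompt.
few_shot_cot = (
    "Q: A car travels 100 km in 4 hours. What is its average speed?\n"
    "A: Speed = distance / time = 100 / 4 = 25 km/h.\n\n"
    f"Q: {problem}\nA:"
)

# Analogical Prompting: the model self-generates problem-specific
# exemplars, so no hand-written examples are needed.
analogical = (
    f"{problem}\n\n"
    "Instruction:\n"
    "Relevant problems: Recall three relevant and distinct problems "
    "and their solutions.\n"
    "Solve the initial problem:"
)
```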

How to Use Analogical Prompting

Analogical Prompting can be implemented with a specific instruction template in the LLM's input. Here's a general template you can follow:


Template


[Insert problem here]

Instruction:
Relevant problems:
Solve the initial problem:


Sample Prompt


Your task is to tackle mathematical problems. When presented with a math problem, recall relevant problems as examples. Afterward, proceed to solve the initial problem.

[Insert problem here]

Instruction:
Relevant problems: Recall examples of math problems that are relevant to the initial problem. Your problems should be distinct from each other and from the initial problem (e.g., involving different numbers and names).

For each problem:
After "Q: ", describe the problem.
After "A: ", explain the solution.

Solve the initial problem:
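
If you want to reuse this sample prompt programmatically, a small helper like the following keeps the template in one place; the constant and function names are our own, not from the paper:

```python
# Sketch: the sample prompt above as a reusable template.
MATH_ANALOGICAL_PROMPT = """Your task is to tackle mathematical problems. \
When presented with a math problem, recall relevant problems as examples. \
Afterward, proceed to solve the initial problem.

{problem}

Instruction:
Relevant problems: Recall examples of math problems that are relevant to \
the initial problem. Your problems should be distinct from each other and \
from the initial problem (e.g., involving different numbers and names).

For each problem:
After "Q: ", describe the problem.
After "A: ", explain the solution.

Solve the initial problem:"""


def build_math_prompt(problem: str) -> str:
    """Fill the template with a concrete problem statement."""
    return MATH_ANALOGICAL_PROMPT.format(problem=problem)


print(build_math_prompt("What is 15% of 240?"))
```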

Example:

If the problem is to calculate the area of a square given its vertices, the model first generates a similar problem with a known side length, solves it, and then applies this reasoning to the original problem.


Prompt


Q: What is the area of the square with the four vertices at (-2, 2), (2, -2), (-2, -6), and (-6, -2)?

Recall relevant problems:
Solve the initial problem:


AI Output


Relevant exemplars:

Q: What is the area of the square with a side length of 5?
A: The area of a square is found by squaring the length of its side. So, the area of this square is 5^2 = 25.
…

Solve the initial problem:

To find the area of the square, we need to find the side length. The side length is the distance between two adjacent vertices, e.g., \sqrt{(2-(-2))^2 + (-2-2)^2} = \sqrt{32}. So, the area of the square is (\sqrt{32})^2 = 32.
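
As a quick sanity check on the model's arithmetic, you can verify the side length and area directly from the vertices:

```python
import math

# Vertices of the square from the prompt above.
vertices = [(-2, 2), (2, -2), (-2, -6), (-6, -2)]

# (-2, 2) and (2, -2) are adjacent corners, so their distance is the side.
(x1, y1), (x2, y2) = vertices[0], vertices[1]
side = math.hypot(x2 - x1, y2 - y1)  # sqrt(4^2 + 4^2) = sqrt(32)

print(round(side**2))  # area = (sqrt(32))^2 = 32
```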

Results of Analogical Prompting

Analogical prompting has been tested across various tasks including math problem-solving and code generation. The results show that it consistently outperforms both Zero-Shot and Few-Shot CoT methods.

Performance Comparison on GSM8K (Math Dataset)

| Method | Accuracy (GSM8K) |
|--------|------------------|
| Zero-Shot | 75.0% |
| Few-Shot CoT | 76.7% |
| Analogical Prompting | 77.8% |

Analogical prompting particularly shines in complex tasks that require reasoning across multiple steps, such as solving competitive programming challenges or advanced math problems.

Conclusion

Analogical prompting allows LLMs to generate their own reasoning examples tailored to each problem, offering a flexible and powerful method to guide reasoning without the need for labeled data. This approach improves performance on reasoning tasks and opens new possibilities for solving more complex problems where fixed examples are impractical or unavailable.

Valeriia Kuka

Valeriia Kuka, Head of Content at Learn Prompting, is passionate about making AI and ML accessible. Valeriia previously grew a 60K+ follower AI-focused social media account, earning reposts from Stanford NLP, Amazon Research, Hugging Face, and AI researchers. She has also worked with AI/ML newsletters and global communities with 100K+ members and authored clear and concise explainers and historical articles.

Footnotes

  1. Yasunaga, M., Chen, X., Li, Y., Pasupat, P., Leskovec, J., Liang, P., Chi, E. H., & Zhou, D. (2024). Large Language Models as Analogical Reasoners. https://arxiv.org/abs/2310.01714
