Analogical Prompting

Last updated on October 3, 2024 by Valeriia Kuka
Overview of Analogical Prompting

Information and Links

Technique: Analogical Prompting
Institution: Google DeepMind, Stanford University
Date of Publication: Oct 2023
Paper: Large Language Models as Analogical Reasoners

What is Analogical Prompting?

Analogical Prompting [1] is a new approach that enhances the reasoning process of large language models (LLMs) by drawing inspiration from human analogical reasoning. Inspired by our ability to draw connections between past experiences and current challenges, analogical prompting encourages LLMs to self-generate relevant examples before tackling new problems.

For example, if you're asked to solve a complex math problem, you might think of similar problems you’ve solved before and apply that knowledge. Similarly, analogical prompting instructs LLMs to recall or generate examples and solutions that resemble the current task. The LLM then uses these examples to better solve the original problem.

How Does Analogical Prompting Work?

  1. Problem Statement: The LLM is presented with a problem.
  2. Generate Exemplars: It generates relevant problems and their solutions as exemplars.
  3. Solve Original Problem: The LLM uses these exemplars to solve the initial problem.

This method not only removes the need for manually labeled examples but also tailors the exemplars to the specific problem, making it more adaptive.
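The three steps above can be collapsed into a single prompt that is sent to the model in one call. Here is a minimal sketch in Python; the `build_analogical_prompt` function, its wording, and the default exemplar count are illustrative choices, not an API from the paper:

```python
def build_analogical_prompt(problem: str, n_exemplars: int = 3) -> str:
    """Assemble one prompt that (1) states the problem, (2) asks the model
    to self-generate relevant exemplars, and (3) asks it to solve the
    original problem using those exemplars."""
    return (
        f"Problem: {problem}\n\n"
        "Instruction:\n"
        f"Relevant problems: Recall {n_exemplars} relevant problems and "
        "their solutions. Make them distinct from each other and from the "
        "initial problem.\n"
        "Solve the initial problem:"
    )

# The assembled prompt would then be sent to any chat-completion API.
print(build_analogical_prompt(
    "What is the area of the square with vertices at "
    "(-2, 2), (2, -2), (-2, -6), and (-6, -2)?"
))
```

Because everything happens in one prompt, no second round-trip is needed: the model writes its own exemplars and then immediately conditions on them.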

How Does Analogical Prompting Differ from Existing Techniques?

Analogical Prompting builds upon and improves earlier prompting techniques such as:

  • Zero-Shot Chain-of-Thought (CoT): LLMs are guided to "think step by step," but this generic nudge may not be enough for complex tasks. Analogical prompting instead generates concrete demonstrations, and does so automatically.
  • Few-Shot Chain-of-Thought (CoT): LLMs are given hand-labeled examples whose reasoning they can mimic, but acquiring those labeled examples is labor-intensive. The examples generated through Analogical Prompting are customized for each problem, which improves adaptability compared to the fixed exemplars of few-shot CoT.
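To make the contrast concrete, the three prompting styles can be laid out side by side as plain strings. The example problem and the bracketed placeholder exemplar text below are illustrative, not from the paper:

```python
problem = "A train travels 120 miles in 2 hours. What is its average speed?"

# Zero-shot CoT: one generic nudge, no examples.
zero_shot_cot = f"Q: {problem}\nA: Let's think step by step."

# Few-shot CoT: fixed, hand-written exemplars prepended to every query.
few_shot_cot = (
    "Q: [hand-written example problem]\n"
    "A: [hand-written step-by-step solution]\n\n"
    f"Q: {problem}\nA:"
)

# Analogical prompting: the model generates its own problem-specific exemplars.
analogical = (
    f"Problem: {problem}\n\n"
    "Relevant problems: Recall relevant problems and their solutions.\n"
    "Solve the initial problem:"
)

for name, p in [("zero-shot CoT", zero_shot_cot),
                ("few-shot CoT", few_shot_cot),
                ("analogical prompting", analogical)]:
    print(f"--- {name} ---\n{p}\n")
```

Only the analogical prompt adapts its exemplars to each new query; the other two either give no exemplars or reuse the same fixed ones.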

How to Use Analogical Prompting

Analogical Prompting can be implemented with a specific instruction template in the LLM's input. Here's a general template you can follow:

Template

[Insert problem here]

Instruction:
Relevant problems:
Solve the initial problem:
Sample Prompt


Your task is to tackle mathematical problems. When presented with a math problem, recall relevant problems as examples. Afterward, proceed to solve the initial problem.

[Insert problem here]

Instruction:
Relevant problems: Recall examples of math problems that are relevant to the initial problem. Your problems should be distinct from each other and from the initial problem (e.g., involving different numbers and names). For each problem: after "Q: ", describe the problem; after "A: ", explain the solution.

Solve the initial problem:

Example:

If the problem is to calculate the area of a square given its vertices, the model first generates a similar problem with a known side length, solves it, and then applies this reasoning to the original problem.

Prompt


Q: What is the area of the square with the four vertices at
(-2, 2), (2, -2), (-2, -6), and (-6, -2)?

Recall relevant problems:
Solve the initial problem:

AI Output


Relevant exemplars:

Q: What is the area of a square with a side length of 5?
A: The area of a square is found by squaring the length of its side. So, the area of this square is 5^2 = 25.
…

Solve the initial problem:

To find the area of the square, we first need the side length. The distance between the adjacent vertices (-2, 2) and (2, -2) is \sqrt{4^2 + 4^2} = \sqrt{32}. So, the area of the square is (\sqrt{32})^2 = 32.
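The model's final arithmetic is easy to verify directly. A quick check (written here in Python; the vertex list is copied from the prompt) confirms the side length and area:

```python
import math

# Vertices of the square from the prompt, listed in adjacent order.
vertices = [(-2, 2), (2, -2), (-2, -6), (-6, -2)]

# Side length = distance between two adjacent vertices.
(x1, y1), (x2, y2) = vertices[0], vertices[1]
side = math.hypot(x2 - x1, y2 - y1)   # sqrt(4^2 + 4^2) = sqrt(32)

area = side ** 2
print(round(area))  # 32
```

This matches the model's answer of 32, so the self-generated exemplar led to a correct solution here.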

Results of Analogical Prompting

Analogical prompting has been tested across various tasks including math problem-solving and code generation. The results show that it consistently outperforms both zero-shot and few-shot CoT methods.

Performance Comparison on GSM8K (Math Dataset)

Method                  Accuracy (GSM8K)
Zero-shot               75.0%
Few-shot CoT            76.7%
Analogical Prompting    77.8%

Analogical prompting particularly shines in complex tasks that require reasoning across multiple steps, such as solving competitive programming challenges or advanced math problems.

Conclusion

Analogical prompting allows LLMs to generate their own reasoning examples tailored to each problem, offering a flexible and powerful method to guide reasoning without the need for labeled data. This approach improves performance on reasoning tasks and opens new possibilities for solving more complex problems where fixed examples are impractical or unavailable.

Footnotes

  1. Yasunaga, M., Chen, X., Li, Y., Pasupat, P., Leskovec, J., Liang, P., Chi, E. H., & Zhou, D. (2024). Large Language Models as Analogical Reasoners. https://arxiv.org/abs/2310.01714

Copyright © 2024 Learn Prompting.