
🟒 Introduction to Decomposition Prompting Techniques

Reading Time: 6 minutes

Last updated on September 27, 2024

Welcome to the decomposition section of the advanced Prompt Engineering Guide.

In this section, you'll explore advanced prompting techniques that break down complex problems into simpler, manageable sub-tasks. This decomposition approach, inspired by human problem-solving strategies, enhances the performance of GenAI models. While these techniques build on the principles of Chain-of-Thought (CoT) Prompting, they go a step further by explicitly decomposing tasks, significantly improving the problem-solving abilities of Large Language Models (LLMs).

Here's the list of techniques we'll explore:

  1. Decomposed (DecomP) Prompting: Breaks complex tasks into sub-tasks that LLMs or handlers can solve, improving accuracy and efficiency.

  2. Plan-and-Solve (PS) Prompting: Introduces a planning phase before problem-solving, reducing errors in reasoning steps.

  3. Program of Thoughts Prompting: Separates reasoning from computation, enabling models to express solutions as executable code for better accuracy.

  4. Faithful Chain-of-Thought Reasoning: Ensures the reasoning chain directly leads to the final answer, increasing trust and clarity.

  5. Skeleton-of-Thought Prompting: Creates a basic outline first, expanding responses in parallel for faster, more accurate answers.

  6. Tree of Thoughts (ToT) Prompting: Allows exploration of multiple reasoning paths, enabling backtracking and more flexible problem-solving.

  7. Recursion of Thought Prompting: Splits complex tasks into smaller sub-tasks to address context length limits in LLMs.

Why Improve on Chain-of-Thought (CoT) Prompting?

Chain-of-Thought (CoT) Prompting was one of the first techniques to elicit explicit reasoning steps from LLMs. It guides the LLM to unfold its reasoning before answering, leading to more accurate and interpretable results.

Zero-Shot Chain-of-Thought (CoT) Prompting expanded on this idea by introducing the simple phrase, "Let's think step by step," which led to significant performance improvements in multi-step reasoning tasks such as symbolic reasoning, math problems, and logic puzzles.
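To make the mechanics concrete, here is a minimal sketch of Zero-Shot CoT in Python. The `call_llm` function is a hypothetical placeholder for whatever LLM client you use; the only technique-specific part is appending the trigger phrase to an otherwise plain prompt.

```python
# Minimal Zero-Shot CoT sketch. `call_llm` is a hypothetical stand-in for a
# real LLM client; replace it with your provider's SDK call.
def call_llm(prompt: str) -> str:
    return "<model response>"

question = (
    "A juggler can juggle 16 balls. Half of the balls are golf balls, "
    "and half of the golf balls are blue. How many blue golf balls are there?"
)

# The only change from a plain prompt is the appended reasoning trigger.
zero_shot_cot_prompt = f"{question}\n\nLet's think step by step."
print(call_llm(zero_shot_cot_prompt))
```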

However, this method has a few common issues:

  • Calculation errors
  • Missing steps
  • Semantic misunderstandings

The decomposition techniques discussed in this section address these issues from different angles. Let’s briefly explore each of them.

Decomposed (DecomP) Prompting

Decomposed Prompting (DecomP) breaks down complex tasks into simpler sub-tasks and assigns them to LLMs or handlers better suited to solve them (see the sketch after this list). These handlers can:

  • further decompose the task,
  • solve it using a simple prompt, or
  • use a function to complete the sub-task.
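Here is a minimal sketch of that routing idea, assuming a hypothetical `call_llm` helper. It mirrors the letter-concatenation example from the DecomP paper: one sub-task is handled by a plain Python function, another by a simple prompt, and a third combines the results.

```python
# Minimal DecomP-style sketch: a top-level task is split into sub-tasks, and
# each sub-task is routed to the handler best suited for it. `call_llm` is a
# hypothetical stand-in for a real LLM client.
def call_llm(prompt: str) -> str:
    return "<model response>"  # replace with a real API call

def split_handler(text: str) -> list[str]:
    # Symbolic handler: this sub-task is trivial in code, so no LLM call.
    return text.split()

def first_letter_handler(word: str) -> str:
    # Prompted handler: this sub-task is delegated to the LLM.
    return call_llm(f"What is the first letter of '{word}'? Reply with one letter.")

def concat_first_letters(text: str) -> str:
    words = split_handler(text)                          # sub-task 1: split
    letters = [first_letter_handler(w) for w in words]   # sub-task 2: per word
    return "".join(letters)                              # sub-task 3: combine

print(concat_first_letters("Decomposed Prompting Works"))
```

With a real client this toy task would return "DPW"; the structure of the routing, not the task itself, is the point.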

On the Decomposition page, you'll learn more about how Decomposed Prompting works, with specific examples.

Plan-and-Solve (PS) Prompting

Plan-and-Solve (PS) prompting enhances reasoning by addressing missing step errors in Zero-Shot CoT prompting. This method introduces an intermediate planning phase, where the model devises a plan before solving the problem step-by-step. This addition of a planning phase improves the model's ability to avoid skipping critical reasoning steps.
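As a minimal sketch, a Plan-and-Solve prompt can be as simple as swapping the Zero-Shot CoT trigger for a two-part instruction in the style of the PS paper's trigger sentence. The `call_llm` helper is again a hypothetical placeholder.

```python
# Minimal Plan-and-Solve sketch: the trigger asks for a plan first, then
# step-by-step execution of that plan. `call_llm` is a hypothetical stand-in.
def call_llm(prompt: str) -> str:
    return "<model response>"  # replace with a real API call

problem = "A class has 30 students and 60% of them passed. How many students failed?"

ps_prompt = (
    f"Q: {problem}\n\n"
    "Let's first understand the problem and devise a plan to solve it. "
    "Then, let's carry out the plan and solve the problem step by step."
)
print(call_llm(ps_prompt))
```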

On the Plan-and-Solve page, you'll learn how to use Plan-and-Solve Prompting, what Plan-and-Solve Plus (PS+) Prompting is, and how it reduces calculation errors.

Program-of-Thoughts (PoT) Prompting

In contrast to CoT prompting, Program-of-Thoughts (PoT) Prompting separates reasoning from computation. Instead of solving math or logic problems directly, PoT allows the model to express its reasoning as a program (e.g., Python), which is then executed by an interpreter for a more accurate solution.

LLMs are prone to computational errors, struggle with complex math, and are inefficient with iterative processes, which makes PoT a valuable approach for such tasks.
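Here is a minimal sketch of the PoT loop, with a hypothetical `call_llm` placeholder that returns a canned program so the snippet runs as-is. In practice, always sandbox model-generated code before executing it.

```python
# Minimal PoT sketch: the model is asked to answer with executable Python, and
# the interpreter, not the model, does the arithmetic. `call_llm` is a
# hypothetical placeholder returning a canned program so this snippet runs.
def call_llm(prompt: str) -> str:
    return "interest = 10000 * (1 + 0.05) ** 3 - 10000\nans = round(interest, 2)"

question = "What is the compound interest on $10,000 at 5% per year for 3 years?"
pot_prompt = (
    f"Q: {question}\n"
    "Write Python code that computes the answer and stores it in a variable `ans`."
)

namespace = {}
exec(call_llm(pot_prompt), namespace)  # the interpreter does the computation
print(namespace["ans"])                # 1576.25
```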

On the Program-of-Thoughts page, you'll learn more about how PoT works.

Faithful Chain-of-Thought (CoT) Reasoning

Faithful Chain-of-Thought (CoT) reasoning addresses the issue that an LLM's generated reasoning chain doesn't always reflect the true process behind its final answer when prompted with standard CoT. Faithful CoT ensures that the final answer is derived directly from the reasoning chain, increasing trust and interpretability.
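One way to picture this, as a minimal sketch: the LLM is used only to translate the question into an executable reasoning chain, and a deterministic execution step derives the answer from that chain, so the reasoning cannot be a post-hoc rationalization. `call_llm` and the canned chain below are hypothetical placeholders.

```python
# Minimal Faithful-CoT-style sketch: the LLM only *translates* the question
# into an executable reasoning chain; deterministic execution then derives the
# answer, so the stated reasoning is, by construction, the process that
# produced it. `call_llm` is a hypothetical stand-in with a canned response.
def call_llm(prompt: str) -> str:
    return (
        "# Step 1: Alice starts with 5 apples.\n"
        "alice = 5\n"
        "# Step 2: She gives 2 apples to Bob.\n"
        "alice = alice - 2\n"
        "answer = alice\n"
    )

question = "Alice has 5 apples and gives 2 to Bob. How many apples does she have left?"
chain = call_llm(f"Translate into executable reasoning steps:\n{question}")

scope = {}
exec(chain, scope)      # deterministic solving stage
print(scope["answer"])  # 3, derived directly from the chain above
```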

On the Faithful Chain-of-Thought page, you'll learn how Faithful Chain-of-Thought (CoT) works and how to use it with specific examples.

Skeleton-of-Thought (SoT) Prompting

Skeleton-of-Thought (SoT) prompting improves response efficiency by generating a basic outline or "skeleton" of the answer first, and then expanding it in parallel. This two-stage approach reduces latency and improves the quality of the final answer.
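Here is a minimal sketch of the two stages, assuming a hypothetical, thread-safe `call_llm` helper. The latency win comes from expanding the skeleton points concurrently.

```python
# Minimal SoT sketch: stage 1 asks for a short outline (the "skeleton"),
# stage 2 expands each point in parallel. `call_llm` is a hypothetical
# stand-in for a thread-safe LLM client.
from concurrent.futures import ThreadPoolExecutor

def call_llm(prompt: str) -> str:
    return "<model response>"  # replace with a real API call

question = "What are effective ways to reduce technical debt?"

# Stage 1: skeleton -- a numbered outline with a few words per point.
skeleton = call_llm(
    f"{question}\nAnswer with a short numbered outline, 3-5 points, "
    "a few words each. Do not elaborate."
)
points = [p for p in skeleton.splitlines() if p.strip()]

# Stage 2: point expansion -- each point is independent, so the requests
# can run concurrently, which is where the speedup comes from.
def expand(point: str) -> str:
    return call_llm(f"{question}\nExpand this outline point in 1-2 sentences: {point}")

with ThreadPoolExecutor() as pool:
    answer = "\n".join(pool.map(expand, points))
print(answer)
```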

On the Skeleton-of-Thought page, you'll learn what a skeleton is in SoT, what its two stages are, and how it delivers speed improvements.

Tree of Thoughts (ToT) Prompting

Tree of Thoughts (ToT) prompting allows an LLM to explore multiple reasoning paths in a structured, tree-like manner. This method mimics human problem-solving by allowing models to propose various solutions and then evaluate which path is likely to yield the best result. The tree structure allows for backtracking when a path seems unproductive, making the model more flexible and capable of correcting its course.
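Here is a minimal sketch of that control flow as a breadth-limited search, with a hypothetical `call_llm` placeholder. Real ToT implementations use more careful proposal and evaluation prompts; this only shows the propose-score-prune loop.

```python
# Minimal ToT sketch: at each depth the model proposes candidate next
# thoughts, a value prompt scores them, and only the best few survive.
# Abandoning low-scored branches plays the role of backtracking.
def call_llm(prompt: str) -> str:
    return "<model response>"  # replace with a real API call

def propose(state: str, k: int = 3) -> list[str]:
    out = call_llm(f"Problem state:\n{state}\nPropose {k} distinct next steps, one per line.")
    return [s for s in out.splitlines() if s.strip()][:k]

def score(state: str) -> float:
    out = call_llm(f"Rate 0-10 how promising this partial solution is:\n{state}\nAnswer with a number.")
    try:
        return float(out)
    except ValueError:
        return 0.0  # unparseable evaluations count as unpromising

def tree_of_thoughts(problem: str, depth: int = 3, beam: int = 2) -> str:
    frontier = [problem]
    for _ in range(depth):
        candidates = [f"{s}\n{step}" for s in frontier for step in propose(s)]
        frontier = sorted(candidates, key=score, reverse=True)[:beam]  # prune
    return frontier[0]

print(tree_of_thoughts("Use 4, 9, 10, 13 once each to make 24."))
```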

On the Tree of Thoughts page, you'll learn how to leverage the Tree of Thoughts (ToT) technique and how it excels at problem-solving tasks like math reasoning, creative writing, and puzzles.

Recursion of Thought (RoT) Prompting

Recursion of Thought (RoT) prompting addresses context length limitations by breaking down complex tasks into smaller sub-problems, each processed within separate contexts. This divide-and-conquer method allows LLMs to handle tasks that exceed their context window limits.
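Here is a minimal sketch of the divide-and-conquer control flow, with a hypothetical `call_llm` placeholder. Note that the actual RoT method teaches the model to emit special tokens that trigger the recursion itself; this sketch only hand-codes the same split/solve/combine pattern.

```python
# Minimal RoT-style sketch: a task too large for one context window is split,
# each half is solved in its own fresh LLM call (its own context), and the
# partial results are combined. `call_llm` is a hypothetical stand-in.
def call_llm(prompt: str) -> str:
    return "<model response>"  # replace with a real API call

MAX_ITEMS = 4  # pretend only this many items fit in one context

def solve(items: list[str], task: str) -> str:
    if len(items) <= MAX_ITEMS:
        # Base case: small enough to solve in a single context.
        return call_llm(f"{task}\n" + "\n".join(items))
    # Recursive case: each half gets a separate context, then a final
    # call combines the two partial answers.
    mid = len(items) // 2
    left = solve(items[:mid], task)
    right = solve(items[mid:], task)
    return call_llm(f"Combine these partial results for the task '{task}':\n{left}\n{right}")

reviews = [f"review {i}" for i in range(10)]
print(solve(reviews, "Summarize the key complaints in these reviews."))
```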

On the Recursion of Thought page, you'll learn how to apply RoT and how it is particularly effective for large-scale tasks like multi-digit arithmetic and complex sequences.

Conclusion and Next Steps

As you’ve seen, these advanced prompting techniques offer various ways to enhance the problem-solving capabilities of LLMs, each addressing specific challenges like reasoning, accuracy, and interpretability. By understanding and applying these decomposition methods, you can unlock the potential of language models in complex scenarios.

Feel free to explore the links provided for a deeper dive into each technique, and start experimenting with them in your own projects. Happy prompting!

Valeriia Kuka

Valeriia Kuka, Head of Content at Learn Prompting, is passionate about making AI and ML accessible. Valeriia previously grew a 60K+ follower AI-focused social media account, earning reposts from Stanford NLP, Amazon Research, Hugging Face, and AI researchers. She has also worked with AI/ML newsletters and global communities with 100K+ members and authored clear and concise explainers and historical articles.


