
🟢 Introduction to Decomposition Prompting Techniques

Last updated on September 27, 2024 by Valeriia Kuka

Welcome to the decomposition section of the advanced Prompt Engineering Guide.

In this section, you'll explore advanced prompting techniques that break down complex problems into simpler, manageable sub-tasks. This decomposition approach, inspired by human problem-solving strategies, enhances the performance of generative AI models [1]. While these techniques build on the principles of Chain-of-Thought (CoT) prompting [2], they go a step further by explicitly decomposing tasks, significantly improving the problem-solving abilities of large language models (LLMs).

Here's the list of techniques we'll explore:

  1. Decomposed (DecomP) Prompting: Breaks complex tasks into sub-tasks that LLMs or handlers can solve, improving accuracy and efficiency.

  2. Plan-and-Solve (PS) Prompting: Introduces a planning phase before problem-solving, reducing errors in reasoning steps.

  3. Program of Thoughts Prompting: Separates reasoning from computation, enabling models to express solutions as executable code for better accuracy.

  4. Faithful Chain-of-Thought Reasoning: Ensures the reasoning chain directly leads to the final answer, increasing trust and clarity.

  5. Skeleton-of-Thought Prompting: Creates a basic outline first, expanding responses in parallel for faster, more accurate answers.

  6. Tree of Thoughts (ToT) Prompting: Allows exploration of multiple reasoning paths, enabling backtracking and more flexible problem-solving.

  7. Recursion of Thought Prompting: Splits complex tasks into smaller sub-tasks to address context length limits in LLMs.

Why Improve on Chain-of-Thought (CoT) Prompting?

Chain-of-Thought (CoT) prompting was one of the first techniques to elicit explicit reasoning steps from LLMs [3]. It guides the LLM to unfold its reasoning, leading to more accurate and interpretable results.

Zero-Shot Chain-of-Thought (CoT) prompting expanded on this idea by introducing the simple phrase "Let's think step by step," which led to significant performance improvements in multi-step reasoning tasks such as symbolic reasoning, math problems, and logic puzzles [4].
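In practice, the zero-shot trigger is simply appended to the question before it is sent to the model. A minimal sketch (the function name is illustrative, not from the paper):

```python
def zero_shot_cot(question: str) -> str:
    # Zero-shot CoT: append the trigger phrase so the model
    # reasons step by step before giving its final answer.
    return f"Q: {question}\nA: Let's think step by step."

print(zero_shot_cot("A farmer has 17 sheep and buys 5 more. How many now?"))
```

The resulting prompt is then passed to any LLM; the trigger alone, with no exemplars, is what makes this "zero-shot."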

However, this method has a few common issues:

  • Calculation errors
  • Missing steps
  • Semantic misunderstandings

The decomposition techniques discussed in this section address these issues from different angles. Let’s briefly explore each of them.

Decomposed (DecomP) Prompting

Decomposed Prompting (DecomP) [5] breaks down complex tasks into simpler sub-tasks and assigns them to LLMs or handlers better suited to solve them. These handlers can:

  • further decompose the task,
  • solve it using a simple prompt, or
  • use a function to complete the sub-task.

On this page, you'll learn more about how Decomposed Prompting works, along with specific examples.
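The decompose-dispatch-merge flow can be sketched with the classic letter-concatenation task. The handler functions below are stubs standing in for what would normally be LLM sub-prompts; only the orchestration pattern is the point:

```python
def split_handler(task: str) -> list[str]:
    # Handler 1: decompose a "concatenate first letters" task
    # into one sub-task per word.
    return [f"first letter of '{w}'" for w in task.split()]

def solve_handler(sub_task: str) -> str:
    # Handler 2: solve a single sub-task. Here a plain function
    # suffices; DecomP also allows a sub-prompted LLM instead.
    word = sub_task.split("'")[1]
    return word[0]

def decomp(task: str) -> str:
    sub_tasks = split_handler(task)                  # decompose
    letters = [solve_handler(s) for s in sub_tasks]  # solve each sub-task
    return "".join(letters)                          # merge results

print(decomp("take first letters"))  # tfl
```

Because each handler is independent, any one of them can be swapped for a further decomposition step or an external tool without touching the rest of the pipeline.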

Plan-and-Solve (PS) Prompting

Plan-and-Solve (PS) prompting [6] enhances reasoning by addressing missing step errors in zero-shot CoT prompting. This method introduces an intermediate planning phase, where the model devises a plan before solving the problem step-by-step. This addition of a planning phase improves the model's ability to avoid skipping critical reasoning steps.

On this page, you'll learn how to use Plan-and-Solve Prompting, what Plan-and-Solve Plus (PS+) Prompting is, and how it reduces calculation errors.
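Concretely, PS swaps the zero-shot CoT trigger for one that asks for a plan first. The trigger below closely follows the wording reported in the PS paper; the wrapper function itself is just an illustrative template:

```python
# PS trigger: ask the model to plan before executing the plan step by step.
PS_TRIGGER = (
    "Let's first understand the problem and devise a plan to solve it. "
    "Then, let's carry out the plan and solve the problem step by step."
)

def plan_and_solve_prompt(question: str) -> str:
    return f"Q: {question}\nA: {PS_TRIGGER}"

print(plan_and_solve_prompt("A train travels 60 km/h for 2.5 hours. How far?"))
```

The only change from zero-shot CoT is the trigger text, which is what makes the planning phase explicit.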

Program-of-Thoughts (PoT) Prompting

In contrast to CoT prompting, Program-of-Thoughts (PoT) prompting [7] separates reasoning from computation. Instead of solving math or logic problems directly, PoT allows the model to express its reasoning as a program (e.g., Python), which is then executed by an interpreter for a more accurate solution.

LLMs are prone to computational errors, struggle with complex math, and are inefficient with iterative processes, which makes PoT a valuable approach for such tasks.

On this page, you'll learn more about how PoT works.
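A minimal sketch of the PoT loop: the model emits a short program rather than a final number, and the host executes it. Here `model_output` is a hand-written stand-in for what an LLM might generate for "Alice has 3 baskets of 12 apples each and gives away 7; how many are left?":

```python
# Hypothetical LLM output for a word problem, expressed as a program.
model_output = """
baskets = 3
apples_per_basket = 12
given_away = 7
ans = baskets * apples_per_basket - given_away
"""

def run_pot(program: str) -> int:
    # Execute the generated program in a fresh namespace and read `ans`.
    # NOTE: exec on untrusted model output is unsafe; a real system
    # would sandbox this step.
    scope: dict = {}
    exec(program, scope)
    return scope["ans"]

print(run_pot(model_output))  # 29
```

The arithmetic is done by the interpreter, not the model, which is exactly where PoT's accuracy gain on numerical tasks comes from.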

Faithful Chain-of-Thought (CoT) Reasoning

Faithful Chain-of-Thought (CoT) [8] addresses the issue that an LLM's stated reasoning doesn't always reflect the true process behind its final answer when prompted with Chain-of-Thought (CoT). Faithful CoT ensures that the final answer is derived directly from the reasoning chain, increasing trust and interpretability.

On this page, you'll learn how Faithful Chain-of-Thought (CoT) works and how to use it with specific examples.
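The key idea can be sketched as follows: the model produces a chain that interleaves natural-language rationale (as comments) with executable steps, and a deterministic solver computes the answer from the chain itself, so the answer cannot diverge from the reasoning. The chain below is a hand-written stand-in for model output:

```python
# Hypothetical Faithful-CoT-style chain for:
# "John has 4 bags of 6 marbles and loses 5. How many remain?"
reasoning_chain = """
# Step 1: How many marbles does John start with? (4 bags * 6 each)
start = 4 * 6
# Step 2: How many remain after losing 5?
answer = start - 5
"""

# The deterministic solver simply executes the chain; the final answer
# is read out of it rather than restated by the model.
scope: dict = {}
exec(reasoning_chain, scope)
print(scope["answer"])  # 19
```

Because the answer is computed from the chain, any error is visible in the chain itself, which is what makes the reasoning "faithful."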

Skeleton-of-Thought (SoT) Prompting

Skeleton-of-Thought (SoT) prompting [9] improves response efficiency by generating a basic outline or "skeleton" of the answer first, and then expanding it in parallel. This two-stage approach reduces latency and improves the quality of the final answer.

On this page, you'll learn what a skeleton is in SoT, what the two stages of SoT are, and how it delivers speed improvements.
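The two stages can be sketched like this: stage 1 would ask the model for a short outline, and stage 2 expands each point independently. Here the skeleton is hard-coded and `expand` is a stub for a per-point LLM call; a thread pool mimics the parallel decoding:

```python
from concurrent.futures import ThreadPoolExecutor

# Stage 1 (stubbed): a skeleton the model might return for
# "Explain what an API is."
skeleton = ["Define the term", "Give an example", "Summarize"]

def expand(point: str) -> str:
    # Stage 2 stub: a real system issues one LLM call per point.
    return f"{point}: <expanded paragraph>"

# Expand all points concurrently; map preserves the skeleton's order.
with ThreadPoolExecutor() as pool:
    paragraphs = list(pool.map(expand, skeleton))

answer = "\n".join(paragraphs)
print(answer)
```

Since the expansions are independent, total latency is roughly the longest single expansion rather than the sum of all of them.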

Tree of Thoughts (ToT) Prompting

Tree of Thoughts (ToT) prompting [10] allows an LLM to explore multiple reasoning paths in a structured, tree-like manner. This method mimics human problem-solving by allowing models to propose various solutions and then evaluate which path is likely to yield the best result. The tree structure allows for backtracking when a path seems unproductive, making the model more flexible and capable of correcting its course.

On this page, you'll learn how to leverage the Tree of Thoughts (ToT) technique and how it excels in creative problem-solving tasks like math reasoning, writing, and puzzles.
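The propose-evaluate-prune loop can be illustrated with a toy search. Here `propose` and `score` are deterministic stubs standing in for LLM calls, and the goal is simply to reach the number 10 by repeatedly adding 1, 2, or 3; abandoning low-scoring states plays the role of backtracking:

```python
def propose(state: int) -> list[int]:
    # Stub for "generate candidate next thoughts" from a state.
    return [state + step for step in (1, 2, 3)]

def score(state: int, target: int = 10) -> int:
    # Stub for "evaluate a thought": closer to the target is better.
    return -abs(target - state)

def tot_search(start: int = 0, target: int = 10, breadth: int = 2) -> int:
    frontier = [start]
    while target not in frontier:
        # Expand every state in the frontier, then keep only the best
        # `breadth` candidates; weak branches are pruned (backtracking).
        candidates = [s for state in frontier for s in propose(state)]
        candidates.sort(key=score, reverse=True)
        frontier = candidates[:breadth]
    return target

print(tot_search())  # 10
```

A real ToT system replaces `propose` and `score` with prompted LLM calls, but the breadth-limited search over intermediate "thoughts" is the same.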

Recursion of Thought (RoT) Prompting

Recursion of Thought (RoT) prompting [11] addresses context length limitations by breaking down complex tasks into smaller sub-problems, each processed within separate contexts. This divide-and-conquer method allows LLMs to handle tasks that exceed their context window limits.

On this page, you'll learn how to apply RoT and how it is particularly effective for large-scale tasks like multi-digit arithmetic and complex sequences.
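The divide-and-conquer idea can be sketched with long addition. Each recursive call stands in for a fresh model context that handles only one small sub-problem and returns its result; a real RoT system would open a separate LLM conversation per sub-problem:

```python
def add_digit_strings(a: str, b: str, carry: int = 0) -> str:
    # Base case: nothing left but a possible carry.
    if not a and not b:
        return str(carry) if carry else ""
    # Sub-problem ("its own context"): add just the last digits.
    da = int(a[-1]) if a else 0
    db = int(b[-1]) if b else 0
    total = da + db + carry
    # Recurse on the strictly smaller remaining problem, then
    # combine its result with this digit.
    return add_digit_strings(a[:-1], b[:-1], total // 10) + str(total % 10)

print(add_digit_strings("987", "654"))  # 1641
```

Because each sub-problem sees only a few digits, the total problem size can far exceed what any single context needs to hold, which is the point of RoT.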

Conclusion and Next Steps

As you’ve seen, these advanced prompting techniques offer various ways to enhance the problem-solving capabilities of LLMs, each addressing specific challenges like reasoning, accuracy, and interpretability. By understanding and applying these decomposition methods, you can unlock the full potential of language models in complex scenarios.

Feel free to explore the links provided for a deeper dive into each technique, and start experimenting with them in your own projects. Happy prompting!

Footnotes

  1. Patel, P., Mishra, S., Parmar, M., & Baral, C. (2022). Is a Question Decomposition Unit All We Need? https://arxiv.org/abs/2205.12538

  2. Wei, J., et al. (2022). Chain-of-Thought Prompting Elicits Reasoning in Large Language Models.

  3. Wei, J., Wang, X., Schuurmans, D., Bosma, M., Ichter, B., Xia, F., Chi, E., Le, Q., & Zhou, D. (2022). Chain-of-Thought Prompting Elicits Reasoning in Large Language Models.

  4. Kojima, T., Gu, S. S., Reid, M., Matsuo, Y., & Iwasawa, Y. (2022). Large Language Models are Zero-Shot Reasoners.

  5. Khot, T., et al. (2023). Decomposed Prompting: A Modular Approach for Solving Complex Tasks.

  6. Wang, L., et al. (2023). Plan-and-Solve Prompting: Improving Zero-Shot Chain-of-Thought Reasoning by Large Language Models.

  7. Chen, W., et al. (2022). Program of Thoughts Prompting: Disentangling Computation from Reasoning for Numerical Reasoning Tasks.

  8. Lyu, Q., et al. (2023). Faithful Chain-of-Thought Reasoning.

  9. Ning, X., et al. (2023). Skeleton-of-Thought: Prompting LLMs for Efficient Parallel Generation.

  10. Yao, S., et al. (2023). Tree of Thoughts: Deliberate Problem Solving with Large Language Models.

  11. Lee, S., & Kim, G. (2023). Recursion of Thought: A Divide-and-Conquer Approach to Multi-Context Reasoning with Language Models.

Copyright © 2024 Learn Prompting.