
🟒 Introduction to Decomposition Prompting Techniques

Reading Time: 4 minutes
Last updated on September 27, 2024

Valeriia Kuka

Welcome to the decomposition section of the advanced Prompt Engineering Guide.

Quick Navigation

| Category | Techniques | Key Benefits |
|---|---|---|
| Core Concepts | • What is Decomposition Prompting?<br>• Evolution from Chain-of-Thought | • Foundation of decomposition<br>• Historical context |
| Logical Reasoning | • Chain-of-Logic (CoL)<br>• Faithful Chain-of-Thought | • Rule-based reasoning<br>• Logical relationships<br>• Interpretable decisions |
| Task Breakdown | • Decomposed (DecomP)<br>• Plan-and-Solve (PS) | • Complex task management<br>• Systematic planning |
| Computation & Code | • Program of Thoughts (PoT)<br>• Chain-of-Code (CoC) | • Precise calculations<br>• Code generation |
| Structure & Organization | • Skeleton-of-Thought (SoT)<br>• Tree of Thoughts (ToT)<br>• Recursion of Thought (RoT) | • Parallel processing<br>• Multiple solution paths<br>• Handling large tasks |

Decomposition prompting is a powerful approach that breaks down complex problems into simpler, more manageable sub-tasks. This technique is inspired by a fundamental human problem-solving strategy and has shown remarkable success in enhancing AI performance without requiring larger models.

Consider this math word problem: "If John has 15 apples and gives away 1/3 of them to Mary, who then shares half of her apples with Tom, how many apples does Tom have?"

This can be decomposed into simpler questions:

  1. "How many apples did Mary receive?" (1/3 of 15)
  2. "How many apples did Mary give to Tom?" (half of Mary's apples)
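The sub-questions above can be made fully explicit in code. Here is a minimal sketch in which plain arithmetic stands in for an LLM answering each sub-question; the function name and return format are illustrative choices, not part of any published method.

```python
# Solve the apple problem by answering each sub-question in order,
# so every intermediate result is explicit and checkable.

def solve_apple_problem(john_apples: int) -> dict:
    """Decompose the word problem into explicit sub-answers."""
    mary_apples = john_apples / 3   # Sub-question 1: 1/3 of John's apples
    tom_apples = mary_apples / 2    # Sub-question 2: half of Mary's apples
    return {"mary": mary_apples, "tom": tom_apples}

answers = solve_apple_problem(15)
print(answers)  # {'mary': 5.0, 'tom': 2.5}
```

Each sub-answer feeds the next, which is the core pattern every technique in this section elaborates on.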

In this section of our guide, you'll explore advanced prompting techniques that build upon this decomposition approach. While they share the principles of Chain-of-Thought (CoT) Prompting[1], they go a step further by explicitly decomposing tasks, significantly improving the problem-solving abilities of Large Language Models (LLMs).

Evolution from Chain-of-Thought

Chain-of-Thought (CoT) Prompting[2] was one of the first techniques to elicit explicit, step-by-step reasoning from LLMs.

Zero-Shot Chain-of-Thought (CoT) Prompting expanded on this idea by introducing the simple phrase, "Let's think step-by-step," which led to significant performance improvements in multi-step reasoning tasks such as symbolic reasoning, math problems, and logic puzzles[3].

While powerful, these techniques had limitations:

  • Calculation errors
  • Missing steps
  • Semantic misunderstandings

Decomposition techniques build upon these foundations while addressing their limitations through more structured approaches.

Advanced Decomposition Techniques

Chain-of-Logic

Chain-of-Logic (CoL)[4] is a structured prompting technique designed specifically for complex rule-based reasoning tasks. Unlike other decomposition methods, CoL focuses on the logical relationships between components, making it particularly effective for legal reasoning and other rule-based decision-making processes.
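To make the idea concrete, here is a toy sketch in CoL's spirit: a rule is decomposed into its elements, each element is answered separately, and the answers are recombined through the rule's explicit logical expression. The rule (common-law burglary) and all field names are illustrative assumptions, not taken from the CoL paper.

```python
# Hypothetical rule: burglary requires ALL elements to hold.
# Chain-of-Logic style: evaluate each element independently, then
# combine them with the rule's explicit logic (here, a conjunction).

def chain_of_logic(facts: dict) -> bool:
    elements = {
        "breaking": facts["forced_entry"],
        "entering": facts["went_inside"],
        "dwelling": facts["building_is_home"],
        "at_night": facts["hour"] >= 21 or facts["hour"] < 6,
        "intent":   facts["planned_theft"],
    }
    # The rule's logical expression: every element must be satisfied.
    return all(elements.values())

facts = {"forced_entry": True, "went_inside": True,
         "building_is_home": True, "hour": 23, "planned_theft": False}
print(chain_of_logic(facts))  # False: the intent element fails
```

Because each element is answered on its own, a wrong conclusion can be traced to the specific element that failed, which is what makes this style of decision interpretable.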

Decomposed (DECOMP) Prompting

Decomposed Prompting (DECOMP)[5] is the foundational technique that breaks down complex tasks into simpler sub-tasks and assigns them to appropriate handlers.

These handlers can:

  • further decompose the task
  • solve it using a simple prompt
  • use a function to complete the sub-task
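A minimal sketch of the controller-and-handlers pattern follows; in a real DECOMP system some handlers would be further prompts to the model, but here simple functions stand in, and the handler names and task format are illustrative assumptions.

```python
# Toy DECOMP-style controller: each named sub-task is routed to a
# handler, which could further decompose, prompt a model, or (as here)
# just run a function.

def split_handler(text: str) -> list:
    return text.split()

def reverse_handler(items: list) -> list:
    return items[::-1]

def join_handler(items: list) -> str:
    return " ".join(items)

HANDLERS = {"split": split_handler,
            "reverse": reverse_handler,
            "join": join_handler}

def decomp(task_plan: list, value):
    """Run a sequence of named sub-tasks, piping each output forward."""
    for step in task_plan:
        value = HANDLERS[step](value)
    return value

print(decomp(["split", "reverse", "join"], "reverse these words"))
# words these reverse
```

The key design choice is modularity: each handler can be tested, swapped, or improved independently of the overall plan.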

Plan-and-Solve (PS) Prompting

Plan-and-Solve (PS) prompting[6] enhances reasoning by addressing missing-step errors in Zero-Shot CoT prompting. This method introduces an intermediate planning phase before problem-solving, improving the model's ability to avoid skipping critical reasoning steps.
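In practice, PS works by swapping Zero-Shot CoT's trigger phrase for one that asks for a plan first. The sketch below adapts the planning trigger described in the PS paper; `plan_and_solve_prompt` and the exact prompt layout are illustrative, and the model call itself is left out.

```python
# Build a Plan-and-Solve prompt: the trigger asks the model to devise a
# plan before executing it, in place of "Let's think step by step."

PS_TRIGGER = (
    "Let's first understand the problem and devise a plan to solve it. "
    "Then, let's carry out the plan and solve the problem step by step."
)

def plan_and_solve_prompt(question: str) -> str:
    return f"Q: {question}\nA: {PS_TRIGGER}"

print(plan_and_solve_prompt("If John has 15 apples and gives away 1/3..."))
```

The planning phase gives the model an explicit checklist, which is what reduces skipped steps relative to Zero-Shot CoT.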

Program-of-Thoughts (PoT) Prompting

Program-of-Thoughts (PoT) Prompting[7] separates reasoning from computation. Instead of solving problems directly, PoT expresses reasoning as executable code (e.g., Python) for more accurate solutions.

This approach is particularly valuable because LLMs often struggle with:

  • Complex calculations
  • Iterative processes
  • Mathematical precision
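For example, in a PoT setup the model would emit code like the following for a compound-interest question, exactly the kind of iterative calculation LLMs often get wrong in plain text, and an interpreter would compute the final number. The function and its parameters are an illustrative stand-in for model-generated code.

```python
# PoT idea: the reasoning is the program; the interpreter, not the
# model, performs the iteration and the arithmetic.

def compound_balance(principal: float, rate: float, years: int) -> float:
    balance = principal
    for _ in range(years):      # exact iteration, handled by the interpreter
        balance *= 1 + rate
    return round(balance, 2)

print(compound_balance(1000, 0.05, 10))  # 1628.89
```

Offloading the arithmetic this way eliminates the calculation errors listed above while leaving the semantic work (choosing the formula) to the model.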

Faithful Chain-of-Thought (CoT) Reasoning

Faithful Chain-of-Thought (CoT)[8] ensures that LLM reasoning truly reflects the path to the answer. This increases trust and interpretability by guaranteeing that final answers derive directly from the reasoning chain.
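One way to picture the guarantee: if the reasoning chain is a list of executable steps and the answer is produced only by running that chain, the stated reasoning cannot diverge from the answer. The sketch below illustrates this with the earlier apple problem; the step format and `faithful_chain` are illustrative assumptions, not the paper's actual symbolic language.

```python
# Faithful-CoT-style sketch: each reasoning step is named and
# executable, and the final answer is read off the executed chain.

def faithful_chain(question_vars: dict) -> float:
    chain = [
        ("mary = john / 3", lambda v: {**v, "mary": v["john"] / 3}),
        ("tom = mary / 2",  lambda v: {**v, "tom": v["mary"] / 2}),
    ]
    state = dict(question_vars)
    for description, step in chain:
        state = step(state)
        print("step:", description, "->", state)
    return state["tom"]          # the answer comes from the chain itself

print(faithful_chain({"john": 15}))  # 2.5
```

By construction there is no way to print one chain of reasoning and return an answer computed some other way, which is the faithfulness property.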

Skeleton-of-Thought (SoT) Prompting

Skeleton-of-Thought (SoT) prompting[9] improves efficiency through a two-stage approach:

  1. Generate a basic "skeleton" outline
  2. Expand details in parallel

This method reduces latency while improving answer quality.
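The two stages can be sketched as follows, with stub functions standing in for the two kinds of LLM calls; `make_skeleton` and `expand` are illustrative names, not a real API.

```python
# SoT sketch: stage 1 drafts a short outline, stage 2 expands every
# outline point concurrently, since the expansions are independent.

from concurrent.futures import ThreadPoolExecutor

def make_skeleton(question: str) -> list:
    # Stub: a real system would prompt the model for a 3-5 point outline.
    return ["Define the problem", "List options", "Recommend one"]

def expand(point: str) -> str:
    # Stub for an independent per-point LLM call.
    return f"{point}: ...expanded detail..."

def skeleton_of_thought(question: str) -> str:
    skeleton = make_skeleton(question)
    with ThreadPoolExecutor() as pool:     # expansions run in parallel
        sections = list(pool.map(expand, skeleton))
    return "\n".join(sections)

print(skeleton_of_thought("Which database should we pick?"))
```

Because the expansions do not depend on one another, wall-clock latency approaches the time of the slowest single expansion rather than their sum.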

Tree of Thoughts (ToT) Prompting

Tree of Thoughts (ToT) prompting[10] enables exploration of multiple reasoning paths in a tree-like structure. This allows:

  • Multiple solution attempts
  • Path evaluation
  • Backtracking when needed

The technique excels in creative problem-solving, math reasoning, and complex puzzles.
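The tree structure is easiest to see on a toy task. The sketch below searches reasoning paths that turn 1 into a target number using +1 or *2 moves, keeping the branching, pruning, and backtracking of ToT; a real ToT system would additionally score intermediate thoughts with the model rather than with the simple bound used here, and the whole task is an illustrative assumption.

```python
# Tiny ToT-style search: each "thought" is a partial solution path;
# candidates are expanded breadth-first, hopeless branches are pruned,
# and the queue provides backtracking to unexplored alternatives.

from collections import deque

def tree_of_thoughts(target: int, max_depth: int = 12):
    queue = deque([(1, [])])            # (current value, path of moves)
    while queue:
        value, path = queue.popleft()
        if value == target:
            return path                 # a successful reasoning path
        if len(path) >= max_depth or value > target:
            continue                    # prune this branch, backtrack
        queue.append((value + 1, path + ["+1"]))
        queue.append((value * 2, path + ["*2"]))
    return None

print(tree_of_thoughts(10))  # ['+1', '*2', '+1', '*2']
```

Swapping the bound-based pruning for a model-scored evaluation of each partial path is what turns this skeleton into ToT proper.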

Recursion of Thought (RoT) Prompting

Recursion of Thought (RoT) prompting[11] tackles context length limitations through recursive decomposition. This divide-and-conquer approach is particularly effective for:

  • Large-scale tasks
  • Multi-digit arithmetic
  • Complex sequences
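Multi-digit arithmetic illustrates the divide-and-conquer idea well: each recursive call handles only one digit pair plus a carry, mimicking how RoT gives each sub-problem its own small context. The function below is an illustrative sketch, not the paper's actual procedure.

```python
# RoT-style sketch: a problem too large for one context is split
# recursively; each call sees only a constant-size piece of it.

def add_recursive(a: str, b: str, carry: int = 0) -> str:
    if not a and not b:
        return str(carry) if carry else ""
    da = int(a[-1]) if a else 0
    db = int(b[-1]) if b else 0
    total = da + db + carry
    # Recurse on the strictly smaller remaining prefixes.
    return add_recursive(a[:-1], b[:-1], total // 10) + str(total % 10)

print(add_recursive("987654321", "123456789"))  # 1111111110
```

No single call ever needs the whole problem in view, which is exactly how RoT sidesteps context-length limits.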

Chain-of-Code (CoC)

Chain-of-Code (CoC)[12] is an innovative framework that combines the precision of code execution with the flexibility of language-based reasoning. Unlike other techniques, CoC bridges the gap between semantic reasoning and numerical computation by allowing models to generate and execute a mix of code and natural language.
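The sketch below captures the execute-or-simulate loop at the heart of CoC: each line is run by the interpreter when possible, and otherwise deferred to a stand-in "LMulator." In a real system that fallback is an LLM simulating the non-executable line; here a hard-coded lookup table is an illustrative assumption.

```python
# CoC sketch: precise execution where code runs, LM simulation where
# it does not (e.g., semantic predicates like is_fruit).

def lmulator(line: str, env: dict):
    # Placeholder for the LM simulating a non-executable semantic line.
    fallbacks = {"is_fruit('apple')": True, "is_fruit('rock')": False}
    name, expr = line.split(" = ", 1)
    env[name] = fallbacks[expr]

def chain_of_code(program: list) -> dict:
    env: dict = {}
    for line in program:
        try:
            exec(line, {}, env)      # precise execution when possible
        except Exception:
            lmulator(line, env)      # semantic step: defer to the LM
    return env

env = chain_of_code([
    "count = 0",
    "a = is_fruit('apple')",
    "count = count + (1 if a else 0)",
    "r = is_fruit('rock')",
    "count = count + (1 if r else 0)",
])
print(env["count"])  # 1
```

The interleaving is the point: arithmetic stays exact while fuzzy semantic judgments are handled where code alone cannot go.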

Conclusion and Next Steps

These advanced prompting techniques represent different approaches to enhancing LLM capabilities through decomposition. Each technique offers unique advantages for specific types of problems while building upon the core principle of breaking down complex tasks.

To get started:

  1. Choose a technique based on your specific needs
  2. Explore the detailed guides for each method
  3. Experiment with different approaches in your projects

Feel free to explore the links provided for deeper dives into each technique. Happy prompting!

🟦 Chain of Code (CoC)

🟒 Chain-of-Logic

🟦 Decomposed Prompting

🟦 Duty-Distinct Chain-of-Thought (DDCoT)

β—† Faithful Chain-of-Thought

🟦 Plan-and-Solve Prompting

🟦 Program of Thoughts

β—† Recursion of Thought

β—† Skeleton-of-Thought

🟦 Tree of Thoughts

Footnotes

  1. Wei, J., et al. (2022). Chain-of-Thought Prompting Elicits Reasoning in Large Language Models.

  2. Wei, J., Wang, X., Schuurmans, D., Bosma, M., Ichter, B., Xia, F., Chi, E., Le, Q., & Zhou, D. (2022). Chain of Thought Prompting Elicits Reasoning in Large Language Models.

  3. Kojima, T., Gu, S. S., Reid, M., Matsuo, Y., & Iwasawa, Y. (2022). Large Language Models are Zero-Shot Reasoners.

  4. Servantez, S., Barrow, J., Hammond, K., & Jain, R. (2024). Chain of Logic: Rule-Based Reasoning with Large Language Models. https://arxiv.org/abs/2402.10400

  5. Khot, T., et al. (2023). Decomposed Prompting: A Modular Approach for Solving Complex Tasks.

  6. Wang, L., et al. (2023). Plan-and-Solve Prompting: Improving Zero-Shot Chain-of-Thought Reasoning by Large Language Models.

  7. Chen, W., et al. (2022). Program of Thoughts Prompting: Disentangling Computation from Reasoning for Numerical Reasoning Tasks.

  8. Lyu, Q., et al. (2023). Faithful Chain-of-Thought Reasoning.

  9. Ning, X., et al. (2023). Skeleton-of-Thought: Prompting LLMs for Efficient Parallel Generation.

  10. Yao, S., et al. (2023). Tree of Thoughts: Deliberate Problem Solving with Large Language Models.

  11. Lee, S., & Kim, G. (2023). Recursion of Thought: A Divide-and-Conquer Approach to Multi-Context Reasoning with Language Models.

  12. Li, C., Liang, J., Zeng, A., Chen, X., Hausman, K., Sadigh, D., Levine, S., Fei-Fei, L., Xia, F., & Ichter, B. (2024). Chain of Code: Reasoning with a Language Model-Augmented Code Emulator. https://arxiv.org/abs/2312.04474

Valeriia Kuka

Valeriia Kuka, Head of Content at Learn Prompting, is passionate about making AI and ML accessible. Valeriia previously grew a 60K+ follower AI-focused social media account, earning reposts from Stanford NLP, Amazon Research, Hugging Face, and AI researchers. She has also worked with AI/ML newsletters and global communities with 100K+ members and authored clear and concise explainers and historical articles.

Β© 2025 Learn Prompting. All rights reserved.