
🟦 Decomposed Prompting (DecomP)


Last updated on September 27, 2024

Takeaways
  • Understand the limitations of Few-Shot prompting.
  • Understand the working mechanism and advantages of Decomposed Prompting.

Introduction

Few-Shot Prompting allows Large Language Models (LLMs) to solve problems without explicit fine-tuning. However, this approach struggles as task complexity increases. Decomposed Prompting, or DecomP, is a modular approach that tackles complex tasks by breaking them into simpler sub-tasks and delegating each sub-task to an LLM or other handler better suited to solve it.

How to Use Decomposed Prompting

First, a decomposer prompt outlines the process of solving a complex task through smaller sub-tasks. Each of these sub-tasks is then handled by specific sub-task handlers. These handlers can:

  • Use Decomposed Prompting to further break down the task,
  • Use a simple prompt to solve the sub-task, or
  • Apply a function to handle the sub-task.
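
To make the pattern concrete, here is a minimal Python sketch of these handler types. It is an illustration under assumed names, not the paper's implementation: call_llm, prompt_handler, and function_handler are all hypothetical helpers introduced here.

def call_llm(prompt: str) -> str:
    """Hypothetical stand-in for a real LLM API call (assumption)."""
    raise NotImplementedError("plug in your LLM client here")

def prompt_handler(few_shot_prompt: str):
    """Build a handler that answers a sub-question via a simple Few-Shot prompt."""
    def handle(question: str) -> str:
        return call_llm(f"{few_shot_prompt}\nQ: {question}\nA:")
    return handle

def function_handler(fn):
    """Build a handler that answers a sub-question with plain code instead of an LLM."""
    return fn

# Handlers are registered under the name the decomposer uses, e.g. [split].
# A nested decomposer is itself just another handler, so decomposition can recurse.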

There are three key advantages to this technique:

  1. Each sub-task handler can be given richer, more targeted exemplars, leading to more accurate responses.
  2. Complex sub-tasks can be further simplified and solved.
  3. Sub-task handlers can be reused across multiple tasks.

DecomP in Action: Letter Concatenation

Let’s consider an example of Decomposed Prompting in action. Suppose we need to concatenate the first letter of each word in a string, using spaces as separators. This can be achieved by breaking the problem into three sub-tasks:

  1. Split the string into a list of words.
  2. Extract the first letter from each word.
  3. Concatenate the extracted letters, using spaces as separators.
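
Expressed as ordinary Python, the three sub-tasks might look like the sketch below; in DecomP, any of these steps could instead be delegated to an LLM prompt, but plain code makes explicit what each step computes.

def split_words(s: str) -> list[str]:
    """Sub-task 1: split the string into a list of words."""
    return s.split()

def first_letters(words: list[str]) -> list[str]:
    """Sub-task 2: extract the first letter of each word."""
    return [w[0] for w in words]

def merge_with_spaces(letters: list[str]) -> str:
    """Sub-task 3: concatenate the letters, using spaces as separators."""
    return " ".join(letters)

print(merge_with_spaces(first_letters(split_words("Jack Ryan"))))  # prints: J R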

First, the decomposer specifies the sequence of questions and corresponding sub-tasks:

Prompt:

QC: Concatenate the first letter of every word in "Jack Ryan" using spaces
Q1: [split] What are the words in "Jack Ryan"?
#1: ["Jack", "Ryan"]
Q2: (foreach) [str_pos] What is the first letter of #1?
#2: ["J", "R"]
Q3: [merge] Concatenate #2 with spaces
#3: "J R"
Q4: [EOQ]

Here, [EOQ] indicates the end of the question sequence. Each sub-task is processed by its handler, and the answer from one sub-task is passed to the next. For example, the result of Q1, ["Jack", "Ryan"], is used as the input for Q2.
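
The #N references are resolved by plain substitution before a sub-question is dispatched. A minimal sketch (the resolve_refs helper and the answers dictionary are illustrative names, not from the paper):

import re

def resolve_refs(question: str, answers: dict[str, str]) -> str:
    """Replace each #N placeholder with the answer to sub-task N."""
    return re.sub(r"#(\d+)", lambda m: answers[m.group(1)], question)

answers = {"1": '["Jack", "Ryan"]'}
print(resolve_refs("What is the first letter of #1?", answers))
# prints: What is the first letter of ["Jack", "Ryan"]?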

In the example, the decomposer relies on three task-specific handlers:

  • split: Splits a string into a list of words or characters
  • str_pos: Returns the character at a given position in a string
  • merge: Concatenates a list of characters, optionally using a separator

The Few-Shot prompts for the split and merge handlers might look like this:

  • split

Prompt:

Q: What are the words in "Elon Musk Tesla"?
A: ["Elon", "Musk", "Tesla"]
Q: What are the letters in "C++"?
A: ["C", "+", "+"]
...

  • merge

Prompt:

Q: Concatenate ["n", "i", "e"]
A: "nie"
Q: Concatenate ["n", "i", "c", "e"] using spaces
A: "n i c e"
...

Once all the sub-tasks are executed, the last answer before [EOQ] is returned as the final response.

A more detailed representation of the prompt execution and inference procedure is provided in the example below:

Inference procedure in Decomposed Prompting

The decomposer prompt determines the first sub-task to be completed: splitting words, in this case. The sub-task is handled by the split sub-task handler, and the generated answer is appended to the decomposer prompt to obtain the second sub-task. The process continues until the decomposer prompt produces [EOQ]. At that point, no tasks remain, and the last answer is returned as the solution.
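
As a sketch, the whole loop can be written as follows. It assumes the hypothetical call_llm helper from the earlier sketch returns the decomposer's next line, and that handlers maps names like "split" to the sub-task handlers described above:

import re

def decomp_inference(task: str, decomposer_prompt: str, handlers: dict) -> str:
    """Run the DecomP loop: ask the decomposer for the next sub-question,
    dispatch it to its handler, append the answer, and stop at [EOQ]."""
    prompt = f"{decomposer_prompt}\nQC: {task}"
    answers: dict[str, str] = {}
    step = 1
    while True:
        line = call_llm(prompt)  # e.g. 'Q1: [split] What are the words in "Jack Ryan"?'
        prompt += "\n" + line
        match = re.match(r'Q\d+:\s*(?:\(foreach\)\s*)?\[(\w+)\]\s*(.*)', line)
        if match is None or match.group(1) == "EOQ":
            return answers[str(step - 1)]  # the last answer is the final response
        name, question = match.group(1), match.group(2)
        # Substitute #N references with earlier answers before dispatching.
        question = re.sub(r"#(\d+)", lambda m: answers[m.group(1)], question)
        answers[str(step)] = handlers[name](question)
        prompt += f"\n#{step}: {answers[str(step)]}"
        step += 1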

What Are the Results of Decomposed Prompting?

  • Decomposed Prompting outperforms, and generalizes better than, Chain-of-Thought (CoT) prompting and Least-to-Most prompting in exact-match (EM) results on the k-th letter concatenation task (with k=3 and a space delimiter), implying that sub-task-specific prompts are more effective at teaching hard sub-tasks than a single CoT prompt.

Comparison of the models on the k-th letter concatenation task

  • Decomposed Prompting with Chain-of-Thought further increases the ability of the model to generalize to longer sequence lengths.

Comparison of the models on the sequence reversal task

Conclusion

While Few-Shot prompting is an effective technique for improving LLM performance, it can easily fail when the task is more complex than the exemplars fed to the LLM. Decomposed Prompting, on the other hand, enhances performance by breaking the task into sub-tasks and delegating each one to a task-specific handler, which can be another LLM, a function, or a trained model.

Bhuwan Bhatt

Bhuwan Bhatt, a Machine Learning Engineer with over 5 years of industry experience, is passionate about solving complex challenges at the intersection of machine learning and Python programming. Bhuwan has contributed his expertise to leading companies, driving innovation in AI/ML projects. Beyond his professional endeavors, Bhuwan is deeply committed to sharing his knowledge and experiences with others in the field. He firmly believes in continuous improvement, striving to grow by 1% each day in both his technical skills and personal development.

Footnotes

  1. Tushar Khot et al. (2023). Decomposed Prompting: A Modular Approach for Solving Complex Tasks.
