
🟦 Decomposed (DECOMP) Prompting

Last updated on August 19, 2024 by Bhuwan Bhatt
Takeaways
  • Understand the limitations of few-shot prompting.
  • Understand the working mechanism and advantages of decomposed prompting.

Introduction

Few-shot prompting allows Large Language Models (LLMs) to solve problems without explicit fine-tuning. However, this approach struggles as task complexity increases. Decomposed Prompting1 or DECOMP is a modular approach that tackles complex tasks by breaking them into simpler sub-tasks, delegating each sub-task to LLMs or other handlers better suited to solve them.

How to Use Decomposed Prompting?

First, a decomposer prompt outlines the process of solving a complex task through smaller sub-tasks. Each of these sub-tasks is then handled by specific sub-task handlers. These handlers can:

  • Use decomposed prompting to further break down the task,
  • Use a simple prompt to solve the sub-task, or
  • Apply a function to handle the sub-task.
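Because every handler sits behind the same interface, a sub-task can be served by an LLM prompt or by a plain function interchangeably. The sketch below illustrates this; the `call_llm` stub and the handler names are illustrative assumptions, not part of the paper (a real handler would call an actual model API):

```python
# Hedged sketch: sub-task handlers share one interface, so a handler may be
# a simple LLM prompt, a symbolic function, or a nested decomposition.
# `call_llm` is a hypothetical stand-in for a real LLM API call.

def call_llm(prompt: str) -> str:
    # Stubbed so the example runs without an API; a real handler would
    # send `prompt` to a model and return its completion.
    return '["Jack", "Ryan"]'

def split_via_llm(question: str) -> str:
    # Handler type: a simple few-shot prompt solves the sub-task.
    return call_llm(f"Q: {question}\nA:")

def merge_via_function(letters: list[str]) -> str:
    # Handler type: a symbolic function handles the sub-task directly.
    return " ".join(letters)

print(split_via_llm('What are the words in "Jack Ryan"?'))  # ["Jack", "Ryan"]
print(merge_via_function(["J", "R"]))                       # J R
```

Swapping a function in for a prompt (or vice versa) requires no change to the rest of the pipeline, which is what makes the approach modular.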

There are three key advantages to this technique:

  1. Each sub-task handler can be given richer, more targeted exemplars, leading to more accurate responses.
  2. Complex sub-tasks can be further simplified and solved.
  3. Sub-task handlers can be reused across multiple tasks.

DECOMP in Action: Letter Concatenation

Let’s consider an example of Decomposed Prompting in action. Suppose we need to concatenate the first letter of each word in a string, using spaces as separators. This can be achieved by breaking the problem into three sub-tasks:

  1. Split the string into a list of words.
  2. Extract the first letter from each word.
  3. Concatenate the extracted letters, using spaces as separators.
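The three sub-tasks above can be sketched as ordinary Python functions. This is a minimal, non-LLM illustration of the decomposition itself; in DECOMP, any of these functions could instead be delegated to an LLM prompt:

```python
# Each function stands in for one sub-task handler in the decomposition.

def split_words(text: str) -> list[str]:
    # Sub-task 1: split the string into a list of words.
    return text.split()

def first_letters(words: list[str]) -> list[str]:
    # Sub-task 2: extract the first letter of each word.
    return [w[0] for w in words]

def merge(letters: list[str], sep: str = " ") -> str:
    # Sub-task 3: concatenate the letters, using a separator.
    return sep.join(letters)

result = merge(first_letters(split_words("Jack Ryan")))
print(result)  # J R
```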

First, the decomposer specifies the sequence of questions and corresponding sub-tasks:

Prompt:

QC: Concatenate the first letter of every word in "Jack Ryan" using spaces
Q1: [split] What are the words in "Jack Ryan"?
#1: ["Jack", "Ryan"]
Q2: (foreach) [str_pos] What is the first letter of #1?
#2: ["J", "R"]
Q3: [merge] Concatenate #2 with spaces
#3: "J R"
Q4: [EOQ]

Here, [EOQ] marks the end of the question sequence. Each sub-task is processed by its handler, and the answer from one sub-task is passed to the next: the result of Q1 (["Jack", "Ryan"]) becomes the input to Q2.

In the example, the decomposer relies on task-specific sub-task handlers:

  • split: Splits a string into a list of words or characters
  • str_pos: Extracts the character at a given position (here, the first letter of each word)
  • merge: Concatenates a list of characters, optionally separated by spaces

The few-shot prompts for the split and merge handlers might look like this:

  • split

Prompt:

Q: What are the words in "Elon Musk Tesla"?
A: ["Elon", "Musk", "Tesla"]
Q: What are the letters in "C++"?
A: ["C", "+", "+"]
...

  • merge

Prompt:

Q: Concatenate ["n", "i", "e"]
A: "nie"
Q: Concatenate ["n", "i", "c", "e"] using spaces
A: "n i c e"
...

Once all the sub-tasks are executed, the last answer before [EOQ] is returned as the final response.

A more detailed representation of the prompt execution and inference procedure is provided in the example below:

Inference procedure in Decomposed Prompting1

The decomposer prompt determines the first sub-task to be completed: splitting words in this case. The sub-task is handled by the split sub-task handler and the answer generated is appended to the decomposer prompt to get the second sub-task. The process continues until the decomposer prompt produces [EOQ]. At this point, there are no more tasks left and the last answer is returned as the solution.
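The control flow described above can be sketched as a small loop. Here the decomposer is simulated with a fixed script of handler names (in the real method, an LLM generates each next sub-task from the growing prompt), and the handlers are symbolic functions; both simplifications are assumptions made so the sketch runs:

```python
# Hedged sketch of the DECOMP inference loop: execute sub-tasks in order,
# feeding each answer into the next step, until the decomposer emits EOQ.

def split_handler(text):
    return text.split()

def str_pos_handler(words, pos=0):
    # (foreach) semantics: applied to each element of the previous answer.
    return [w[pos] for w in words]

def merge_handler(letters, sep=" "):
    return sep.join(letters)

HANDLERS = {"split": split_handler, "str_pos": str_pos_handler, "merge": merge_handler}

def run_decomp(steps, initial_input):
    # `steps` simulates the decomposer's output; a real decomposer is an LLM.
    answer = initial_input
    for name in steps:
        if name == "EOQ":      # decomposer signals that no tasks remain
            return answer      # the last answer is the final response
        answer = HANDLERS[name](answer)
    return answer

print(run_decomp(["split", "str_pos", "merge", "EOQ"], "Jack Ryan"))  # J R
```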

What Are Decomposed Prompting Results?

  • Decomposed Prompting outperforms and generalizes better than Chain-of-Thought (CoT) and Least-to-Most prompting on exact-match (EM) results for the k-th letter concatenation task (with k = 3 and space as the delimiter), suggesting that sub-task-specific prompts teach hard sub-tasks more effectively than a single CoT prompt.

Comparison of models on the k-th letter concatenation task1

  • Decomposed Prompting with CoT further increases the ability of the model to generalize to longer sequence lengths.

Comparison of models on the sequence reversal task1

Conclusion

While few-shot prompting is an effective technique for improving the performance of LLMs, it can easily fail when the task is more complex than the exemplars fed to the LLM. Decomposed Prompting, on the other hand, enhances performance by breaking the task into sub-tasks and delegating each one to a task-specific handler, which can be another LLM prompt, a function, or a trained model.

Footnotes

  1. Tushar Khot et al. (2023). Decomposed Prompting: A Modular Approach for Solving Complex Tasks.

Copyright © 2024 Learn Prompting.