
🟦 Tabular Chain of Thought (Tab-CoT) Prompting

Last updated on October 3, 2024 by Valeriia Kuka
Overview of Tabular Chain of Thought (Tab-CoT) Prompting

Information and Links

| Technique | Institution | Date of Publication | Paper | Code |
|---|---|---|---|---|
| Tabular Chain of Thought Prompting (Tab-CoT) | StatNLP Research Group, Singapore University of Technology and Design | May 2023 | [Tab-CoT: Zero-shot Tabular Chain of Thought](https://arxiv.org/abs/2305.17812) | Code |

What is Tabular Chain of Thought Prompting (Tab-CoT)?

Tabular Chain of Thought Prompting (Tab-CoT)1 is a novel approach to Chain-of-Thought (CoT) prompting. Tab-CoT structures the reasoning process of CoT in the form of a table.

Unlike traditional CoT methods that rely on verbose natural language prompts, Tab-CoT leverages the power of tables. This allows large language models (LLMs) to reason in a two-dimensional format, ensuring consistency and facilitating a more organized thought process.

How Tab-CoT Differs from Existing Techniques

  1. Zero-shot CoT vs. Tab-CoT: Zero-shot CoT uses “Let’s think step by step” to guide the LLM through reasoning. However, these methods tend to be verbose and often result in less organized outputs. In contrast, Tab-CoT generates concise, structured reasoning steps in a table format. It allows for 2-dimensional reasoning, enabling the model to check for consistency across both rows and columns.

  2. CoT vs. Tab-CoT: In CoT, human-engineered reasoning demonstrations are used to guide the model. While this method can yield high performance, it requires significant effort to manually create task-specific examples. Tab-CoT removes this need by automatically generating the reasoning structure in a table, making it more scalable across various tasks without manual intervention.

How Tab-CoT Works

Tab-CoT encourages the LLM to capture its reasoning as a series of steps in a table format.

The table typically has the following columns:

  • Step: Represents the current reasoning step.
  • Subquestion: A sub-question the model aims to answer at each step.
  • Process: The reasoning or calculation performed at that step.
  • Result: The final answer for that step.

This format breaks down complex problems into manageable steps, enabling the model to "think" in a structured way before generating the final answer.

Problem


A chef needs to cook 9 potatoes. He has already cooked 7. If each potato takes 3 minutes to cook, how long will it take him to cook the rest?

Tab-CoT's Table:

| Step | Subquestion | Process | Result |
|---|---|---|---|
| 1 | How many potatoes are left to cook? | 9 - 7 = 2 | 2 |
| 2 | How many minutes will it take? | 2 * 3 = 6 | 6 |

This table allows LLMs to provide a more organized and efficient reasoning process compared to standard CoT, which may involve verbose, unstructured explanations.
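Once the model emits a table like the one above, the per-step results can be read back programmatically. Assuming the model returns pipe-delimited rows (as in the paper's template), a minimal parser might look like this; the function name and row format are illustrative, not part of the Tab-CoT paper:

```python
def parse_tab_cot(table_text: str) -> list[dict]:
    """Parse a pipe-delimited Tab-CoT table into a list of row dicts."""
    lines = [line.strip() for line in table_text.strip().splitlines() if line.strip()]
    # First row is the header: |step|subquestion|process|result|
    header = [cell.strip().lower() for cell in lines[0].strip("|").split("|")]
    rows = []
    for line in lines[1:]:
        cells = [cell.strip() for cell in line.strip("|").split("|")]
        rows.append(dict(zip(header, cells)))
    return rows

table = """
|step|subquestion|process|result|
|1|How many potatoes are left to cook?|9 - 7 = 2|2|
|2|How many minutes will it take?|2 * 3 = 6|6|
"""
steps = parse_tab_cot(table)
print(steps[-1]["result"])  # result of the final reasoning step: "6"
```

Reading the `result` column of the last row gives a simple, deterministic fallback when the answer-extraction prompt (described below) is not used.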

How to Use Tab-CoT

To use Tab-CoT, follow these steps:

Step 1. Formulating the Table

The reasoning is structured in a table format with predefined columns that reflect the step-by-step thinking process.

Here's the prompt template:

Table Generation Prompt


[Your Question]

|step|subquestion|procedure|result|
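Constructing this prompt amounts to appending the table header to the question, so the model's continuation fills in the rows. A minimal sketch (the helper name is ours; the column names come from the template above):

```python
# Column header from the Tab-CoT template; the model continues by filling rows.
TABLE_HEADER = "|step|subquestion|procedure|result|"

def build_tab_cot_prompt(question: str) -> str:
    """Place the question above the empty table header, per the Tab-CoT template."""
    return f"{question}\n{TABLE_HEADER}"

prompt = build_tab_cot_prompt(
    "A chef needs to cook 9 potatoes. He has already cooked 7. "
    "If each potato takes 3 minutes to cook, how long will it take him to cook the rest?"
)
print(prompt)
```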

Step 2. Answer Extraction

Once the table is generated, a final prompt like "Therefore, the answer is" can be used to extract the result from the completed table. This ensures that the model provides the final answer only after performing all reasoning steps.

Answer Extraction Prompt


Therefore, the answer is
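The two stages chain together as a simple pipeline. In this sketch, `complete` is a stand-in for any text-completion call (prompt in, continuation out), not a real library API; `fake_complete` is a stub used purely for illustration:

```python
def tab_cot_answer(question: str, complete) -> str:
    """Two-stage Tab-CoT: (1) generate the reasoning table, (2) extract the answer.

    `complete` is a placeholder for any prompt -> continuation function;
    it is not a real library API.
    """
    # Stage 1: the model fills in table rows beneath the header.
    table_prompt = f"{question}\n|step|subquestion|procedure|result|"
    table = complete(table_prompt)

    # Stage 2: append the extraction cue to pull the final answer from the table.
    extraction_prompt = f"{table_prompt}\n{table}\nTherefore, the answer is"
    return complete(extraction_prompt).strip()

# Stub standing in for an LLM, so the pipeline runs end to end.
def fake_complete(prompt: str) -> str:
    if prompt.endswith("Therefore, the answer is"):
        return " 6 minutes."
    return ("|1|How many potatoes are left?|9 - 7 = 2|2|\n"
            "|2|How many minutes will it take?|2 * 3 = 6|6|")

print(tab_cot_answer("How long to cook the remaining potatoes?", fake_complete))
```

Swapping `fake_complete` for a real model call is all that is needed to run Tab-CoT against an actual LLM.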

Tip

The code and examples for Tab-CoT are open-sourced by researchers from the StatNLP Research Group, Singapore University of Technology and Design, and are available on GitHub for further research and implementation.

Results of Tab-CoT

Tab-CoT has been evaluated on multiple reasoning tasks, including arithmetic, symbolic, and commonsense reasoning tasks. Below are some key results from experiments comparing Zero-shot CoT and Tab-CoT:

| Task | Zero-shot CoT | Tab-CoT |
|---|---|---|
| SingleEq | 78.0% | 81.9% |
| AddSub | 69.6% | 70.9% |
| MultiArith | 78.7% | 81.2% |
| GSM8K | 40.7% | 44.4% |
| AQUA | 33.5% | 37.0% |
| SVAMP | 62.1% | 60.5% |
  • Efficiency: Tab-CoT reduces the number of tokens generated while maintaining or improving performance across most tasks.
  • Scalability: It works well in zero-shot and few-shot settings without needing manual design of task-specific examples.
  • Improved Reasoning: Tab-CoT’s structured table approach captures both vertical (step-wise) and horizontal (cross-step) reasoning, which can result in more accurate final answers.

Conclusion

Tab-CoT presents a significant advancement in CoT prompting methods by introducing a highly structured, tabular approach to reasoning. It offers a concise, scalable, and effective solution for reasoning tasks, outperforming traditional CoT methods in several cases. As LLMs continue to evolve, Tab-CoT's table-based reasoning structure could become a standard for promoting structured reasoning in language models.

Footnotes

  1. Jin, Z., & Lu, W. (2023). Tab-CoT: Zero-shot Tabular Chain of Thought. https://arxiv.org/abs/2305.17812

Copyright © 2024 Learn Prompting.