
🟦 Logic-of-Thought (LoT)

Last updated on October 3, 2024 by Valeriia Kuka
Figure: Overview of Logic-of-Thought (LoT) prompting

What is Logic-of-Thought (LoT)?

Logic-of-Thought (LoT)¹ is a novel technique designed to improve the logical reasoning abilities of large language models (LLMs). While LLMs are highly effective across many tasks, they struggle with complex logical reasoning, especially when using traditional methods like Chain-of-Thought (CoT).

LoT addresses these challenges by injecting formal propositional logic into prompts, guiding LLMs through more accurate reasoning processes. It adds logical information to input prompts, avoiding the information loss that often occurs when LLMs attempt symbolic reasoning.

How does LoT work?

LoT operates in three phases to augment input prompts with logical reasoning:

  1. Logic Extraction: LoT uses LLMs to extract logical propositions and relationships from the input. It identifies key conditional or logical connections between elements of the context.

  2. Logic Extension: Logical expressions extracted from the first phase are expanded using formal logic rules (e.g., Transitive Law, Contraposition Law). This ensures that logical deductions are complete and align with human intuition.

  3. Logic Translation: The expanded logical expressions are translated back into natural language. This augmented information is then combined with the original input, enhancing the prompt and helping the LLM to reason more accurately.

For example, if the input contains statements about a person reading a book, LoT might extract logical propositions such as "If a person reads a book, they become smarter" and ensure this information is added to the LLM's reasoning process.
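To make the phases concrete, here is a minimal Python sketch of how the Phase 1 output for that book-reading example could be held in code. The symbols, wording, and variable names are illustrative assumptions, not part of the LoT paper or any specific library:

```python
# Minimal sketch: Phase 1 (Logic Extraction) output for the book-reading
# example. Symbols and wording are illustrative, not from the LoT paper.

# Propositions identified by the LLM, keyed by their uppercase symbol
propositions = {
    "A": "a person reads a book",
    "B": "the person becomes smarter",
}

# Causal expressions extracted by the LLM, stored as (premise, conclusion)
# pairs: "If a person reads a book, they become smarter"  =>  A → B
expressions = {("A", "B")}

for premise, conclusion in expressions:
    print(f"{premise} → {conclusion}: if {propositions[premise]}, "
          f"then {propositions[conclusion]}")
```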

How LoT differs from existing techniques

LoT improves on existing methods like Chain-of-Thought (CoT), Self-Consistency (SC), and Tree-of-Thoughts (ToT) by ensuring that logical information is systematically extracted and applied. Here's how it compares:

  • Chain-of-Thought (CoT): CoT adds intermediate reasoning steps but sometimes generates unfaithful conclusions. LoT addresses this by grounding reasoning steps in formal logic, reducing errors.

  • Neuro-symbolic approaches: Methods like LINC or SatLM combine LLMs with symbolic reasoning tools. However, these methods can lose information when converting problems into logical expressions. LoT avoids this by directly augmenting prompts without relying on external tools.

  • Tree-of-Thoughts (ToT): ToT explores multiple branches of reasoning, but LoT can further enhance this process by ensuring logical coherence within those branches.

Benefits and Applications

LoT is particularly useful for tasks requiring robust logical reasoning, such as solving puzzles, legal reasoning, or question answering on standardized tests. It is most valuable where logical consistency is crucial, and it remains effective on tasks that require several layers of deduction.

How to use LoT

Here’s an example process using LoT:

Phase 1. Logic Extraction Prompt

Please use uppercase English letters such as A, B, C, etc. to identify all possible propositions. Do not include negative tones such as "not" in the propositions. For example, if the sentence is "It is not bored," you should use "A: bored" to represent it.

Next, for each proposition, use the symbol ¬ to represent its negative form. For example, the negative form of proposition A can be expressed as ¬A.

Now, please carefully analyze the context and find causal relationships between propositions. A causal expression is only established when the context directly supports the relationship. Use arrows (→) to indicate causal relationships; for example, "If A, then B", "B if A", and "A causes B" can all be represented as A → B.

Finally, output propositions and causal expressions.

[Your input]

Phase 2. Logic Extension

In this phase, the logical expressions extracted in Phase 1 are expanded using formal logic rules such as the Transitive Law and the Contraposition Law, so that the resulting deductions are complete and align with human intuition. The expansion operates on the symbolic expressions themselves and can be carried out programmatically, as in the sketch below.
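The following is a minimal sketch of that expansion in Python, using the same (premise, conclusion) tuple representation as the earlier snippet. Only the two laws come from the LoT paper; the `negate()` and `extend_logic()` helpers are illustrative:

```python
# Minimal sketch: Phase 2 (Logic Extension). Expressions are modeled as
# (premise, conclusion) tuples, e.g. ("A", "B") for A → B.


def negate(literal: str) -> str:
    """Negate a literal: 'A' -> '¬A', '¬A' -> 'A'."""
    return literal[1:] if literal.startswith("¬") else "¬" + literal


def extend_logic(expressions: set[tuple[str, str]]) -> set[tuple[str, str]]:
    """Apply the two laws repeatedly until no new expression can be derived."""
    derived = set(expressions)
    changed = True
    while changed:
        changed = False
        # Contraposition Law: A → B entails ¬B → ¬A
        for premise, conclusion in list(derived):
            contra = (negate(conclusion), negate(premise))
            if contra not in derived:
                derived.add(contra)
                changed = True
        # Transitive Law: A → B and B → C entail A → C
        for p1, c1 in list(derived):
            for p2, c2 in list(derived):
                if c1 == p2 and (p1, c2) not in derived:
                    derived.add((p1, c2))
                    changed = True
    return derived


# Example: from A → B and B → C we also obtain A → C, ¬B → ¬A, ¬C → ¬B, ¬C → ¬A
for premise, conclusion in sorted(extend_logic({("A", "B"), ("B", "C")})):
    print(f"{premise} → {conclusion}")
```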

Phase 3. Logic Translation Prompt

[Output from Phase 2]

Please use the provided propositions to translate each expression into a complete sentence.

¬A represents the negation of proposition A, the arrow (→) represents the causal relationship, and A → B represents if A, then B.

Only output the sentences in a paragraph!
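Putting the three phases together, here is a hedged sketch of a full LoT pipeline under stated assumptions: `call_llm()` stands in for whatever LLM client you use, `parse_expressions()` is a hypothetical parser for your model's output format, `extend_logic()` is the Phase 2 helper sketched earlier, and the prompt arguments are the Phase 1 and Phase 3 prompts shown above. None of these names come from the LoT paper or a specific library:

```python
# Hedged end-to-end sketch of an LoT pipeline. call_llm(), parse_expressions(),
# and extend_logic() (from the Phase 2 sketch) are illustrative placeholders.
import re


def call_llm(prompt: str) -> str:
    """Placeholder: send `prompt` to your LLM of choice and return its reply."""
    raise NotImplementedError("wire this up to your LLM client")


def parse_expressions(extraction_output: str) -> set[tuple[str, str]]:
    """Pull 'X → Y' pairs out of the Phase 1 reply (format varies by model)."""
    return set(re.findall(r"(¬?[A-Z])\s*→\s*(¬?[A-Z])", extraction_output))


def logic_of_thought(question: str,
                     extraction_prompt: str,
                     translation_prompt: str) -> str:
    # Phase 1: Logic Extraction - get propositions and causal expressions
    extraction_output = call_llm(f"{extraction_prompt}\n\n{question}")
    expressions = parse_expressions(extraction_output)

    # Phase 2: Logic Extension - expand with the Transitive and Contraposition
    # Laws (extend_logic() as sketched in the previous snippet; no LLM call)
    extended = extend_logic(expressions)

    # Phase 3: Logic Translation - turn the expressions back into sentences
    expr_text = "\n".join(f"{p} → {c}" for p, c in sorted(extended))
    translated = call_llm(f"{expr_text}\n\n{translation_prompt}")

    # Finally, append the translated logic to the original input and answer
    return call_llm(f"{question}\n\n{translated}")
```

In practice, you would adapt `parse_expressions()` to the exact output format your model produces in Phase 1 and keep the extracted propositions so the translation step can reference them.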

Results of LoT

LoT has been tested across several logical reasoning datasets, showing significant improvements over baseline methods like CoT and ToT. The following table summarizes the performance gains when LoT is combined with other prompting methods:

| Dataset     | CoT   | LoT + CoT | SC(5) | LoT + SC(5) | ToT   | LoT + ToT |
|-------------|-------|-----------|-------|-------------|-------|-----------|
| ReClor      | 52.17 | +4.35     | 56.52 | +2.18       | 58.70 | +2.17     |
| LogiQA      | 34.00 | +2.50     | 36.60 | +1.40       | 34.50 | +5.00     |
| RuleTaker   | 60.70 | +0.90     | 59.00 | +1.00       | 65.50 | +0.00     |
| ProofWriter | 58.80 | +2.70     | 57.50 | +2.50       | 61.50 | +6.00     |
| FOLIO       | 78.00 | +0.00     | 76.00 | +2.60       | 80.00 | +0.00     |

  • ReClor Dataset: LoT improves Chain-of-Thought performance by +4.35% and boosts Self-Consistency by +2.18%.
  • LogiQA Dataset: LoT boosts performance by +2.50% over CoT and +1.40% over SC.
  • ProofWriter Dataset: LoT shows the highest gains on this dataset, improving ToT by +6%.

Conclusion

Logic-of-Thought (LoT) is a powerful approach for injecting formal logic into large language models, enhancing their ability to handle complex reasoning tasks. By systematically extracting, extending, and translating logical information into natural language, LoT augments existing methods like CoT and ToT, improving accuracy and reducing errors in logical reasoning tasks. This technique is particularly valuable for applications requiring precise logical deductions, such as legal reasoning or standardized test question-answering.

Footnotes

  1. Liu, T., Xu, W., Huang, W., Wang, X., Wang, J., Yang, H., & Li, J. (2024). Logic-of-Thought: Injecting Logic into Contexts for Full Reasoning in Large Language Models. https://arxiv.org/abs/2409.17539
