
🟢 Zero Shot Chain of Thought

Last updated on August 7, 2024 by Sander Schulhoff

Zero-Shot Chain of Thought (Zero-shot-CoT) prompting[1] is a follow-up to CoT prompting[2] that introduces an incredibly simple zero-shot prompt. Kojima et al. find that by appending the words "Let's think step by step." to the end of a question, LLMs are able to generate a chain of thought that answers the question. From this chain of thought, more accurate answers can then be extracted.

Zero Shot CoT (Kojima et al.)
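
To make this concrete, here is a minimal sketch of the technique using the OpenAI Python SDK. The model name and the example question are placeholders (not from the paper); the only part that is the technique itself is the appended trigger sentence.

```python
# Minimal Zero-shot-CoT sketch: the entire technique is one appended sentence.
# Assumes OPENAI_API_KEY is set in the environment; the model name is a placeholder.
from openai import OpenAI

client = OpenAI()

question = (
    "A juggler can juggle 16 balls. Half of the balls are golf balls, "
    "and half of the golf balls are blue. How many blue golf balls are there?"
)

# Append the Zero-shot-CoT trigger phrase to the question.
prompt = f"Q: {question}\nA: Let's think step by step."

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder; any instruction-following LLM works
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)  # prints the generated chain of thought
```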

Technically, the full Zero-shot-CoT process involves two separate prompts/completions. In the image below, the top bubble on the left generates a chain of thought, while the top bubble on the right takes in the output of the first prompt (including the first prompt itself) and extracts the answer from the chain of thought. This second prompt is a self-augmented prompt.

Full Zero Shot CoT Process (Kojima et al.)
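
A sketch of that two-prompt pipeline is below, reusing the `client` from the earlier snippet. The templates mirror the structure Kojima et al. describe (reasoning extraction, then answer extraction); the exact answer-extraction wording here is illustrative.

```python
def zero_shot_cot(client, question: str, model: str = "gpt-4o-mini") -> str:
    # Prompt 1 (reasoning extraction): elicit a chain of thought.
    reasoning_prompt = f"Q: {question}\nA: Let's think step by step."
    reasoning = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": reasoning_prompt}],
    ).choices[0].message.content

    # Prompt 2 (answer extraction): the self-augmented prompt, i.e. the first
    # prompt plus its completion, followed by an answer trigger.
    answer_prompt = f"{reasoning_prompt}\n{reasoning}\nTherefore, the answer is"
    answer = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": answer_prompt}],
    ).choices[0].message.content
    return answer
```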

Example

Here are a few demos (which only perform reasoning extraction). The first demo shows GPT-3 (text-davinci-003) failing a simple math question with a standard prompt, while the second uses a Zero-shot-CoT prompt and solves the problem correctly. Note how much simpler the Zero-shot-CoT prompt is compared to the CoT prompt.

Incorrect: with the standard prompt ("Q: <question> A:"), the model answers immediately and gets the question wrong.

Correct: with the Zero-shot-CoT prompt, the model first generates a chain of thought and arrives at the right answer.
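
The contrast between the two prompts looks like this in code; the question is a stand-in, not necessarily the one used in the demos.

```python
question = "When I was 6 my sister was half my age. Now I'm 70. How old is my sister?"

standard_prompt = f"Q: {question}\nA:"  # the model tends to jump straight to an answer
zero_shot_cot_prompt = f"Q: {question}\nA: Let's think step by step."  # elicits reasoning first
```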

Results

Zero-shot-CoT was also effective at improving results on arithmetic, commonsense, and symbolic reasoning tasks. However, unsurprisingly, it was usually not as effective as CoT prompting. An important use case for Zero-shot-CoT is when obtaining few-shot examples for CoT prompting is difficult.

Ablations of Interest

Kojima et al. experiment with a number of different Zero-shot-CoT prompts (e.g., "Let's solve this problem by splitting it into steps." or "Let's think about this logically."), but they find that "Let's think step by step." is the most effective for their chosen tasks.
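
To run a similar ablation on your own task, the pattern is simply to swap the trigger phrase and score the results. A sketch, with the scoring harness left out and an arbitrary benchmark-style question:

```python
# Candidate reasoning triggers, including the variants mentioned above.
triggers = [
    "Let's think step by step.",  # the most effective in Kojima et al.'s tasks
    "Let's solve this problem by splitting it into steps.",
    "Let's think about this logically.",
]

question = "A robe takes 2 bolts of blue fiber and half that much white fiber. How many bolts in total?"

for trigger in triggers:
    prompt = f"Q: {question}\nA: {trigger}"
    # Run the two-prompt pipeline (e.g., zero_shot_cot above) with this
    # trigger and score the extracted answer against the ground truth.
```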

Notes

The extraction step often must be task-specific, making Zero-shot-CoT less generalizable than it appears at first.

Anecdotally, I've found that Zero-shot-CoT-style prompts are sometimes effective at improving the length of completions for generative tasks. For example, consider the standard prompt "Write a story about a frog and a mushroom who become friends." Appending the words "Let's think step by step." to the end of this prompt leads to a much longer completion.
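
In code, the tweak is just string concatenation; as noted, the effect on length is anecdotal:

```python
story_prompt = "Write a story about a frog and a mushroom who become friends."

# Appending the trigger phrase; anecdotally this yields a longer completion.
longer_story_prompt = story_prompt + " Let's think step by step."
```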

Footnotes

  1. Kojima, T., Gu, S. S., Reid, M., Matsuo, Y., & Iwasawa, Y. (2022). Large Language Models are Zero-Shot Reasoners.

  2. Wei, J., Wang, X., Schuurmans, D., Bosma, M., Ichter, B., Xia, F., Chi, E., Le, Q., & Zhou, D. (2022). Chain of Thought Prompting Elicits Reasoning in Large Language Models.
