📄️ 🟢 Chain of Thought Prompting
Chain of Thought (CoT) prompting (@wei2022chain) is a recently developed prompting method that encourages the %%LLM|LLM%% to explain its reasoning step by step before giving a final answer.
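As a rough illustration (not taken from the page itself), a CoT prompt includes one or more worked examples whose answers spell out the reasoning, so the model imitates that pattern on the new question. The `llm()` function below is a hypothetical stand-in for whatever completion call you actually use; the later sketches in this list reuse it.

```python
# Hypothetical stand-in for a real completion call; replace with your own client.
def llm(prompt: str) -> str:
    return "...model completion..."

# A few-shot CoT prompt: the exemplar's answer spells out intermediate reasoning steps.
cot_prompt = """Q: Roger has 5 tennis balls. He buys 2 more cans of tennis balls. Each can has 3 tennis balls. How many tennis balls does he have now?
A: Roger started with 5 balls. 2 cans of 3 tennis balls each is 6 tennis balls. 5 + 6 = 11. The answer is 11.

Q: The cafeteria had 23 apples. If they used 20 to make lunch and bought 6 more, how many apples do they have?
A:"""

print(llm(cot_prompt))  # a capable model should show its steps and end with "The answer is 9."
```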
📄️ 🟢 Zero Shot Chain of Thought
Zero Shot Chain of Thought (Zero-shot-CoT) prompting (@kojima2022large) is a follow-up to %%CoT prompting|CoT prompting%% that uses no exemplars at all: appending a phrase like "Let's think step by step." to the question is enough to elicit step-by-step reasoning.
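A minimal sketch of the zero-shot trick, assuming the hypothetical `llm()` helper from the CoT sketch above:

```python
def zero_shot_cot(question: str) -> str:
    # No exemplars needed: the trigger phrase alone nudges the model into step-by-step reasoning.
    prompt = f"Q: {question}\nA: Let's think step by step."
    return llm(prompt)  # hypothetical helper defined in the CoT sketch

print(zero_shot_cot("I bought 3 apples, then 2 bags of 4 apples each. How many apples do I have?"))
```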
📄️ 🟡 Self-Consistency
Self-consistency (@wang2022selfconsistency) is an approach that simply asks a model the same prompt multiple times and takes the majority result as the final answer. It is a follow-up to %%CoT|CoT prompting%%, and is more powerful when used in conjunction with it.
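A minimal self-consistency sketch under the same assumptions (`llm()` is the hypothetical helper from the CoT sketch; `extract_answer` is a naive, made-up parser for whatever answer format your prompt requests):

```python
from collections import Counter

def extract_answer(completion: str) -> str:
    # Naive parser: keep whatever follows the last "The answer is" marker.
    return completion.rsplit("The answer is", 1)[-1].strip(" .\n")

def self_consistency(prompt: str, samples: int = 5) -> str:
    # Ask the same (ideally CoT-style) prompt several times and majority-vote the final answers.
    answers = [extract_answer(llm(prompt)) for _ in range(samples)]
    return Counter(answers).most_common(1)[0][0]
```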
📄️ 🟡 Generated Knowledge
The idea behind the generated knowledge approach (@liu2021generated) is to ask the %%LLM|LLM%% to generate potentially useful information about a given question/prompt before generating a final response.
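A sketch of that two-step flow, again using the hypothetical `llm()` helper: one call generates candidate knowledge, and a second call answers with that knowledge prepended as context.

```python
def generated_knowledge(question: str) -> str:
    # Step 1: ask the model for facts that might be relevant to the question.
    knowledge = llm(f"Generate some knowledge that could help answer this question:\n{question}")
    # Step 2: answer the question with that knowledge prepended as context.
    return llm(f"Knowledge: {knowledge}\n\nQuestion: {question}\nAnswer:")
```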
📄️ 🟡 Least to Most Prompting
Least to Most prompting (LtM) (@zhou2022leasttomost) takes %%CoT prompting|CoT prompting%% a step further by first breaking a problem into subproblems and then solving each one in turn. It is a technique inspired by real-world educational strategies for children.
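A sketch of the LtM loop under the same assumptions: one call decomposes the problem, then each subproblem is solved in order with earlier answers carried along as context.

```python
def least_to_most(problem: str) -> str:
    # Stage 1: ask for a decomposition into simpler subproblems, one per line.
    decomposition = llm(f"Break this problem into a numbered list of simpler subproblems:\n{problem}")
    subproblems = [line.strip() for line in decomposition.splitlines() if line.strip()]

    # Stage 2: solve the subproblems in order, carrying earlier answers along as context.
    context, answer = f"Problem: {problem}", ""
    for sub in subproblems:
        answer = llm(f"{context}\n\nSubproblem: {sub}\nAnswer:")
        context += f"\n\nSubproblem: {sub}\nAnswer: {answer}"
    return answer  # the last (hardest) subproblem's answer addresses the original problem
```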
📄️ 🟡 Dealing With Long Form Content
Dealing with long form content can be difficult, as models have limited context length. Let's learn some strategies for effectively handling long form content.
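One common strategy is to chunk the text and summarize it piece by piece. Here is a rough sketch using the hypothetical `llm()` helper and a crude character-based split; in practice a token-aware splitter would be more precise.

```python
def summarize_long_text(text: str, chunk_chars: int = 8000) -> str:
    # Crude character-based chunking so each piece fits the model's context window.
    chunks = [text[i:i + chunk_chars] for i in range(0, len(text), chunk_chars)]
    partial = [llm(f"Summarize the following text:\n{chunk}") for chunk in chunks]
    # Merge the per-chunk summaries into one final summary.
    return llm("Combine these summaries into one coherent summary:\n" + "\n".join(partial))
```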
📄️ 🟡 Revisiting Roles
Role prompting can still provide an accuracy boost in newer models; let's revisit how assigning a role affects output quality.
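As a quick, hypothetical illustration with the `llm()` helper from the CoT sketch, role prompting just means stating the role before the task (or in a system message, if your API has one).

```python
question = "A store sells pens in packs of 12. How many pens are in 7 packs?"

# The role is simply prepended to the prompt.
role_prompt = "You are a brilliant mathematician who solves problems carefully, step by step.\n\n" + question
print(llm(role_prompt))  # hypothetical helper from the CoT sketch above
```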
📄️ 🟢 What's in a Prompt?
When crafting prompts for large language models (LLMs), there are several factors to consider. The format and label space both play crucial roles in the effectiveness of the prompt.
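To make "format" and "label space" concrete, here are two illustrative prompts with the same content: one unstructured, and one with a consistent Review/Sentiment format and explicit labels (positive/negative). The strings are made up for illustration and reuse the hypothetical `llm()` helper.

```python
# Same few-shot content, two different formats; only the structure and labels differ.
unstructured = "Great movie! positive\nTerrible acting. negative\nLoved every minute."
structured = (
    "Review: Great movie!\nSentiment: positive\n\n"
    "Review: Terrible acting.\nSentiment: negative\n\n"
    "Review: Loved every minute.\nSentiment:"
)
print(llm(structured))  # a consistent format and a sensible label space tend to help
```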