🟢 Self-Ask Prompting


Last updated on September 27, 2024


Takeaways
  • Self-Ask improves LLM reasoning by breaking down complex questions into sub-questions and answering them step by step.
  • Enhances tasks like customer support, legal analysis, research, and creative writing by prompting follow-up questions.
  • Can integrate with external resources like search engines for more accurate responses.
  • Limitations: Effectiveness depends on the model’s ability to generate relevant sub-questions; struggles with abstract queries.

What is Self-Ask?

The creators of Self-Ask explored how much of a Large Language Model's (LLM) correct answers come from reasoning over facts rather than from simply memorizing them. Their intuition was that prompting a model to ask follow-up questions that break down the initial query would improve its performance on reasoning tasks.

Let's look at an example. If you ask an LLM the following questions, it will likely provide correct answers:

  • Who won the Masters Tournament in 1994?
  • When was Justin Bieber born?

These questions only require the model to recall facts it encountered during training.

However, if you combine these questions and ask, "Who won the Masters Tournament the year Justin Bieber was born?", the difficulty increases. This question merges two commonly known facts in an unexpected way. While it may seem straightforward to us, the model must reason over these facts and connect them rather than simply recall them.

Self-Ask capitalizes on the idea that to answer a complex question, a model needs to break it down into simpler sub-questions, answer them, and then synthesize a final answer. Like Chain-of-Thought (CoT) prompting, Self-Ask decomposes the question into a step-by-step process. Unlike CoT, however, Self-Ask prompts the model to explicitly state each follow-up question and answer it before producing the final answer.

How to Use Self-Ask?

To use Self-Ask, you need to prepare a One- or Few-Shot prompt that demonstrates how to answer questions. In other words, you need to prepare examples of how a complex question is broken down into simpler sub-questions, along with the correct answer to each one. Here's a prompt template you can use:

Prompt


Question: {A complex question}

Are follow up questions needed here: Yes.

Follow up: {Sub-question 1}
Intermediate answer: {Correct answer to sub-question 1}

Follow up: {Sub-question 2}
Intermediate answer: {Correct answer to sub-question 2}

So the final answer is: {Correct answer to the complex question}

Question: {Your prompt with a complex question}

Are follow up questions needed here:
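
In code, using this template mostly amounts to prepending one or more worked examples to the user's question. The sketch below is a minimal illustration, assuming the OpenAI Python client and a gpt-4o-mini model; the exemplar, the build_self_ask_prompt helper, and the model choice are illustrative assumptions rather than part of the original Self-Ask paper, so swap in whichever API and examples fit your task.

```python
# Minimal sketch: wrap a complex question in a Self-Ask few-shot prompt and send it to an LLM.
# Assumptions: the `openai` package is installed and OPENAI_API_KEY is set;
# the exemplar and model name are illustrative, not prescribed by the Self-Ask paper.
from openai import OpenAI

SELF_ASK_EXEMPLAR = """Question: Who won the Masters Tournament the year Justin Bieber was born?
Are follow up questions needed here: Yes.
Follow up: When was Justin Bieber born?
Intermediate answer: Justin Bieber was born in 1994.
Follow up: Who won the Masters Tournament in 1994?
Intermediate answer: Jose Maria Olazabal won the Masters Tournament in 1994.
So the final answer is: Jose Maria Olazabal
"""

def build_self_ask_prompt(question: str) -> str:
    """Prepend the worked exemplar and end with the cue that triggers follow-up questions."""
    return f"{SELF_ASK_EXEMPLAR}\nQuestion: {question}\nAre follow up questions needed here:"

client = OpenAI()
response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": build_self_ask_prompt(
        "Who was the British monarch when the first iPhone was released?"
    )}],
)
print(response.choices[0].message.content)
```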

Interestingly, Self-Ask can be integrated with a search engine or even a database. This allows the LLM to base its answers on, or retrieve information directly from, a trusted source. The structure of Self-Ask, which clearly separates the beginning and end of each sub-question, is ideal for such integration.
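
One way to use that structure is to intercept each follow-up question the model writes and answer it with a search call before letting the model continue. The sketch below illustrates the idea; the search() function is a hypothetical placeholder for your search engine or database lookup, and the client, model name, and loop structure are assumptions rather than the paper's reference implementation.

```python
# Sketch of Self-Ask with retrieval: the model proposes follow-up questions,
# and each one is answered from an external source instead of the model's memory.
# Assumptions: `search()` is a placeholder you implement; the OpenAI client and
# model name are illustrative choices.
from openai import OpenAI

client = OpenAI()

def search(query: str) -> str:
    """Placeholder: return a short answer from your search engine or database."""
    raise NotImplementedError

def complete(prompt: str, stop: list[str]) -> str:
    """One generation step, stopping before the model invents its own intermediate answer."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": prompt}],
        stop=stop,
    )
    return response.choices[0].message.content

def self_ask_with_search(prompt: str, max_steps: int = 5) -> str:
    """`prompt` is a few-shot Self-Ask prompt (as in the template above),
    ending with 'Are follow up questions needed here:'."""
    for _ in range(max_steps):
        step = complete(prompt, stop=["Intermediate answer:"])
        prompt += step
        if "So the final answer is:" in step:
            return step.split("So the final answer is:")[-1].strip()
        if "Follow up:" in step:
            sub_question = step.split("Follow up:")[-1].strip()
            # Answer the sub-question from the trusted source, then let the model continue.
            prompt += f"\nIntermediate answer: {search(sub_question)}\n"
    return prompt  # fall back to the raw reasoning trace if no final answer appeared
```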

Now, let’s explore some potential applications of Self-Ask.

Complex Customer Support

Imagine you want to automate customer support, and some customers ask complex questions that require combining several pieces of knowledge to answer. For example, in tech support or troubleshooting, the question could be: "Why is my internet connection slow, and how can I improve it?" You can prepend relevant Self-Ask examples to each customer query to give your customers more accurate and helpful responses.

Prompt


Question: Who lived longer, Theodor Haecker or Harry Vaughan Watkins?

Are follow up questions needed here: Yes.

Follow up: How old was Theodor Haecker when he died?
Intermediate answer: Theodor Haecker was 65 years old when he died.

Follow up: How old was Harry Vaughan Watkins when he died?
Intermediate answer: Harry Vaughan Watkins was 69 years old when he died.

So the final answer is: Harry Vaughan Watkins

Question: Why is my internet connection slow, and how can I improve it?

Are follow up questions needed here:

AI Output


Yes, follow-up questions are needed here.

To better understand the issue and provide appropriate suggestions, follow-up questions could include:

- When do you experience the slow internet connection? Is it at specific times of day, or is it consistent?
- What type of internet connection are you using? Is it Wi-Fi, Ethernet, or mobile data?
- How many devices are connected to your network? Are they all in use when the connection slows down?
- What is your internet plan speed? Are you getting the speed you’re paying for?
- Have you recently added any new devices or changed your setup? Could these be affecting your connection?
- Do you experience slow speeds on all devices, or just specific ones? This could help identify if the issue is with the device itself.
- Have you tried restarting your router or modem? This is a common troubleshooting step that can resolve temporary issues.

Answering these questions will help diagnose the root cause of the slow connection and provide targeted solutions.

Research and Academic Assistance

If you're a student or researcher, LLMs can serve as valuable research assistants. By using the Self-Ask method, you can critically analyze a research paper's findings. For instance, you might start by asking, "Does this research paper provide sufficient evidence to support its conclusions?" This can then lead to more specific sub-questions like, "What methodology was used to gather data?" or "Are there any gaps in the evidence presented?" This way, you can dig deeper and make sure the LLM attends to the details throughout the process.

Legal Document Analysis

LLMs can also help you make sense of tricky legal documents, whether you're reviewing a job contract or any other agreement. Say you're wondering, "Is there a conflict between Clause A and Clause B?" The Self-Ask method helps break this down: "What exactly does Clause A say?" and "How does Clause B relate?" By tackling each piece step by step, the LLM can give you a clear and thorough analysis, making sure you don’t miss anything important.

Creative Content Generation

If you're a writer working on a story or a first draft, the Self-Ask method can help shape your ideas into a solid narrative. Start with a big question like, "What's the main plot of my story?" and then break it down into smaller questions like, "Who are the key characters?" and "What's the main conflict?" By guiding the LLM through these steps, you’ll end up with a more cohesive and well-structured story.

Limitations of Self-Ask

Self-Ask was not tested on benchmarks with arithmetic problems or logical puzzles, although some manual testing showed that Self-Ask also works on those very different problem sets.

Other potential limitations:

  • The effectiveness of Self-Ask relies on the model’s ability to decompose complex questions into sub-questions.
  • Self-Ask might struggle with queries that are more abstract and opinion-based and don't require factual reasoning.

Conclusion

Self-Ask is a powerful way to improve how Large Language Models handle complex questions. By breaking down big questions into smaller steps, it helps models think more logically and deliver more accurate answers.

Valeriia Kuka

Valeriia Kuka, Head of Content at Learn Prompting, is passionate about making AI and ML accessible. Valeriia previously grew a 60K+ follower AI-focused social media account, earning reposts from Stanford NLP, Amazon Research, Hugging Face, and AI researchers. She has also worked with AI/ML newsletters and global communities with 100K+ members and authored clear and concise explainers and historical articles.

Footnotes

  1. Press, O., Zhang, M., Min, S., Schmidt, L., Smith, N. A., & Lewis, M. (2023). Measuring and Narrowing the Compositionality Gap in Language Models. https://arxiv.org/abs/2210.03350
