🟢 Introduction to Zero-Shot Prompting Techniques

Last updated on September 4, 2024 by Valeriia Kuka

Welcome to the zero-shot prompting section of the advanced Prompt Engineering Guide.

Zero-shot prompting is the most basic form of prompting. It simply shows the large language model (LLM) a prompt without examples or demonstrations and asks it to generate a response. You've already seen these techniques in the basics docs like giving instructions and assigning roles.

In this section, you'll explore standalone advanced zero-shot prompting techniques you've never seen before:

  1. Emotion Prompting: Leverages emotional language to improve LLM accuracy and response quality by tapping into emotion-rich training data.

  2. Re-reading (RE2): Enhances reasoning by asking the model to re-read the prompt, ensuring important details aren't missed.

  3. Rephrase and Respond (RaR): Asks the model to rephrase the prompt before answering, reducing ambiguity and improving clarity.

  4. Role Prompting: Assigns roles to the LLM, creating more context-specific, relevant responses for various tasks.

  5. System 2 Attention (S2A): Filters out irrelevant information by having the model refine the prompt, leading to more accurate outputs.

  6. SimToM: Applies perspective-taking, prompting the model to first work out what a character knows before answering questions about them.

Emotion Prompting

You might think that Emotion Prompting [1] could steer the model in the wrong direction, making it biased toward the emotions expressed in the prompt. Surprisingly, emotional prompting can often lead to better or more factual responses, which might not be possible with neutral or mechanical prompts.

On this page, you'll learn how to apply emotion prompting and how it takes advantage of the fact that LLMs are trained on data from diverse domains—such as conversations, poetry, and music—that are rich in emotional language.
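In practice, emotion prompting amounts to appending an emotional stimulus to an otherwise neutral instruction. A minimal sketch, where the exact stimulus wording is an illustrative assumption:

```python
def emotion_prompt(task: str, stimulus: str = "This is very important to my career.") -> str:
    """Append an emotional stimulus to an otherwise neutral task instruction."""
    return f"{task} {stimulus}"

# Build an emotion-augmented prompt from a plain instruction.
prompt = emotion_prompt("Summarize the following article in three sentences.")
print(prompt)
```

Any short emotional phrase can serve as the stimulus; the point is that the emotional language, not the task itself, changes.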

Re-reading (RE2)

Does your LLM miss key details in your prompts? Re-reading (RE2) [2] can address this by asking the LLM to re-read the prompt before responding. RE2 is versatile and compatible with most thought-eliciting prompting techniques, including Chain-of-Thought (CoT) [3].

On this page, you'll discover how Re-reading (RE2) works. Despite its simplicity, RE2 is a powerful and effective technique that consistently boosts LLM reasoning performance by just re-reading the prompt.
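The mechanics are as simple as the description suggests: the question appears twice in the prompt, separated by a re-read instruction. A minimal sketch (the exact template wording is an assumption; the key idea is the repetition):

```python
def re2_prompt(question: str) -> str:
    """Build an RE2-style prompt that presents the question twice."""
    return f"{question}\nRead the question again: {question}"

print(re2_prompt("A farmer has 17 sheep and all but 9 run away. How many are left?"))
```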

Rephrase and Respond (RaR)

The creators of Rephrase and Respond (RaR) [4] took prompting to the next level. Instead of seeking a perfect prompt formula, they asked the model to rephrase the question in the prompt before answering it. This technique, particularly effective in question-answering tasks, incorporates rephrasing directly into the prompt.

On this page, you'll learn how to use RaR, explore the two-step RaR approach, and see how it helps resolve ambiguity in prompts.
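The two variants can be sketched as follows. One-step RaR embeds the rephrasing instruction in a single prompt; two-step RaR first asks the model to rephrase, then answers the rephrased question. Here `llm` is a hypothetical stand-in for any text-generation call, and the prompt wording is an assumption:

```python
from typing import Callable

def rar_one_step(question: str) -> str:
    """One-step RaR: rephrasing instruction and question in a single prompt."""
    return f"{question}\nRephrase and expand the question, and respond."

def rar_two_step(question: str, llm: Callable[[str], str]) -> str:
    """Two-step RaR: rephrase first, then answer the rephrased question."""
    # Step 1: ask the model to rephrase the original question.
    rephrased = llm(f"Rephrase and expand this question to remove ambiguity:\n{question}")
    # Step 2: answer using both the original and the rephrased version.
    return llm(f"(original) {question}\n(rephrased) {rephrased}\nAnswer the rephrased question.")
```

The two-step variant lets a stronger model do the rephrasing and a cheaper model do the answering, since the steps are separate calls.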

Role Prompting

Role prompting [5][6] is another simple yet powerful technique where you assign specific roles to the LLM. It's also called role-play prompting [7] or persona prompting [8][9]. We've already touched on this in the giving instructions doc, but here we take a more advanced and comprehensive look. This technique is useful for a range of tasks, such as writing, reasoning, and dialogue, and it allows for more context-specific responses.

On this page, you'll learn how to use role prompting effectively and explore best practices to get the most out of it.
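A common way to assign a role is through the system message of the chat-message format used by most chat APIs. A minimal sketch (the role text and task are illustrative):

```python
def role_prompt(role: str, task: str) -> list[dict]:
    """Place the persona in the system message and the task in the user message."""
    return [
        {"role": "system", "content": f"You are {role}."},
        {"role": "user", "content": task},
    ]

messages = role_prompt(
    "an experienced contract lawyer",
    "Review this clause and flag any risks for the buyer.",
)
```

Keeping the role in the system message, rather than mixing it into the task, makes it persist naturally across turns of a conversation.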

System 2 Attention (S2A)

System 2 Attention (S2A) [10] improves model accuracy by filtering out irrelevant context from prompts. Instead of manually refining the prompt, S2A asks the model to do it for you, then uses the refined prompt to generate the final output.

On this page, you'll explore the two stages of S2A and how to apply it with practical examples.
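The two stages can be sketched as two successive model calls: one to strip irrelevant context, one to answer from the refined prompt. The stage-1 instruction wording is an assumption, and `llm` is a hypothetical stand-in for any text-generation call:

```python
from typing import Callable

def s2a(prompt_with_context: str, llm: Callable[[str], str]) -> str:
    """Two-stage S2A sketch: refine the prompt, then answer from the refined version."""
    # Stage 1: have the model rewrite the prompt, dropping irrelevant context.
    refined = llm(
        "Rewrite the following text so that it contains only the context "
        "relevant to the question, followed by the question itself:\n"
        + prompt_with_context
    )
    # Stage 2: generate the final answer from the refined prompt alone.
    return llm(refined)
```

Note that stage 2 sees only the refined text, so any distractor the model removed in stage 1 cannot bias the final answer.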

SimToM

The creators of SimToM [11] drew inspiration from the human ability to understand others' intentions and predict actions—an ability known as Theory of Mind. SimToM aims to replicate this in LLMs by prompting the model in a way that mimics how humans understand and anticipate behavior.

On this page, you'll learn about the two stages of SimToM and how to apply them effectively.
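The two stages can be sketched as a perspective-taking call followed by a question-answering call. Prompt wording here is an assumption, and `llm` is a hypothetical stand-in for a model call:

```python
from typing import Callable

def simtom(story: str, character: str, question: str, llm: Callable[[str], str]) -> str:
    """Two-stage SimToM sketch: filter to one character's knowledge, then answer."""
    # Stage 1 (perspective-taking): retell only the events the character is aware of.
    perspective = llm(
        f"The following is a sequence of events:\n{story}\n"
        f"Retell only the events that {character} knows about."
    )
    # Stage 2: answer the question given only that character's view of events.
    return llm(
        f"{perspective}\nAnswer the following question about {character}: {question}"
    )
```

As with S2A, the second call never sees the full story, so facts the character could not know are filtered out before the answer is generated.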

Conclusion and Next Steps

Zero-shot prompting techniques like emotion prompting, re-reading, and role prompting provide powerful ways to enhance the capabilities of LLMs. Each method addresses specific challenges like improving accuracy, refining reasoning, and filtering irrelevant context.

Be sure to explore the provided links for further insights into each method, and start experimenting with them in your own projects. With the right approach, you can unlock the full potential of LLMs. Happy prompting!

Footnotes

  1. Li, C., Wang, J., Zhang, Y., Zhu, K., Hou, W., Lian, J., Luo, F., Yang, Q., & Xie, X. (2023). Large Language Models Understand and Can be Enhanced by Emotional Stimuli. https://arxiv.org/abs/2307.11760

  2. Xu, X., Tao, C., Shen, T., Xu, C., Xu, H., Long, G., & Lou, J.-G. (2024). Re-Reading Improves Reasoning in Large Language Models. https://arxiv.org/abs/2309.06275

  3. Wei, J., Wang, X., Schuurmans, D., Bosma, M., Ichter, B., Xia, F., Chi, E., Le, Q., & Zhou, D. (2022). Chain-of-Thought Prompting Elicits Reasoning in Large Language Models. https://arxiv.org/abs/2201.11903

  4. Deng, Y., Zhang, W., Chen, Z., & Gu, Q. (2024). Rephrase and Respond: Let Large Language Models Ask Better Questions for Themselves. https://arxiv.org/abs/2311.04205

  5. Zheng, M., Pei, J., & Jurgens, D. (2023). Is “A Helpful Assistant” the Best Role for Large Language Models? A Systematic Evaluation of Social Roles in System Prompts. https://arxiv.org/abs/2311.10054

  6. Wang, Z. M., Peng, Z., Que, H., Liu, J., Zhou, W., Wu, Y., Guo, H., Gan, R., Ni, Z., Yang, J., Zhang, M., Zhang, Z., Ouyang, W., Xu, K., Huang, S. W., Fu, J., & Peng, J. (2024). RoleLLM: Benchmarking, Eliciting, and Enhancing Role-Playing Abilities of Large Language Models. https://arxiv.org/abs/2310.00746

  7. Kong, A., Zhao, S., Chen, H., Li, Q., Qin, Y., Sun, R., & Zhou, X. (2023). Better Zero-Shot Reasoning with Role-Play Prompting. https://arxiv.org/abs/2308.07702

  8. Schmidt, D. C., Spencer-Smith, J., Fu, Q., & White, J. (2023). Cataloging Prompt Patterns to Enhance the Discipline of Prompt Engineering. https://api.semanticscholar.org/CorpusID:257368147

  9. Wang, Z., Mao, S., Wu, W., Ge, T., Wei, F., & Ji, H. (2024). Unleashing the Emergent Cognitive Synergy in Large Language Models: A Task-Solving Agent through Multi-Persona Self-Collaboration. https://arxiv.org/abs/2307.05300

  10. Weston, J., & Sukhbaatar, S. (2023). System 2 Attention (is something you might need too). https://arxiv.org/abs/2311.11829

  11. Wilf, A., Lee, S. S., Liang, P. P., & Morency, L.-P. (2023). Think Twice: Perspective-Taking Improves Large Language Models’ Theory-of-Mind Capabilities. https://arxiv.org/abs/2311.10227

Copyright © 2024 Learn Prompting.