🟢 Introduction to Zero-Shot Prompting Techniques

Reading Time: 4 minutes
Last updated on September 27, 2024

Valeriia Kuka

Welcome to the Zero-Shot prompting section of the advanced Prompt Engineering Guide.

Zero-Shot prompting is the most basic form of prompting. It simply shows the Large Language Model (LLM) a prompt without examples or demonstrations and asks it to generate a response. You've already seen these techniques in the Basics docs like Giving Instructions and Assigning Roles.

In this section, you'll explore standalone advanced Zero-Shot Prompting techniques you've never seen before:

  1. Emotion Prompting: Leverages emotional language to improve LLM accuracy and response quality by tapping into emotion-rich training data.

  2. Re-reading (RE2): Enhances reasoning by asking the model to re-read the prompt, ensuring important details aren't missed.

  3. Rephrase and Respond (RaR): Asks the model to rephrase the prompt before answering, reducing ambiguity and improving clarity.

  4. Role Prompting: Assigns roles to the LLM, creating more context-specific, relevant responses for various tasks.

  5. System 2 Attention (S2A): Filters out irrelevant information by having the model refine the prompt, leading to more accurate outputs.

  6. SimToM: Simulates perspective-taking (Theory of Mind) to enhance LLMs' ability to understand and predict human thoughts and actions.

Emotion Prompting

You might think that Emotion Prompting would steer the model in the wrong direction, biasing it toward the emotions expressed in the prompt. Surprisingly, adding emotional language often produces better, more factual responses than neutral, purely mechanical prompts.

On the Emotion Prompting page, you'll learn how to apply emotion prompting and how it takes advantage of the fact that LLMs are trained on data from diverse domains—such as conversations, poetry, and music—that are rich in emotional language.
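
Below is a minimal sketch of what this looks like in practice: an emotional stimulus sentence is simply appended to an ordinary Zero-Shot prompt. The stimulus "This is very important to my career." is one of the phrases studied in the EmotionPrompt paper (Li et al., 2023); the `build_emotion_prompt` helper and the example task are illustrative assumptions, not a fixed API.

```python
# Sketch: append an emotional stimulus to a plain Zero-Shot prompt.
# The stimulus sentence is one studied in the EmotionPrompt paper (Li et al., 2023);
# the helper name and example task are illustrative.

EMOTIONAL_STIMULUS = "This is very important to my career."

def build_emotion_prompt(task: str, stimulus: str = EMOTIONAL_STIMULUS) -> str:
    """Return the original task with an emotional stimulus appended."""
    return f"{task} {stimulus}"

plain_prompt = (
    "Decide whether the following review is positive or negative: "
    "'The battery died after two days.'"
)
emotion_prompt = build_emotion_prompt(plain_prompt)
print(emotion_prompt)  # Send this to your LLM instead of the plain prompt.
```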

Re-reading (RE2)

Does your LLM miss key details in your prompts? Re-reading (RE2) can address this by asking the LLM to re-read the prompt before responding. RE2 is versatile and compatible with most thought-eliciting prompting techniques, including Chain-of-Thought (CoT) Prompting.

On the Re-reading page, you'll discover how Re-reading (RE2) works. Despite its simplicity, RE2 consistently boosts LLM reasoning performance simply by having the model re-read the prompt before answering.
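
As a preview, here is a minimal sketch of an RE2 prompt, assuming the "Read the question again:" phrasing used by Xu et al. (2024); the optional Chain-of-Thought trigger and the sample question are illustrative.

```python
# Sketch: build a Re-reading (RE2) prompt that repeats the question once,
# optionally followed by a Chain-of-Thought trigger.

def build_re2_prompt(question: str, use_cot: bool = True) -> str:
    prompt = f"Q: {question}\nRead the question again: {question}\n"
    if use_cot:
        prompt += "Let's think step by step.\n"  # optional CoT trigger
    return prompt + "A:"

question = (
    "Roger has 5 tennis balls. He buys 2 more cans of tennis balls, "
    "each with 3 balls. How many tennis balls does he have now?"
)
print(build_re2_prompt(question))
```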

Rephrase and Respond (RaR)

The creators of Rephrase and Respond (RaR) took prompting to the next level. Instead of seeking a perfect prompt formula, they asked the model to rephrase the question in the prompt before answering it. This technique, particularly effective in question-answering tasks, incorporates rephrasing directly into the prompt.

On the Rephrase and Respond page, you'll learn how to use RaR, explore the two-step RaR approach, and see how it helps resolve ambiguity in prompts.
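
For a quick feel of both variants, here is a sketch: one-step RaR appends the trigger sentence "Rephrase and expand the question, and respond." to the question, while two-step RaR first asks for a rephrasing and then answers using the original and rephrased questions together. The `ask_llm` function is a hypothetical stand-in for whichever client you use, and the step-one wording is a paraphrase rather than the paper's exact prompt.

```python
# Sketch of one-step and two-step Rephrase and Respond (RaR).

def ask_llm(prompt: str) -> str:
    """Hypothetical stand-in for your LLM client of choice."""
    raise NotImplementedError("Plug in your own LLM call here.")

def one_step_rar(question: str) -> str:
    # One-step RaR: rephrasing and answering happen in a single prompt.
    return ask_llm(f"{question}\nRephrase and expand the question, and respond.")

def two_step_rar(question: str) -> str:
    # Step 1: ask only for a clearer, expanded version of the question.
    rephrased = ask_llm(
        "Rephrase and expand the following question to make it easier to answer. "
        f"Do not answer it yet.\n\nQuestion: {question}"
    )
    # Step 2: answer using the original and rephrased questions together.
    return ask_llm(
        f"Original question: {question}\nRephrased question: {rephrased}\nAnswer:"
    )
```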

Role Prompting

Role prompting, also called role-play prompting or persona prompting, is another simple yet powerful technique where you assign a specific role to the LLM. We touched on this in the Assigning Roles doc in the Basics section, but here we take a more advanced and comprehensive look. The technique is useful for a range of tasks, such as writing, reasoning, and dialogue, and it produces more context-specific responses.

On the Role Prompting page, you'll learn how to use role prompting effectively and explore best practices for getting the most out of it.
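
As a minimal sketch, role prompting often amounts to a single system message in the common chat-message format; the persona wording and the question below are illustrative assumptions.

```python
# Sketch: assign a role via a system message in the common chat-message format.

messages = [
    {
        "role": "system",
        "content": (
            "You are an experienced financial advisor who explains "
            "concepts in plain language for beginners."
        ),
    },
    {
        "role": "user",
        "content": (
            "Should I pay off my 5% student loan faster or invest the "
            "extra money in an index fund?"
        ),
    },
]

# Pass `messages` to any chat-completion API; the assigned persona steers the
# tone, vocabulary, and considerations the model brings up.
```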

System 2 Attention (S2A)

System 2 Attention (S2A) improves model accuracy by filtering out irrelevant context from prompts. Instead of manually refining the prompt, S2A asks the model to do it for you, then uses the refined prompt to generate the final output.

On the System 2 Attention page, you'll explore the two stages of S2A and how to apply it with practical examples.
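
Here is a two-stage sketch of that flow. The rewrite instruction is a paraphrase of the idea in Weston & Sukhbaatar (2023) rather than the paper's exact wording, and `ask_llm` is a hypothetical stand-in for your client.

```python
# Sketch: two-stage System 2 Attention (S2A).

def ask_llm(prompt: str) -> str:
    """Hypothetical stand-in for your LLM client of choice."""
    raise NotImplementedError("Plug in your own LLM call here.")

def system2_attention(context: str, question: str) -> str:
    # Stage 1: have the model regenerate the context, keeping only the
    # information that is relevant and unbiased for the question.
    cleaned_context = ask_llm(
        "Rewrite the following text, keeping only the parts that are relevant "
        "and unbiased for answering the question. Do not answer the question.\n\n"
        f"Text: {context}\nQuestion: {question}"
    )
    # Stage 2: answer using only the regenerated context.
    return ask_llm(f"Context: {cleaned_context}\n\nQuestion: {question}\nAnswer:")
```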

SimToM

The creators of SimToM drew inspiration from the human ability to understand others’ intentions and predict actions—an ability known as Theory of Mind. SimToM aims to replicate this in LLMs by prompting the model in a way that mimics how humans understand and anticipate behavior.

On the SimToM page, you'll learn about the two stages of SimToM and how to apply them effectively.
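
The sketch below shows one way those two stages can be wired together: a perspective-taking step that filters the story down to what a given character knows, followed by a question-answering step from that character's point of view. The prompt wording is paraphrased from Wilf et al. (2023), and `ask_llm` is a hypothetical stand-in for your client.

```python
# Sketch: two-stage SimToM (perspective-taking, then question answering).

def ask_llm(prompt: str) -> str:
    """Hypothetical stand-in for your LLM client of choice."""
    raise NotImplementedError("Plug in your own LLM call here.")

def simtom(story: str, character: str, question: str) -> str:
    # Stage 1 (perspective-taking): keep only the events the character
    # knows about or directly observes.
    known_events = ask_llm(
        f"The following is a sequence of events:\n{story}\n\n"
        f"Which of these events does {character} know about or observe? "
        "List only those events."
    )
    # Stage 2 (question answering): answer from the character's perspective,
    # using only the filtered events.
    return ask_llm(
        f"{known_events}\n\nAnswer the following question from {character}'s "
        f"perspective, based only on the events above:\n{question}"
    )
```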

Conclusion and Next Steps

Zero-Shot Prompting techniques like Emotion Prompting, Re-reading, and Role Prompting provide powerful ways to enhance the capabilities of LLMs. Each method addresses specific challenges like improving accuracy, refining reasoning, and filtering irrelevant context.

Be sure to explore the provided links for further insights into each method, and start experimenting with them in your own projects. With the right approach, you can unlock the full potential of LLMs. Happy prompting!

Valeriia Kuka

Valeriia Kuka, Head of Content at Learn Prompting, is passionate about making AI and ML accessible. Valeriia previously grew a 60K+ follower AI-focused social media account, earning reposts from Stanford NLP, Amazon Research, Hugging Face, and AI researchers. She has also worked with AI/ML newsletters and global communities with 100K+ members and authored clear and concise explainers and historical articles.

Footnotes

  1. Li, C., Wang, J., Zhang, Y., Zhu, K., Hou, W., Lian, J., Luo, F., Yang, Q., & Xie, X. (2023). Large Language Models Understand and Can Be Enhanced by Emotional Stimuli. https://arxiv.org/abs/2307.11760

  2. Xu, X., Tao, C., Shen, T., Xu, C., Xu, H., Long, G., & Lou, J.-G. (2024). Re-Reading Improves Reasoning in Large Language Models. https://arxiv.org/abs/2309.06275

  3. Deng, Y., Zhang, W., Chen, Z., & Gu, Q. (2024). Rephrase and Respond: Let Large Language Models Ask Better Questions for Themselves. https://arxiv.org/abs/2311.04205

  4. Zheng, M., Pei, J., & Jurgens, D. (2023). Is “A Helpful Assistant” the Best Role for Large Language Models? A Systematic Evaluation of Social Roles in System Prompts. https://arxiv.org/abs/2311.10054

  5. Weston, J., & Sukhbaatar, S. (2023). System 2 Attention (is something you might need too). https://arxiv.org/abs/2311.11829

  6. Wilf, A., Lee, S. S., Liang, P. P., & Morency, L.-P. (2023). Think Twice: Perspective-Taking Improves Large Language Models’ Theory-of-Mind Capabilities. https://arxiv.org/abs/2311.10227

  7. Wei, J., Wang, X., Schuurmans, D., Bosma, M., Ichter, B., Xia, F., Chi, E., Le, Q., & Zhou, D. (2022). Chain-of-Thought Prompting Elicits Reasoning in Large Language Models. https://arxiv.org/abs/2201.11903

  8. Wang, Z. M., Peng, Z., Que, H., Liu, J., Zhou, W., Wu, Y., Guo, H., Gan, R., Ni, Z., Yang, J., Zhang, M., Zhang, Z., Ouyang, W., Xu, K., Huang, S. W., Fu, J., & Peng, J. (2024). RoleLLM: Benchmarking, Eliciting, and Enhancing Role-Playing Abilities of Large Language Models. https://arxiv.org/abs/2310.00746

  9. Kong, A., Zhao, S., Chen, H., Li, Q., Qin, Y., Sun, R., & Zhou, X. (2023). Better Zero-Shot Reasoning with Role-Play Prompting. https://arxiv.org/abs/2308.07702

  10. Schmidt, D. C., Spencer-Smith, J., Fu, Q., & White, J. (2023). Cataloging Prompt Patterns to Enhance the Discipline of Prompt Engineering. https://api.semanticscholar.org/CorpusID:257368147

  11. Wang, Z., Mao, S., Wu, W., Ge, T., Wei, F., & Ji, H. (2024). Unleashing the Emergent Cognitive Synergy in Large Language Models: A Task-Solving Agent through Multi-Persona Self-Collaboration. https://arxiv.org/abs/2307.05300