Prompt Engineering Guide

🟒 Introduction to Few-Shot Prompting Techniques

Reading Time: 1 minute
Last updated on March 25, 2025

Valeriia Kuka

Welcome to the Few-Shot section of the advanced Prompt Engineering Guide.

While Zero-Shot prompting is the most basic form of interaction, where the Large Language Model (LLM) receives only the task instruction and no examples, Few-Shot prompting takes it a step further. Few-Shot prompting provides the model with example pairs of problems and their correct solutions. These examples help the model better understand the context and improve its response generation.
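As a minimal sketch, a Few-Shot prompt simply interleaves example problem/solution pairs before the new query. The task, labels, and formatting below are illustrative assumptions, not a prescribed format:

```python
# Illustrative Few-Shot prompt construction: example pairs precede the query.
# The sentiment-classification task and labels here are hypothetical examples.

examples = [
    ("Great product, highly recommend!", "positive"),
    ("Stopped working after two days.", "negative"),
]

def build_few_shot_prompt(examples, query):
    """Format (input, answer) pairs, then the new input left unanswered."""
    lines = []
    for text, label in examples:
        lines.append(f"Review: {text}\nSentiment: {label}")
    # The final entry has no answer, so the model completes it.
    lines.append(f"Review: {query}\nSentiment:")
    return "\n\n".join(lines)

prompt = build_few_shot_prompt(examples, "Arrived late but works fine.")
print(prompt)
```

The resulting string would then be sent to an LLM, which infers the task pattern from the demonstrations and completes the final `Sentiment:` line.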

Here are some of the techniques we've already explored, with more on the way!

  • K-Nearest Neighbor (KNN) Prompting selects relevant examples by finding the most similar cases to the input query, improving the accuracy of Few-Shot prompts.
  • Self-Ask Prompting breaks down complex questions into sub-questions, and helps LLMs reason more effectively and provide better answers.
  • Prompt Mining selects the optimal prompt template for a given task from a corpus of text based on the template that comes up most often in the corpus.
  • Vote-K Prompting selects diverse and representative exemplars from unlabeled datasets for Few-Shot prompts.

Stay tuned for more advanced techniques, including:

  • SG-ICL


Valeriia Kuka, Head of Content at Learn Prompting, is passionate about making AI and ML accessible. Valeriia previously grew a 60K+ follower AI-focused social media account, earning reposts from Stanford NLP, Amazon Research, Hugging Face, and AI researchers. She has also worked with AI/ML newsletters and global communities with 100K+ members and authored clear and concise explainers and historical articles.

Techniques in this section:

  • 🟒 Chain-of-Dictionary (CoD)
  • 🟦 Chain of Knowledge (CoK)
  • 🟒 Cue-CoT
  • β—† K-Nearest Neighbor (KNN)
  • β—†β—† Prompt Mining
  • 🟒 Self-Ask
  • 🟒 Self Generated In-Context Learning (SG-ICL)
  • β—†β—† Vote-K