
Vocabulary Reference

Reading Time: 3 minutes

Last updated on August 7, 2024

Please refer to this page for a list of terms and concepts that we will use throughout this course.

Large Language Models (LLMs), Pretrained Language Models (PLMs), Language Models (LMs), and foundation models

These terms all refer to roughly the same thing: large AI models (neural networks) that have usually been trained on a huge amount of text.

Masked Language Models (MLMs)

MLMs are a type of NLP model that use a special token, usually [MASK]. A word in the input is replaced with this token, and the model then predicts the word that was masked. For example, given the sentence "The dog is [MASK] the cat", the model will predict "chasing" with high probability.
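
For a concrete illustration, here is a minimal sketch using the Hugging Face transformers library (an assumption; this course does not require it) to fill in the mask:

from transformers import pipeline

# BERT is a masked language model; its mask token is [MASK].
fill_mask = pipeline("fill-mask", model="bert-base-uncased")

for prediction in fill_mask("The dog is [MASK] the cat."):
    # Each prediction contains the token that fills the mask and its probability.
    print(prediction["token_str"], round(prediction["score"], 3))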

Labels

The concept of labels is best understood with an example.

Say we want to classify some Tweets as mean or not mean. If we have a list of Tweets and their corresponding labels (mean or not mean), we can train a model to classify whether new Tweets are mean or not. Labels are simply the possible categories that a classification task can assign (a short sketch after the Label Space entry below illustrates both terms).

Label Space

All of the possible labels for a given task ('mean' and 'not mean' for the above example).
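
A minimal sketch of labels and the label space for the example above (the Tweets and labels below are made up for illustration):

# Hypothetical labeled data for the "mean Tweet" task described above.
labeled_tweets = [
    ("I love hotpockets", "not mean"),
    ("You are a terrible cook", "mean"),
]

# The label space is the set of all possible labels for this task.
label_space = {"mean", "not mean"}

for tweet, label in labeled_tweets:
    assert label in label_space  # every label must come from the label space
    print(f"{tweet!r} -> {label}")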

Sentiment Analysis

Sentiment analysis is the task of classifying text into positive, negative, or other sentiments.
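
As a minimal sketch (assuming the Hugging Face transformers package is installed; it downloads a default English sentiment model), a ready-made sentiment classifier can be run like this:

from transformers import pipeline

# The default sentiment-analysis pipeline returns a POSITIVE or NEGATIVE label.
classifier = pipeline("sentiment-analysis")
print(classifier("I love hotpockets"))
# Example output: [{'label': 'POSITIVE', 'score': 0.99}]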

"Model" vs. "AI" vs. "LLM"

These terms are used somewhat interchangeably throughout this course, but they do not always mean the same thing. LLMs are a type of AI, as noted above, but not all AIs are LLMs. When we mention models in this course, we are referring to AI models, so you can treat the terms "model" and "AI" as interchangeable here.

Machine Learning (ML)

ML is a field of study that focuses on algorithms that can learn from data. ML is a subfield of AI.

Verbalizer

In the classification setting, verbalizers are mappings from labels to words in a language model's vocabulary. For example, consider performing sentiment classification with the following prompt:

Tweet: "I love hotpockets"
What is the sentiment of this tweet? Say 'pos' or 'neg'.

Here, the verbalizer is the mapping from the conceptual labels of positive and negative to the tokens pos and neg.
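
A minimal sketch of this mapping in Python (the dictionary below is just an illustration of the idea):

# A verbalizer maps conceptual labels to tokens the model is asked to output,
# and the inverse mapping reads the model's answer back into a label.
verbalizer = {"positive": "pos", "negative": "neg"}
inverse_verbalizer = {token: label for label, token in verbalizer.items()}

model_output = "pos"  # hypothetical raw answer from the language model
print(inverse_verbalizer[model_output])  # -> positive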

Reinforcement Learning from Human Feedback (RLHF)

RLHF is a method for fine-tuning LLMs according to human preference data.
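
The full RLHF pipeline is beyond the scope of this vocabulary page, but the core of its reward-modeling step can be sketched as a pairwise preference loss. The snippet below assumes PyTorch and uses made-up reward scores; it is an illustration of the idea, not the complete method:

import torch
import torch.nn.functional as F

# Hypothetical reward-model scores for two prompts, each with a response that
# humans preferred ("chosen") and one they did not ("rejected").
reward_chosen = torch.tensor([1.3, 0.2])
reward_rejected = torch.tensor([0.4, 0.9])

# Pairwise preference loss: push chosen rewards above rejected rewards.
loss = -F.logsigmoid(reward_chosen - reward_rejected).mean()
print(loss.item())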

Sander Schulhoff

Sander Schulhoff is the Founder of Learn Prompting and an ML Researcher at the University of Maryland. He created the first open-source Prompt Engineering guide, reaching 3M+ people and teaching them to use tools like ChatGPT. Sander also led the team behind The Prompt Report, the most comprehensive study of prompting to date, co-authored with researchers from the University of Maryland, OpenAI, Microsoft, Google, Princeton, Stanford, and other leading institutions. This 76-page survey analyzed 1,500+ academic papers and covered 200+ prompting techniques.

Copyright © 2024 Learn Prompting.