
📙 Vocabulary Reference

Last updated on August 7, 2024 by Sander Schulhoff

Please refer to this page for a list of terms and concepts that we will use throughout this course.

Large Language Models (LLMs), Pretrained Language Models (PLMs)1, Language Models (LMs), and foundation models

These terms all refer more or less to the same thing: large AIs (neural networks), which have usually been trained on a huge amount of text.

Masked Language Models (MLMs)

MLMs are a type of NLP model that have a special token, usually [MASK], which is replaced with a word from the vocabulary. The model then predicts the word that was masked. For example, given the sentence "The dog is [MASK] the cat", the model will predict "chasing" with high probability.
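
For illustration, here is a minimal sketch of masked-word prediction using the Hugging Face transformers fill-mask pipeline (the library and model name are our own choice for illustration, not something this page depends on):

```python
# A minimal sketch: predicting a masked word with a Masked Language Model.
# Assumes the Hugging Face `transformers` library and the bert-base-uncased model.
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="bert-base-uncased")

# The [MASK] token marks the word the model should predict.
for prediction in fill_mask("The dog is [MASK] the cat."):
    print(prediction["token_str"], round(prediction["score"], 3))
# Prints the top candidate words and their probabilities.
```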

Labels

The concept of labels is best understood with an example.

Say we want to classify some Tweets as mean or not mean. If we have a list of Tweets and their corresponding labels (mean or not mean), we can train a model to classify whether Tweets are mean or not. Labels are generally just the possible categories for the classification task.

Label Space

All of the possible labels for a given task ('mean' and 'not mean' for the above example).
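
As a concrete, hypothetical illustration of labels and the label space, here is a tiny labeled dataset sketched in Python (the Tweets are made up):

```python
# Hypothetical labeled data for the "mean vs. not mean" Tweet classification task.
labeled_tweets = [
    ("I hope you have a terrible day", "mean"),
    ("Congrats on the new job!", "not mean"),
]

# The label space is the set of all possible labels for the task.
label_space = {label for _, label in labeled_tweets}
print(label_space)  # {'mean', 'not mean'}
```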

Sentiment Analysis

Sentiment analysis is the task of classifying text into positive, negative, or other sentiments.

"Model" vs. "AI" vs. "LLM"

These terms are used somewhat interchangeably throughout this course, but they do not always mean the same thing. LLMs are a type of AI, as noted above, but not all AIs are LLMs. When we mention models in this course, we are referring to AI models. As such, in this course, you can consider the terms "model" and "AI" to be interchangeable.

Machine Learning (ML)

ML is a field of study that focuses on algorithms that can learn from data. ML is a subfield of AI.

Verbalizer

In the classification setting, verbalizers are mappings from labels to words in a language model's vocabulary2. For example, consider performing sentiment classification with the following prompt:

Tweet: "I love hotpockets"
What is the sentiment of this tweet? Say 'pos' or 'neg'.

Here, the verbalizer is the mapping from the conceptual labels of positive and negative to the tokens pos and neg.
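
In code, a verbalizer is just a mapping. The sketch below is a hypothetical example of how the conceptual labels map to the tokens 'pos' and 'neg', and how the model's output token can be mapped back to a label:

```python
# Hypothetical verbalizer: conceptual labels -> tokens the model is asked to output.
verbalizer = {
    "positive": "pos",
    "negative": "neg",
}

# The reverse mapping turns the model's output token back into a conceptual label.
token_to_label = {token: label for label, token in verbalizer.items()}

model_output = "pos"  # imagine the LLM answered the prompt above with this token
print(token_to_label[model_output])  # positive
```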

Reinforcement Learning from Human Feedback (RLHF)

RLHF is a method for fine-tuning LLMs according to human preference data.

Prompts

Prompt

Text or other input provided to a Generative AI.


Prompt Structure

The way a prompt is organized, for example where instructions, exemplars, and the input to be processed are placed within the prompt.


Few-Shot Standard Prompt

Standard prompts that have exemplars in them. Exemplars are examples of the task that the prompt is trying to solve, which are included in the prompt itself.
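
For example, a few-shot standard prompt for the Tweet classification task above could be built like this (the exemplars are made up for illustration):

```python
# Building a hypothetical few-shot standard prompt: two exemplars, then the new input.
exemplars = [
    ("I hope you have a terrible day", "mean"),
    ("Congrats on the new job!", "not mean"),
]

prompt = ""
for tweet, label in exemplars:
    prompt += f'Tweet: "{tweet}"\nLabel: {label}\n\n'
prompt += 'Tweet: "I love hotpockets"\nLabel:'

print(prompt)  # this text is sent to the LLM as-is
```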


Prompting Techniques

CoT prompting

The main idea of Chain-of-Thought (CoT) prompting is that by showing the LLM few-shot exemplars in which the reasoning process is explained, the LLM will also show its reasoning process when answering the prompt.
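
A minimal sketch of a CoT prompt, using the classic arithmetic exemplar from the CoT literature (the questions here are illustrative, not part of this page):

```python
# A hypothetical chain-of-thought prompt: the exemplar writes out its reasoning
# before the final answer, so the model imitates that style on the new question.
cot_prompt = (
    "Q: Roger has 5 tennis balls. He buys 2 cans of 3 tennis balls each. "
    "How many tennis balls does he have now?\n"
    "A: Roger started with 5 balls. 2 cans of 3 balls is 6 balls. 5 + 6 = 11. "
    "The answer is 11.\n\n"
    "Q: The cafeteria had 23 apples. They used 20 and bought 6 more. "
    "How many apples do they have?\n"
    "A:"
)
print(cot_prompt)
```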


PAL

A method that uses code as intermediate reasoning steps: the LLM writes a program, and executing that program yields the final answer.

See PAL (Program-Aided Language Models) for details.
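
A rough sketch of the PAL idea, assuming a hypothetical model response: the LLM is asked to answer with code, and running that code produces the final answer.

```python
# Sketch of the PAL idea: the LLM answers with a program, and we execute the
# program to obtain the answer. The generated code below is a made-up example
# of what a model might return; no real LLM call is shown here.
generated_code = (
    "apples = 23 - 20 + 6\n"
    "answer = apples"
)

namespace = {}
exec(generated_code, namespace)  # running the model's code performs the reasoning
print(namespace["answer"])       # 9
```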

Self-Consistency

Generating multiple chains of thought and taking the majority answer.
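
A minimal sketch of self-consistency, with the sampled answers hard-coded for illustration:

```python
# Self-consistency: sample several chains of thought, parse out their final
# answers, and keep the answer the majority of chains agree on.
from collections import Counter

sampled_answers = ["9", "9", "8", "9", "9"]  # final answers from 5 sampled chains
majority_answer, votes = Counter(sampled_answers).most_common(1)[0]
print(majority_answer, votes)  # 9 4
```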


General ML

Pre-training

Pre-training is the initial process of training a neural network on a large amount of data before later 'fine-tuning'.


Softmax

A function that converts a vector of numbers into a probability distribution.
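
Concretely, each entry x_i is mapped to exp(x_i) divided by the sum of exp over all entries. A minimal NumPy sketch:

```python
# A minimal softmax sketch using NumPy.
import numpy as np

def softmax(x):
    # Subtracting the max improves numerical stability without changing the result.
    exps = np.exp(x - np.max(x))
    return exps / exps.sum()

scores = np.array([2.0, 1.0, 0.1])
print(softmax(scores))        # approximately [0.659 0.242 0.099]
print(softmax(scores).sum())  # 1.0 -- a valid probability distribution
```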


Gold Labels

The correct labels for a given task.


Neural Network

A neural network is a group of interconnected units called neurons that send signals to one another. Neurons can be either biological cells or mathematical models.


Reinforcement Learning

Reinforcement learning is a subfield of machine learning in which agents learn to make decisions by interacting with an environment and receiving rewards as feedback.


API

Application Programming Interface. Enables different systems to interact with each other programmatically. Two types of APIs are REST APIs (web APIs) and native-library APIs.
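
As a hypothetical illustration, here is what calling a REST API from Python might look like (the URL is made up):

```python
# Hypothetical REST API call using the `requests` library.
import requests

response = requests.get("https://api.example.com/v1/models")  # made-up endpoint
if response.ok:
    print(response.json())  # the API returns structured data (JSON) our program can use
```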


Exemplars

Examples of the task that the prompt is trying to solve, which are included in the prompt itself.


LLM

An LLM (Large Language Model) is a model that has been trained on a large amount of text.


text-davinci-003

A Large Language Model (LLM) developed by OpenAI as a part of the GPT-3.5 series.


text-davinci-002

A Large Language Model (LLM) developed by OpenAI as a part of the GPT-3.5 series.


Context Length

The number of tokens a model can process at once.
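
A rough sketch of counting tokens with the tiktoken library (our own choice of tokenizer for illustration; the exact count depends on the model's tokenizer):

```python
# Counting tokens to check whether text fits within a model's context length.
import tiktoken

encoding = tiktoken.get_encoding("cl100k_base")  # a common OpenAI tokenizer
text = "The dog is chasing the cat."
num_tokens = len(encoding.encode(text))
print(num_tokens)  # the prompt plus the model's output must fit in the context length
```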


Footnotes

  1. Branch, H. J., Cefalu, J. R., McHugh, J., Hujer, L., Bahl, A., del Castillo Iglesias, D., Heichman, R., & Darwishi, R. (2022). Evaluating the Susceptibility of Pre-Trained Language Models via Handcrafted Adversarial Examples.

  2. Schick, T., & Schütze, H. (2020). Exploiting Cloze Questions for Few Shot Text Classification and Natural Language Inference.
