Please refer to this page for a list of terms and concepts that we will use throughout this course.
These terms all refer to more or less the same thing: large AI models (neural networks) that have usually been trained on a huge amount of text.
MLMs are a type of NLP model that have a special token, usually [MASK], which is replaced with a word from the vocabulary. The model then predicts the word that was masked. For example, if the sentence is "The dog is [MASK] the cat", the model will predict "chasing" with high probability.
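As a rough sketch, this kind of masked-word prediction can be reproduced with the Hugging Face `transformers` library; the checkpoint name below is an assumption, and the exact predictions depend on the model:

```python
# A minimal sketch of masked-word prediction with the Hugging Face
# `transformers` fill-mask pipeline (the checkpoint name is an assumption).
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="bert-base-uncased")

# BERT uses the literal token [MASK] as its mask placeholder.
for prediction in fill_mask("The dog is [MASK] the cat."):
    print(prediction["token_str"], round(prediction["score"], 3))
```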
The concept of labels is best understood with an example. Say we want to classify some Tweets as mean or not mean. If we have a list of Tweets and their corresponding labels (mean or not mean), we can train a model to classify whether new Tweets are mean or not. Labels are simply the possible categories for a classification task.
All of the possible labels for a given task ('mean' and 'not mean' for the above example).
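For concreteness, here is a tiny sketch of what labeled data and a label space could look like for the Tweet example (the Tweets and labels below are invented):

```python
# A minimal sketch of labels and a label space for the Tweet example.
# The Tweets and their labels are made up for illustration.
labeled_tweets = [
    ("You are the worst", "mean"),
    ("Have a great day!", "not mean"),
]

# The label attached to each Tweet is the target the classifier learns to predict.
for text, label in labeled_tweets:
    print(f"{text!r} -> {label}")

# The label space is the set of all possible labels for the task.
label_space = {"mean", "not mean"}
```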
Sentiment analysis is the task of classifying text into positive, negative, or other sentiments.
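For example, an off-the-shelf sentiment classifier can be run in a few lines; this sketch uses the Hugging Face `transformers` pipeline, which downloads a default model of its own choosing:

```python
# A minimal sketch of sentiment analysis with the `transformers` pipeline.
from transformers import pipeline

classifier = pipeline("sentiment-analysis")
print(classifier("I love hotpockets"))
# e.g. [{'label': 'POSITIVE', 'score': 0.99...}] (exact score depends on the model)
```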
These terms are used somewhat interchangeably throughout this course, but they do not always mean the same thing. LLMs are a type of AI, as noted above, but not all AIs are LLMs. When we mention models in this course, we are referring to AI models, so you can consider the terms "model" and "AI" to be interchangeable here.
ML is a field of study that focuses on algorithms that can learn from data. ML is a subfield of AI.
In the classification setting, verbalizers are mappings from labels to words in a language model's vocabulary[2]. For example, consider performing sentiment classification with the following prompt:

```
Tweet: "I love hotpockets"
What is the sentiment of this tweet? Say 'pos' or 'neg'.
```

Here, the verbalizer is the mapping from the conceptual labels positive and negative to the tokens pos and neg.
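In code, a verbalizer can be as simple as a dictionary. The sketch below, with a hypothetical `token_to_label` helper, maps the model's answer token back to a conceptual label:

```python
# A minimal sketch of a verbalizer: a mapping from conceptual labels to the
# tokens the model is asked to produce. The helper function is hypothetical.
verbalizer = {
    "positive": "pos",
    "negative": "neg",
}

def token_to_label(model_output: str) -> str:
    """Map the model's answer token back to a conceptual label."""
    inverse = {token: label for label, token in verbalizer.items()}
    return inverse.get(model_output.strip().lower(), "unknown")

print(token_to_label("pos"))  # -> positive
```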
RLHF is a method for fine-tuning LLMs according to human preference data.
Text or other input provided to a generative AI.
Standard prompts that have exemplars in them. Exemplars are examples of the task that the prompt is trying to solve, which are included in the prompt itself.
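A few-shot prompt for the Tweet task might look like the sketch below; the exemplars are invented for illustration:

```python
# A minimal sketch of a few-shot prompt. The first two lines are exemplars that
# show the model the task format; the last line is the new input to classify.
few_shot_prompt = """\
Tweet: "I hate Mondays" -> mean
Tweet: "Have a great day!" -> not mean
Tweet: "You are the worst" ->"""

print(few_shot_prompt)
```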
The main idea of CoT is that by showing the LLM a few exemplars in which the reasoning process is spelled out, the LLM will also show its reasoning process when answering the prompt.
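A chain-of-thought exemplar might look like the sketch below; the arithmetic problems are in the style of commonly used CoT demonstrations, not a prescribed format:

```python
# A minimal sketch of a chain-of-thought prompt: the exemplar's answer spells out
# the reasoning so the model is encouraged to do the same for the new question.
cot_prompt = """\
Q: Roger has 5 tennis balls. He buys 2 more cans of 3 tennis balls each. How many tennis balls does he have now?
A: Roger started with 5 balls. 2 cans of 3 balls each is 6 balls. 5 + 6 = 11. The answer is 11.

Q: The cafeteria had 23 apples. They used 20 to make lunch and bought 6 more. How many apples do they have?
A:"""

print(cot_prompt)
```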
Generating multiple chains of thought and taking the majority answer.
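The aggregation step is just a majority vote; the sketch below hard-codes the sampled answers rather than calling an LLM:

```python
# A minimal sketch of self-consistency: sample several chain-of-thought answers
# and keep the majority. The sampled answers here are hard-coded placeholders.
from collections import Counter

sampled_answers = ["9", "9", "8", "9", "9"]
majority_answer, votes = Counter(sampled_answers).most_common(1)[0]
print(majority_answer, votes)  # -> 9 4
```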
Pre-training is the initial process of training a neural network on a large amount of data before later 'fine-tuning'.
A function that converts a vector of numbers into a probability distribution.
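For example, a common NumPy implementation looks like this (a sketch, not tied to any particular library):

```python
# A minimal sketch of the softmax function using NumPy.
import numpy as np

def softmax(x: np.ndarray) -> np.ndarray:
    # Subtracting the max improves numerical stability without changing the result.
    exps = np.exp(x - np.max(x))
    return exps / exps.sum()

print(softmax(np.array([2.0, 1.0, 0.1])))  # the outputs are positive and sum to 1
```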
The correct labels for a given task.
A neural network is a group of interconnected units called neurons that send signals to one another. Neurons can be either biological cells or mathematical models.
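As a tiny illustration of the mathematical kind of neuron, a single unit just computes a weighted sum of its inputs and passes it through an activation function (the numbers below are made up):

```python
# A minimal sketch of a single artificial neuron: weighted sum plus activation.
# The inputs, weights, and bias are arbitrary values for illustration.
import numpy as np

inputs = np.array([0.5, -1.0, 2.0])
weights = np.array([0.8, 0.2, -0.5])
bias = 0.1

output = np.tanh(np.dot(weights, inputs) + bias)
print(output)
```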
Reinforcement learning is a subfield of machine learning in which agents learn to make decisions by interacting with an environment and receiving feedback in the form of rewards.
Application Programming Interface. Enables different systems to interact with each other programmatically. Two types of APIs are REST APIs (web APIs) and native-library APIs.
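For instance, calling a REST API from code usually amounts to an HTTP request; the endpoint below is hypothetical, not a real service:

```python
# A minimal sketch of calling a REST API with the `requests` library.
# The URL and the shape of the JSON response are assumptions.
import requests

response = requests.get("https://api.example.com/v1/models")
response.raise_for_status()
print(response.json())
```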
An LLM (Large Language Model) is a neural network trained on large amounts of text to understand and generate language.
A Large Language Model (LLM) developed by OpenAI as a part of the GPT-3.5 series.
The number of tokens a model can process at once.
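To get a feel for how quickly a context window fills up, you can count tokens with a tokenizer library such as `tiktoken`; the encoding name below is an assumption, since different models use different tokenizers:

```python
# A minimal sketch of counting tokens with `tiktoken` (the encoding name is an
# assumption; different models use different tokenizers).
import tiktoken

encoding = tiktoken.get_encoding("cl100k_base")
tokens = encoding.encode("The dog is chasing the cat.")
print(len(tokens))
```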
1. Branch, H. J., Cefalu, J. R., McHugh, J., Hujer, L., Bahl, A., del Castillo Iglesias, D., Heichman, R., & Darwishi, R. (2022). Evaluating the Susceptibility of Pre-Trained Language Models via Handcrafted Adversarial Examples.
2. Schick, T., & Schütze, H. (2020). Exploiting Cloze Questions for Few Shot Text Classification and Natural Language Inference.