Guardrails
Generative AI guardrails refer to a variety of different safety measures used to secure AI systems. Most commonly, guardrails are separate LLMs trained to detect malicious inputs or outputs, but guardrails can also be as simple as a keyword blacklist or an improved system prompt.
Sander Schulhoff
Sander Schulhoff is the Founder of Learn Prompting and an ML Researcher at the University of Maryland. He created the first open-source Prompt Engineering guide, reaching 3M+ people and teaching them to use tools like ChatGPT. Sander also led a team behind Prompt Report, the most comprehensive study of prompting ever done, co-authored with researchers from the University of Maryland, OpenAI, Microsoft, Google, Princeton, Stanford, and other leading institutions. This 76-page survey analyzed 1,500+ academic papers and covered 200+ prompting techniques.