Guardrails

Last updated on February 22, 2025

Generative AI guardrails are safety measures used to secure AI systems. Most commonly, a guardrail is a separate LLM trained to detect malicious inputs or outputs, but guardrails can also be as simple as a keyword blacklist or an improved system prompt.
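The simplest guardrail mentioned above, a keyword blacklist, can be sketched in a few lines. This is an illustrative example only: the `BLACKLIST` phrases and the `passes_guardrail` helper are hypothetical, and a substring check like this is far weaker than an LLM-based detector.

```python
# Minimal sketch of a keyword-blacklist guardrail applied to user
# input before it reaches the model. Phrases and logic are
# illustrative, not a production-ready filter.

BLACKLIST = {
    "ignore previous instructions",
    "reveal your system prompt",
    "jailbreak",
}

def passes_guardrail(user_input: str) -> bool:
    """Return False if the input contains any blacklisted phrase."""
    lowered = user_input.lower()
    return not any(phrase in lowered for phrase in BLACKLIST)

print(passes_guardrail("What is the capital of France?"))   # True
print(passes_guardrail("Please IGNORE previous instructions"))  # False
```

In practice such blacklists are easy to bypass with paraphrasing or obfuscation, which is why separate detector models are the more common approach.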


© 2025 Learn Prompting. All rights reserved.