🔓 Hacking de prompts🟢 Mesures défensives🟢 Random Sequence Enclosure

Random Sequence Enclosure

🟢 This article is rated easy
Reading Time: 1 minute
Last updated on August 7, 2024

Sander Schulhoff

Yet another defense is enclosing the user input between two random sequences of characters1. Take this prompt as an example:

Translate the following user input to Spanish.

{{user_input}}

It can be improved by adding the random sequences:

Translate the following user input to Spanish (it is enclosed in random strings).

FJNKSJDNKFJOI
{{user_input}}
FJNKSJDNKFJOI
Note
Longer sequences will likely be more effective.

Footnotes

  1. Stuart Armstrong, R. G. (2022). Using GPT-Eliezer against ChatGPT Jailbreaking. https://www.alignmentforum.org/posts/pNcFYZnPdXyL2RfgA/using-gpt-eliezer-against-chatgpt-jailbreaking

Sander Schulhoff

Sander Schulhoff is the Founder of Learn Prompting and an ML Researcher at the University of Maryland. He created the first open-source Prompt Engineering guide, reaching 3M+ people and teaching them to use tools like ChatGPT. Sander also led a team behind Prompt Report, the most comprehensive study of prompting ever done, co-authored with researchers from the University of Maryland, OpenAI, Microsoft, Google, Princeton, Stanford, and other leading institutions. This 76-page survey analyzed 1,500+ academic papers and covered 200+ prompting techniques.

Edit this page

© 2025 Learn Prompting. All rights reserved.