Yet another defense is enclosing the user input between two random sequences of characters. Take this prompt as an example:
Translate the following user input to Spanish.
{{user_input}}
It can be improved by adding the random sequences:
Translate the following user input to Spanish (it is enclosed in random strings).
FJNKSJDNKFJOI
{{user_input}}
FJNKSJDNKFJOI
Sander Schulhoff is the Founder of Learn Prompting and an ML Researcher at the University of Maryland. He created the first open-source Prompt Engineering guide, reaching 3M+ people and teaching them to use tools like ChatGPT. Sander also led a team behind Prompt Report, the most comprehensive study of prompting ever done, co-authored with researchers from the University of Maryland, OpenAI, Microsoft, Google, Princeton, Stanford, and other leading institutions. This 76-page survey analyzed 1,500+ academic papers and covered 200+ prompting techniques.
Stuart Armstrong, R. G. (2022). Using GPT-Eliezer against ChatGPT Jailbreaking. https://www.alignmentforum.org/posts/pNcFYZnPdXyL2RfgA/using-gpt-eliezer-against-chatgpt-jailbreaking β©