
Other Approaches

🟒 This article is rated easy
Reading Time: 1 minute

Last updated on October 23, 2024

Takeaways
  • There are a number of other techniques that can be used to identify and defend against adversarial prompts.

What are other approaches to defense against prompt hacking?

Although the previous approaches can be very robust, a few other approaches, such as using a different model, fine-tuning, soft prompting, and restricting input length, can also be effective.

Using a Different Model

More modern models, such as GPT-4, are more robust against prompt injection. Additionally, non-instruction-tuned models may be more difficult to prompt inject, since they are not trained to follow injected instructions in the first place.

Fine Tuning

Fine-tuning the model on examples of your task is a highly effective defense: at inference time, no system prompt is needed, so the only prompt involved is the user input itself, and there are no developer instructions for an attacker to override. This is likely the preferable defense in any high-value situation, since it is so robust. However, it requires a large amount of data and may be costly, which is why this defense is not frequently implemented.
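
To illustrate the workflow, here is a minimal sketch using the OpenAI fine-tuning API. The file name, base model, and fine-tuned model ID are placeholders, and job polling and error handling are omitted.

```python
from openai import OpenAI

client = OpenAI()

# 1. Upload in-domain training examples (a JSONL file of chat-formatted
#    prompt/response pairs). "task_examples.jsonl" is an illustrative name.
training_file = client.files.create(
    file=open("task_examples.jsonl", "rb"),
    purpose="fine-tune",
)

# 2. Start a fine-tuning job on a base model (illustrative model name).
job = client.fine_tuning.jobs.create(
    training_file=training_file.id,
    model="gpt-4o-mini-2024-07-18",
)

# 3. Once the job finishes, inference needs no system prompt at all:
#    only the raw user input is sent, so there are no developer
#    instructions for an injected prompt to override.
user_input = "Summarize this support ticket: ..."  # untrusted user input
response = client.chat.completions.create(
    model="ft:gpt-4o-mini-2024-07-18:my-org::abc123",  # placeholder fine-tuned model ID
    messages=[{"role": "user", "content": user_input}],
)
print(response.choices[0].message.content)
```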

Soft Prompting

Soft prompting might also be effective, since it replaces the discrete, human-readable prompt with learned continuous embeddings, leaving the user input as the only discrete text an attacker can manipulate. Like fine-tuning, soft prompting requires training, so it shares many of the same benefits, but it will likely be cheaper because only the soft prompt vectors are updated. However, soft prompting is not as well studied as fine-tuning, so it is unclear how effective it is.
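
The core idea can be sketched with Hugging Face Transformers and PyTorch: the task "instructions" live in trained continuous vectors prepended to the embedded user input, rather than in readable tokens. The model name and number of soft tokens below are illustrative, and the training loop is omitted.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # illustrative; any causal LM works
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# Freeze the base model; only the soft prompt is trained.
for param in model.parameters():
    param.requires_grad = False

# The "prompt" is a small matrix of continuous vectors, not readable text.
num_soft_tokens = 20
embed_dim = model.get_input_embeddings().embedding_dim
soft_prompt = torch.nn.Parameter(torch.randn(num_soft_tokens, embed_dim) * 0.02)

def forward_with_soft_prompt(user_input: str):
    ids = tokenizer(user_input, return_tensors="pt").input_ids
    token_embeds = model.get_input_embeddings()(ids)          # (1, seq, dim)
    prefix = soft_prompt.unsqueeze(0)                          # (1, soft, dim)
    inputs_embeds = torch.cat([prefix, token_embeds], dim=1)   # prepend soft prompt
    return model(inputs_embeds=inputs_embeds)

# Only the user input is ever discrete text, so there is no written
# instruction prompt for an attacker to read or override.
outputs = forward_with_soft_prompt("Translate to French: I like cats.")
```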

Length Restrictions

Finally, placing length restrictions on user input can prevent some attacks, such as huge DAN-style prompts, while limiting the length of chatbot conversations, as Bing does, can prevent virtualization attacks that are built up over many turns.
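
As a simple illustration, a pre-check like the following (the limits are arbitrary placeholders) can reject oversized inputs and overly long conversations before they ever reach the model:

```python
MAX_INPUT_CHARS = 1000  # illustrative per-message limit
MAX_TURNS = 10          # illustrative cap on conversation length

def accept_user_message(message: str, history: list[str]) -> bool:
    """Return True only if the message passes basic length checks."""
    if len(message) > MAX_INPUT_CHARS:
        return False    # blocks huge DAN-style jailbreak prompts
    if len(history) >= MAX_TURNS:
        return False    # limits multi-turn virtualization attacks
    return True
```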

Conclusion

Using any of the methods in this article, in addition to the techniques introduced in this subsection on defensive measures, can make your model prompts far more robust against attempts to force harmful or biased outputs.

Sander Schulhoff

Sander Schulhoff is the Founder of Learn Prompting and an ML Researcher at the University of Maryland. He created the first open-source Prompt Engineering guide, reaching 3M+ people and teaching them to use tools like ChatGPT. Sander also led a team behind Prompt Report, the most comprehensive study of prompting ever done, co-authored with researchers from the University of Maryland, OpenAI, Microsoft, Google, Princeton, Stanford, and other leading institutions. This 76-page survey analyzed 1,500+ academic papers and covered 200+ prompting techniques.

