프롬프트 엔지니어링 가이드
😃 기초
💼 기본 애플리케이션
🧙‍♂️ 중급
🤖 자치령 대표
⚖️ Reliability
🖼️ Image Prompting
🔓 Prompt Hacking
🔨 Tooling
💪 Prompt Tuning
🎲 Miscellaneous
Models
📙 Vocabulary Reference
📚 Bibliography
📦 Prompted Products
🛸 Additional Resources
🔥 Hot Topics
✨ Credits
🔓 Prompt Hacking🟢 Offensive Measures🟢 Code Injection

Code Injection

🟢 This article is rated easy
Reading Time: 1 minute
Last updated on August 7, 2024

Sander Schulhoff

Code injection is a prompt hacking exploit where the attacker is able to get the LLM to run arbitrary code (often Python). This can occur in tool-augmented LLMs, where the LLM is able to send code to an interpreter, but it can also occur when the LLM itself is used to evaluate code.

Code injection has reportedly been performed on an AI app, MathGPT and was used to obtain it's OpenAI API key (MITRE report).

Note

MathGPT has since been secured against code injection. Please do not attempt to hack it; they pay for API calls.

Example

Let's work with a simplified example of the MathGPT app. We will assume that it takes in a math problem and writes Python code to try to solve the problem.

Here is the prompt that the simplified example app uses:

Write Python code to solve the following math problem:
{{user_input}}

Let's hack it here:

This is a simple example, but it shows that this type of exploit is significant and dangerous.

Sander Schulhoff

Sander Schulhoff is the Founder of Learn Prompting and an ML Researcher at the University of Maryland. He created the first open-source Prompt Engineering guide, reaching 3M+ people and teaching them to use tools like ChatGPT. Sander also led a team behind Prompt Report, the most comprehensive study of prompting ever done, co-authored with researchers from the University of Maryland, OpenAI, Microsoft, Google, Princeton, Stanford, and other leading institutions. This 76-page survey analyzed 1,500+ academic papers and covered 200+ prompting techniques.

Footnotes

  1. Kang, D., Li, X., Stoica, I., Guestrin, C., Zaharia, M., & Hashimoto, T. (2023). Exploiting Programmatic Behavior of LLMs: Dual-Use Through Standard Security Attacks.