Announcing our new Paper: The Prompt Report, with Co-authors from OpenAI & Microsoft!
Check it out →Code injection1 is a prompt hacking exploit where the attacker is able to get the LLM to run arbitrary code (often Python). This can occur in tool-augmented LLMs, where the LLM is able to send code to an interpreter, but it can also occur when the LLM itself is used to evaluate code.
Code injection has reportedly been performed on an AI app, MathGPT and was used to obtain it's OpenAI API key (MITRE report).
MathGPT has since been secured against code injection. Please do not attempt to hack it; they pay for API calls.
Let's work with a simplified example of the MathGPT app. We will assume that it takes in a math problem and writes Python code to try to solve the problem.
Here is the prompt that the simplified example app uses:
Write Python code to solve the following math problem:
{{user_input}}
Let's hack it here:
Code injection is a sophisticated hacking technique that takes advantage of ChatGPT's ability to interpret Python code. Even with the simple example of shown in this article, it is clear that this exploit is significant and dangerous.
Kang, D., Li, X., Stoica, I., Guestrin, C., Zaharia, M., & Hashimoto, T. (2023). Exploiting Programmatic Behavior of LLMs: Dual-Use Through Standard Security Attacks. ↩
Sign up and get the latest AI news, prompts, and tools.
Join 30,000+ readers from companies like OpenAI, Microsoft, Google, Meta and more!