Compound Instruction Attack
A Compound Instruction Attack is a prompt injection technique that combines two or more instructions to manipulate an AI system's response. These attacks are particularly effective because they can create complex interactions between different instructions that may bypass the AI's safety measures.
How It Works
Compound Instruction Attacks work by:
- Combining multiple simple instructions
- Creating potential conflicts in instruction processing
- Exploiting the AI's tendency to try to satisfy all given commands
Example
Here's a simple example of a compound instruction attack:

Prompt
This attack combines two instructions:
- Output a specific phrase
- Modify the output format (remove the period)
Why They Work
Compound Instruction Attacks are effective because:
- They create ambiguity in instruction processing
- They can overwhelm the AI's decision-making process
- They may exploit conflicts between different safety measures
Footnotes
-
Schulhoff, S., Pinto, J., Khan, A., Bouchard, L.-F., Si, C., Anati, S., Tagliabue, V., Kost, A. L., Carnahan, C., & Boyd-Graber, J. (2023). Ignore This Title and HackAPrompt: Exposing Systemic Vulnerabilities of LLMs through a Global Scale Prompt Hacking Competition. arXiv Preprint arXiv:2311.16119. β©
Valeriia Kuka
Valeriia Kuka, Head of Content at Learn Prompting, is passionate about making AI and ML accessible. Valeriia previously grew a 60K+ follower AI-focused social media account, earning reposts from Stanford NLP, Amazon Research, Hugging Face, and AI researchers. She has also worked with AI/ML newsletters and global communities with 100K+ members and authored clear and concise explainers and historical articles.