
Fix Deformed Generations

Last updated on August 7, 2024

Sander Schulhoff

Takeaways
  • Prevent deformities by using negative prompts (for example, negatively weighted terms) that tell the model to de-emphasize those features.

What are Deformed Generations?

Deformed generations, particularly of human body parts (e.g., hands and feet), are a common issue with many models. They can be mitigated to some extent with well-crafted negative prompts. The following example is adapted from a Reddit post (see Footnotes).

An Example of Fixing Deformed Generations

Using Stable Diffusion v1.5 and the following prompt, we generate a nice image of Brad Pitt, except for his hands, of course!

[Generated image: a studio portrait of Brad Pitt; the hands are visibly deformed]

Prompt

studio medium portrait of Brad Pitt waving his hands, detailed, film, studio lighting, 90mm lens, by Martin Schoeller:6

Using a robust negative prompt, we can generate much more convincing hands.

[Generated image: a studio portrait of Brad Pitt with convincing, well-formed hands]

Prompt

studio medium portrait of Brad Pitt waving his hands, detailed, film, studio lighting, 90mm lens, by Martin Schoeller:6 | disfigured, deformed hands, blurry, grainy, broken, cross-eyed, undead, photoshopped, overexposed, underexposed, low-res, bad anatomy, bad hands, extra digits, fewer digits, bad digit, bad ears, bad eyes, bad face, cropped: -5

Using a similar negative prompt can help with other body parts as well. Unfortunately, the technique is not consistently effective, so you may need multiple generations before getting a good result. As models improve, this type of prompting should eventually become unnecessary; for now, it remains a very useful technique.
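If you generate images programmatically rather than through a UI, the same idea applies: most diffusion libraries accept the negative terms as a separate input rather than the weighted :6 / : -5 syntax shown above. Below is a minimal sketch, assuming Hugging Face's diffusers library, which exposes this through the negative_prompt argument; the checkpoint name and output filename are illustrative.

```python
import torch
from diffusers import StableDiffusionPipeline

# Load Stable Diffusion v1.5 (checkpoint name is illustrative; use any
# compatible repository id from the Hugging Face Hub).
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    torch_dtype=torch.float16,
).to("cuda")

prompt = (
    "studio medium portrait of Brad Pitt waving his hands, detailed, "
    "film, studio lighting, 90mm lens, by Martin Schoeller"
)

# Terms to steer the model away from during denoising; mirrors the
# negative prompt from the example above.
negative_prompt = (
    "disfigured, deformed hands, blurry, grainy, broken, cross-eyed, "
    "undead, photoshopped, overexposed, underexposed, low-res, "
    "bad anatomy, bad hands, extra digits, fewer digits, bad digit, "
    "bad ears, bad eyes, bad face, cropped"
)

image = pipe(prompt, negative_prompt=negative_prompt).images[0]
image.save("portrait.png")
```

Because negative prompts are applied through classifier-free guidance, they push each denoising step away from the listed concepts rather than filtering the finished image, which is why they influence anatomy rather than just style.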

Conclusion

AI models are prone to producing deformed generations, especially when it comes to human body parts. Fortunately, we can include negative prompts like the one in the example above to fix these bad outputs.

FAQ

Why might I want to include negative prompts in my generative AI inputs?

Negative prompts, combined with techniques such as weighted terms, can improve image generations by explicitly instructing the model to de-emphasize deformed features.

Will fixing deformed generations always be necessary?

As generative models improve, the techniques described in this article will likely become less necessary.

Notes

Improved models such as Protogen are often better with hands, feet, etc.
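If you use a library like diffusers (as in the sketch above), trying such a model is just a checkpoint swap. The Protogen repository id below is an assumption, so check the Hugging Face Hub for the exact name.

```python
# Hypothetical checkpoint swap; the repo id is an assumption.
pipe = StableDiffusionPipeline.from_pretrained(
    "darkstorm2150/Protogen_x3.4_Official_Release",
    torch_dtype=torch.float16,
).to("cuda")
```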

Footnotes

  1. Blake. (2022). With the right prompt, Stable Diffusion 2.0 can do hands. https://www.reddit.com/r/StableDiffusion/comments/z7salo/with_the_right_prompt_stable_diffusion_20_can_do/
