Human Loop

Sander Schulhoff

Human Loop appears to offer a playground similar to the GPT-3 Playground, but with some added features. The team is currently working with more industry users.

Features

  1. Provides a simple SDK for logging GPT-3 requests and user feedback (see the sketch after this list).
  2. Gives feedback on your inputs and surfaces issues you may be missing.
  3. Lets users log explicit and implicit signals through the SDK.
  4. Lets users easily A/B test models and prompts with an improvement engine built for GPT-3.
  5. Lets users compare prompts across different models to find the best one and reduce cost.
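
The sketch below only illustrates the logging pattern described above: record each model call, then attach explicit or implicit user feedback to it. All class, function, and parameter names (`PromptLogger`, `log_request`, `log_feedback`) are hypothetical and are not the Human Loop SDK's actual API.

```python
# Hypothetical sketch of SDK-style logging for GPT-3 requests and user feedback.
# Names are illustrative only, not the official Human Loop API.
import uuid


class PromptLogger:
    """Minimal in-memory logger for model requests and the feedback they receive."""

    def __init__(self):
        self.records = {}

    def log_request(self, prompt: str, model: str, output: str) -> str:
        """Record a single model call and return an id for attaching feedback later."""
        log_id = str(uuid.uuid4())
        self.records[log_id] = {
            "prompt": prompt,
            "model": model,
            "output": output,
            "feedback": [],
        }
        return log_id

    def log_feedback(self, log_id: str, signal: str, value) -> None:
        """Attach an explicit signal (e.g. a rating) or implicit signal (e.g. the user copied the output)."""
        self.records[log_id]["feedback"].append({"signal": signal, "value": value})


# Usage: log a request, then attach user feedback to it.
logger = PromptLogger()
log_id = logger.log_request(
    prompt="Summarize this support ticket: ...",
    model="gpt-3.5-turbo",
    output="The customer reports a billing error...",
)
logger.log_feedback(log_id, signal="rating", value="thumbs_up")   # explicit signal
logger.log_feedback(log_id, signal="copied", value=True)          # implicit signal
```

Aggregating per-request logs and feedback signals like these is what makes the later features possible: with them you can A/B test prompts, or compare the same prompt across models on real user feedback and cost.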

Sander Schulhoff

Sander Schulhoff is the Founder of Learn Prompting and an ML Researcher at the University of Maryland. He created the first open-source Prompt Engineering guide, reaching 3M+ people and teaching them to use tools like ChatGPT. Sander also led the team behind The Prompt Report, the most comprehensive study of prompting ever done, co-authored with researchers from the University of Maryland, OpenAI, Microsoft, Google, Princeton, Stanford, and other leading institutions. This 76-page survey analyzed 1,500+ academic papers and covered 200+ prompting techniques.