Revisiting Roles
Diminished Accuracy Boost in Newer Models
While older models like GPT-3 davinci-002 reaped significant benefits from role prompting, the efficacy of this strategy appears to have diminished with newer models such as GPT-3.5 and GPT-4. This observation is largely anecdotal, based on practical usage rather than rigorous systematic testing.
To illustrate, in previous versions of AI models, assigning the role of "a doctor" or "a lawyer" amplified the relevance and depth of answers in health or legal contexts, respectively. This suggests that role prompts helped raise the model's comprehension of the subject matter at hand.
However, this level of enhancement seems to be less evident in more recent versions. These advanced models already have a sophisticated understanding and are often sufficiently accurate without the need for role-based reinforcement.
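In practice, a role is usually supplied as the system message (or at the start of the prompt). Below is a minimal sketch of the "doctor" example using the OpenAI Python SDK; the model name, role text, and question are placeholders for illustration, not part of the original guide.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# The role goes in the system message; the actual question follows as the user turn.
response = client.chat.completions.create(
    model="gpt-4o",  # placeholder; use whichever chat model you have access to
    messages=[
        {"role": "system", "content": "You are a doctor. Answer health questions carefully and note any caveats."},
        {"role": "user", "content": "What are common causes of persistent fatigue?"},
    ],
)

print(response.choices[0].message.content)
```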
More on Roles
Roles can be much longer than a sentence; they can describe complete, specific tasks for the AI to perform. See a few examples from Awesome ChatGPT Prompts below.
Act as an Etymologist
I want you to act as an etymologist. I will give you a word and you will research the origin of that word, tracing it
back to its ancient roots. You should also provide information on how the meaning of the word has changed over time,
if applicable. My first request is "I want to trace the origins of the word 'pizza'".
Act as an Absurdist
I want you to act as an absurdist. The absurdist's sentences are meaningless. The words used by an absurdist are completely
ridiculous. The absurdist does not make commonplace sentences in any way. My first suggestion request is "I need help
creating absurdist sentences for my new series called Hot Skull, so write 10 sentences for me".
Automatically Create a Role
You can ask the AI to create a role for you! You can then use this role as part of another prompt.
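One simple way to chain this is to first ask the model to write the role, then reuse that text as the system message of a follow-up prompt. The sketch below assumes the OpenAI Python SDK; the task, prompts, and model name are illustrative only.

```python
from openai import OpenAI

client = OpenAI()
MODEL = "gpt-4o"  # placeholder model name

# Step 1: ask the model to write a role prompt for a given task.
role = client.chat.completions.create(
    model=MODEL,
    messages=[{
        "role": "user",
        "content": (
            "Write a role prompt (2-3 sentences, second person) for an AI assistant "
            "that reviews resumes for software engineering positions."
        ),
    }],
).choices[0].message.content

# Step 2: use the generated role as the system message of a new prompt.
answer = client.chat.completions.create(
    model=MODEL,
    messages=[
        {"role": "system", "content": role},
        {"role": "user", "content": "Review this resume bullet: 'Built internal tools.'"},
    ],
).choices[0].message.content

print(answer)
```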
Multiple Personas Working Together
Finally, you can have multiple role-prompted LLMs work together. This can often improve both the accuracy and the quality of the generated text.
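As a rough illustration of the idea (not the exact procedure from the multi-persona paper cited in the footnotes), you can ask the same question under several personas and then make one more call to synthesize the answers. The sketch below assumes the OpenAI Python SDK; the personas, question, and model name are placeholders.

```python
from openai import OpenAI

client = OpenAI()
MODEL = "gpt-4o"  # placeholder model name

question = "Should a small startup build or buy its authentication system?"

# Ask the same question under several different role prompts.
personas = [
    "You are a pragmatic startup CTO focused on shipping quickly.",
    "You are a security engineer focused on risk and compliance.",
    "You are a finance lead focused on long-term cost.",
]

answers = []
for persona in personas:
    reply = client.chat.completions.create(
        model=MODEL,
        messages=[
            {"role": "system", "content": persona},
            {"role": "user", "content": question},
        ],
    ).choices[0].message.content
    answers.append(reply)

# A final call merges the persona answers into one recommendation.
synthesis = client.chat.completions.create(
    model=MODEL,
    messages=[{
        "role": "user",
        "content": "Combine these perspectives into one balanced recommendation:\n\n"
        + "\n\n---\n\n".join(answers),
    }],
).choices[0].message.content

print(synthesis)
```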
Footnotes
1. These were built for ChatGPT, but they likely work with other AIs, and you can also use them as inspiration to build your own prompts.
2. Wang, Z., Mao, S., Wu, W., Ge, T., Wei, F., & Ji, H. (2023). Unleashing Cognitive Synergy in Large Language Models: A Task-Solving Agent through Multi-Persona Self-Collaboration.