Sometimes Gen AIs give incorrect outputs to prompts. For example, they are not very good at multiplying numbers and will often confidently give you an incorrect answer. Try giving ChatGPT a prompt like "What is 923 * 99?".
ChatGPT gave an incorrect answer (the correct answer is 91,377). We can modify our prompt and tell ChatGPT to think through its steps. With that instruction added, ChatGPT responds:
To multiply 923 by 99, we can break it down into smaller steps:
Step 1: Write out the full number of 0s for each term: 923 * 99 = 923 * 990
Step 2: Multiply the first term by 9: 923 * 9 = 8307
Step 3: Multiply the first term by 90 (adding one 0): 923 * 90 = 83070
Step 4: Add the results from step 2 and step 3: 8307 + 83070 = 91377
Therefore, 923 multiplied by 99 is equal to 91,377.
Surprisingly, with these additional instructions, ChatGPT is able to solve the problem. This works because ChatGPT writes out all of its intermediate work, which leaves much less room for mistakes. Gen AI's mathematical abilities can be improved by prompt engineering.
Since ChatGPT is non-deterministic, even this prompt sometimes won't work. You may see answers like 91,177 or 91,077.
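The same technique carries over if you call the model through an API instead of the ChatGPT web interface, and there you can also lower the sampling temperature to reduce this run-to-run variation. Below is a minimal sketch using the openai Python package; the model name and the exact prompt wording are illustrative assumptions, not the precise prompt used above.

```python
from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

response = client.chat.completions.create(
    model="gpt-4o-mini",   # illustrative model name; any chat model works here
    temperature=0,         # lower temperature reduces run-to-run variation
    messages=[
        {
            "role": "user",
            "content": (
                "What is 923 * 99? Think through the multiplication step by step "
                "and write out all of your work before giving the final answer."
            ),
        }
    ],
)

print(response.choices[0].message.content)
```

Even with the temperature set to 0, outputs are not guaranteed to be identical across runs, so it is still worth double-checking the final answer.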
Now, let's try a different example. This time, we will ask ChatGPT to write a marketing tweet about a new, fictional AI product we are thinking of creating: ArchaeologistAI. ArchaeologistAI tells stories about famous archaeologists.
This tweet is not accurate, since ArchaeologistAI only tells stories and does not discover new things. However, this is not ChatGPT's fault! It did not know anything about ArchaeologistAI. Let's include relevant information about the product in the prompt.
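If you are doing this from code, adding that background simply means prepending it to the prompt text. Here is a minimal sketch along those lines; the product description and prompt wording are illustrative, not the exact text used in this example.

```python
from openai import OpenAI

client = OpenAI()

# Illustrative background facts about the (fictional) product.
product_info = (
    "ArchaeologistAI is a chatbot that tells stories about famous archaeologists. "
    "It does not make new archaeological discoveries itself."
)

prompt = (
    f"{product_info}\n\n"
    "Write a short marketing tweet announcing ArchaeologistAI. "
    "Only mention things the product can actually do."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model name
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)
```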
This is much better! Now let's try to make ChatGPT write the tweet in the style of Indiana Jones.
Alright, that may be the message we need to target archaeology fans! By testing multiple prompts, we can see which one gives the best output.
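To compare prompts a bit more systematically than eyeballing them one at a time, you can loop over several variants and read the outputs side by side. A minimal sketch, where the variants are loose paraphrases of the prompts tried above and the model name is again an assumption:

```python
from openai import OpenAI

client = OpenAI()

PRODUCT_INFO = (
    "ArchaeologistAI is a chatbot that tells stories about famous archaeologists."
)

# Loose paraphrases of the prompt variations tried in this article.
prompt_variants = {
    "no context": "Write a marketing tweet about ArchaeologistAI.",
    "with context": f"{PRODUCT_INFO} Write a marketing tweet about ArchaeologistAI.",
    "Indiana Jones style": (
        f"{PRODUCT_INFO} Write a marketing tweet about ArchaeologistAI "
        "in the style of Indiana Jones."
    ),
}

for name, prompt in prompt_variants.items():
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model name
        messages=[{"role": "user", "content": prompt}],
    )
    print(f"--- {name} ---")
    print(response.choices[0].message.content)
    print()
```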
This process of refining our prompt over time is known as prompt engineering. You will rarely write the perfect prompt on your first try, so it is important to get good at iterating on and refining your prompts. Skill in prompt engineering comes mostly from practice (trial and error). The rest of the articles in this section will introduce you to different prompting strategies that you can use in your prompt engineering process.
Sander Schulhoff is the Founder of Learn Prompting and an ML Researcher at the University of Maryland. He created the first open-source Prompt Engineering guide, reaching 3M+ people and teaching them to use tools like ChatGPT. Sander also led the team behind The Prompt Report, the most comprehensive study of prompting ever done, co-authored with researchers from the University of Maryland, OpenAI, Microsoft, Google, Princeton, Stanford, and other leading institutions. This 76-page survey analyzed 1,500+ academic papers and covered 200+ prompting techniques.