LLMs Using Tools

🟦 This article is rated medium
Reading Time: 3 minutes

Last updated on August 7, 2024

Takeaways
  • MRKL Systems: MRKL systems route natural language queries to the appropriate external module, turning them into API calls and database queries.
  • Complex Integration: More sophisticated MRKL systems combine multiple data sources to perform intricate operations, showcasing their versatility.

What is a MRKL System?

Modular Reasoning, Knowledge, and Language (MRKL, pronounced "miracle") Systems are a neuro-symbolic architecture that combines Large Language Models (neural computation) with external tools like calculators (symbolic computation) to solve complex problems.

A MRKL system consists of a set of modules (e.g., a calculator, weather API, database) and a router that decides how to 'route' incoming natural language queries to the appropriate module.

A simple example of a MRKL system is an LLM that can use a calculator app. This is a single-module system, where the LLM acts as the router. When asked, "What is 100 * 100?", the LLM extracts the numbers from the prompt and directs the MRKL system to use the calculator to compute the result, like this:

Prompt

What is 100*100?

AI Output

CALCULATOR[100*100]

The MRKL system identifies the CALCULATOR command and inputs 100*100 into the calculator app. This simple idea can easily be expanded to many other symbolic computing tools.
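
To make the routing concrete, here is a minimal Python sketch of this single-module setup. The `call_llm` helper and the exact router prompt are assumptions for illustration; only the CALCULATOR[...] command format comes from the example above.

```python
import re

def call_llm(prompt: str) -> str:
    # Hypothetical helper: send the prompt to an LLM and return its completion.
    raise NotImplementedError

def calculator(expression: str) -> float:
    # The calculator module. eval() is used only to keep the sketch short;
    # a real system should use a proper arithmetic parser.
    return eval(expression, {"__builtins__": {}}, {})

ROUTER_PROMPT = (
    "If the question requires arithmetic, respond only with "
    "CALCULATOR[<expression>]. Question: {question}"
)

def answer(question: str) -> str:
    completion = call_llm(ROUTER_PROMPT.format(question=question))
    match = re.search(r"CALCULATOR\[(.+?)\]", completion)
    if match:
        # Route the extracted expression to the calculator module.
        return str(calculator(match.group(1)))
    # No tool command found; return the LLM's answer as-is.
    return completion

# answer("What is 100*100?") would route CALCULATOR[100*100] to the
# calculator and return "10000" (once call_llm is implemented).
```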

Examples of MRKL Systems

Consider the following additional examples of applications; a minimal routing sketch follows them:

  • Financial Database Queries: A chatbot can extract information from a user's text and form a SQL query to fetch the latest stock prices.
Prompt

What is the price of Apple stock right now?

AI Output

The current price is DATABASE[SELECT price FROM stock WHERE company = "Apple" AND time = "now"].

  • Weather Queries: A chatbot can extract location data from a prompt and use a weather API to provide the current forecast.
Prompt

What is the weather like in New York?

AI Output

The weather is WEATHER_API[New York].
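
Both examples follow the same pattern: the LLM emits a tool command inside its answer, and the MRKL system executes it and substitutes the result. Here is a rough Python sketch of a two-module router for the DATABASE and WEATHER_API commands; `run_sql` and `get_weather` are hypothetical stand-ins for a real database client and weather API.

```python
import re

def run_sql(query: str) -> str:
    # Hypothetical database module: run the SQL query and return the result.
    raise NotImplementedError

def get_weather(location: str) -> str:
    # Hypothetical weather module: call a weather API for the given location.
    raise NotImplementedError

# Each module is registered under the command name the LLM is instructed to emit.
MODULES = {
    "DATABASE": run_sql,
    "WEATHER_API": get_weather,
}

def resolve_tool_calls(completion: str) -> str:
    # Replace every MODULE[argument] command in the LLM's output
    # with the result returned by the corresponding module.
    def dispatch(match: re.Match) -> str:
        name, argument = match.group(1), match.group(2)
        return str(MODULES[name](argument))
    return re.sub(r"([A-Z_]+)\[(.+?)\]", dispatch, completion)

# Example, using the outputs shown above:
# resolve_tool_calls("The weather is WEATHER_API[New York].")
# -> "The weather is <current forecast for New York>."
```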

Complex MRKL Systems

MRKL systems can handle even more complex tasks involving multiple data sources. Below is an example from AI21, showcasing how various modules can work together:

Example MRKL System (AI21)

An Example MRKL System in Action

My step-by-step process:

I have reproduced an example MRKL System from the original paper, using Dust.tt. The system reads a math problem (e.g., "What is 20 times 5^6?"), extracts the numbers and the operations, and reformats them for a calculator app (e.g., 20*5^6). It then sends the reformatted equation to Google's calculator app and returns the result. Note that the original paper performs prompt tuning on the router (the LLM), but I do not in this example. Let's walk through how this works:

First, I made a simple dataset in the Dust Datasets tab.

Then, I switched to the Specification tab and loaded the dataset using an input block.

Next, I created an llm block that extracts the numbers and operations. Notice how in the prompt I told it we would be using Google's calculator. The model I use (GPT-3) likely has some knowledge of Google's calculator from pretraining.

Then, I made a code block, which runs some simple JavaScript code to remove spaces from the completion.

Finally, I made a search block that sends the reformatted equation to Google's calculator.

Here are the final results, which were all correct!
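
For readers who would rather see the chain as code than as Dust blocks, here is a rough Python equivalent. The `call_llm` and `google_calculator` helpers are hypothetical stand-ins for Dust's llm and search blocks, and the prompt wording is an assumption; only the overall extract-reformat-compute flow mirrors the walkthrough above.

```python
def call_llm(prompt: str) -> str:
    # Hypothetical helper standing in for Dust's llm block.
    raise NotImplementedError

def google_calculator(equation: str) -> str:
    # Hypothetical helper standing in for Dust's search block (Google's calculator).
    raise NotImplementedError

REFORMAT_PROMPT = (
    "Rewrite the following math problem as an equation that can be typed "
    "into Google's calculator, and output only the equation.\n"
    "Problem: {question}\nEquation:"
)

def solve(question: str) -> str:
    # llm block: extract the numbers and operations as a calculator-ready equation.
    equation = call_llm(REFORMAT_PROMPT.format(question=question))
    # code block: remove spaces from the completion.
    equation = equation.replace(" ", "").strip()
    # search block: send the reformatted equation to Google's calculator.
    return google_calculator(equation)

# Example: solve("What is 20 times 5^6?") reformats the problem to "20*5^6"
# and returns the calculator's result.
```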

Feel free to clone and experiment with this MRKL playground.

Notes

MRKL was developed by AI21 and originally used their Jurassic-1 (J-1) LLM.

More

See this MRKL System built with LangChain.

Sander Schulhoff

Sander Schulhoff is the Founder of Learn Prompting and an ML Researcher at the University of Maryland. He created the first open-source Prompt Engineering guide, reaching 3M+ people and teaching them to use tools like ChatGPT. Sander also led a team behind Prompt Report, the most comprehensive study of prompting ever done, co-authored with researchers from the University of Maryland, OpenAI, Microsoft, Google, Princeton, Stanford, and other leading institutions. This 76-page survey analyzed 1,500+ academic papers and covered 200+ prompting techniques.

Footnotes

  1. Karpas, E., Abend, O., Belinkov, Y., Lenz, B., Lieber, O., Ratner, N., Shoham, Y., Bata, H., Levine, Y., Leyton-Brown, K., Muhlgay, D., Rozen, N., Schwartz, E., Shachaf, G., Shalev-Shwartz, S., Shashua, A., & Tenenholtz, M. (2022). MRKL Systems: A modular, neuro-symbolic architecture that combines large language models, external knowledge sources and discrete reasoning. ↩

  2. Lieber, O., Sharir, O., Lenz, B., & Shoham, Y. (2021). Jurassic-1: Technical Details and Evaluation. White paper, AI21 Labs. URL: https://uploads-ssl.webflow.com/60fd4503684b466578c0d307/61138924626a6981ee09caf6_jurassic_tech_paper.pdf ↩
