Modular Reasoning, Knowledge, and Language (MRKL) Systems1 (pronounced "miracle") are a neuro-symbolic architecture that combines Large Language Models (neural computation) with external tools like calculators (symbolic computation) to solve complex problems.
A MRKL system consists of a set of modules (e.g., a calculator, weather API, database) and a router that decides how to 'route' incoming natural language queries to the appropriate module.
A simple example of a MRKL system is an LLM that can use a calculator app. This is a single-module system, where the LLM acts as the router. When asked, "What is 100 * 100?", the LLM extracts the numbers from the prompt and directs the MRKL system to use the calculator to compute the result, like this:
What is 100*100?
CALCULATOR[100*100]
The MRKL system identifies the CALCULATOR command and inputs 100*100 into the calculator app.
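To make the routing concrete, here is a minimal sketch of a single-module MRKL system in JavaScript (the language the Dust example later in this article also uses for its code block). The callLLM function is a stand-in for whatever LLM API you use, and the calculator module only handles two-operand arithmetic; both are assumptions of mine, not part of the original paper.

```javascript
// Hypothetical LLM call; swap in a real API (OpenAI, Jurassic-1, ...).
// To keep the sketch runnable, it just returns the routing shown above.
async function callLLM(prompt) {
  return "CALCULATOR[100*100]";
}

// A tiny "calculator" module (assumption: two-operand +, -, *, / only).
function calculator(expression) {
  const match = expression.match(/^(-?\d+(?:\.\d+)?)\s*([+\-*\/])\s*(-?\d+(?:\.\d+)?)$/);
  if (!match) throw new Error(`Unsupported expression: ${expression}`);
  const [, a, op, b] = match;
  const x = parseFloat(a), y = parseFloat(b);
  return { "+": x + y, "-": x - y, "*": x * y, "/": x / y }[op];
}

// The LLM acts as the router: it rewrites the question as a CALCULATOR[...] call,
// and the MRKL system executes that call symbolically.
async function answer(question) {
  const prompt =
    `Rewrite the question as a calculator call, e.g. CALCULATOR[100*100].\n` +
    `Question: ${question}\nTool call:`;
  const completion = await callLLM(prompt);
  const call = completion.match(/CALCULATOR\[(.+?)\]/);
  if (call) return calculator(call[1]);
  return completion; // fall back to the raw LLM output
}

answer("What is 100 * 100?").then(console.log); // 10000
```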
This simple idea can easily be expanded to many other symbolic computing tools.
Consider the following additional examples of applications:
What is the price of Apple stock right now?
The current price is DATABASE[SELECT price FROM stock WHERE company = "Apple" AND time = "now"].
What is the weather like in New York?
The weather is WEATHER_API[New York].
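Sticking with the earlier sketch, adding more modules mostly means adding entries to a dispatch table. The DATABASE and WEATHER_API bodies below are made-up stubs (there is no real SQL engine or weather service behind them); callLLM and calculator are reused from the sketch above.

```javascript
// Multi-module routing sketch. The DATABASE and WEATHER_API bodies are
// illustrative stubs, not real integrations.
const modules = {
  CALCULATOR:  async (arg) => calculator(arg),          // calculator() from the sketch above
  DATABASE:    async (arg) => `<result of: ${arg}>`,    // stub: would query a stock database
  WEATHER_API: async (arg) => `<forecast for ${arg}>`,  // stub: would call a weather service
};

async function route(question) {
  const prompt =
    `Answer using tool calls such as CALCULATOR[...], DATABASE[...] or WEATHER_API[...].\n` +
    `Question: ${question}\nAnswer:`;
  const completion = await callLLM(prompt); // same hypothetical LLM call as above

  // Replace each TOOL[argument] in the completion with the module's result.
  let answer = completion;
  for (const [name, run] of Object.entries(modules)) {
    for (const match of completion.matchAll(new RegExp(`${name}\\[(.+?)\\]`, "g"))) {
      answer = answer.replace(match[0], String(await run(match[1])));
    }
  }
  return answer;
}
```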
MRKL systems can handle even more complex tasks that involve multiple data sources. The original AI21 demo shows a step-by-step process in which several modules work together to answer a single query.
I have reproduced an example MRKL System from the original paper using Dust.tt. The system reads a math problem (e.g. "What is 20 times 5^6?"), extracts the numbers and the operations, and reformats them for a calculator app (e.g. 20*5^6). It then sends the reformatted equation to Google's calculator app and returns the result. Note that the original paper performs prompt tuning on the router (the LLM), but I do not in this example. Let's walk through how this works:
First, I made a simple dataset in the Dust Datasets tab.
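The dataset itself is just a handful of question strings; a minimal version might look like the entries below (these are my own illustrative examples, and the "question" field name is an assumption, not necessarily what the playground contains).

```javascript
// Illustrative dataset entries; the "question" field name is an assumption.
const dataset = [
  { question: "What is 20 times 5^6?" },
  { question: "What is 100*100?" },
  { question: "What is 2^10 divided by 8?" },
];
```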
Then, I switched to the Specification tab and loaded the dataset using an input block.
Next, I created an llm block that extracts the numbers and operations. Notice how in the prompt I told it we would be using Google's calculator. The model I use (GPT-3) likely has some knowledge of Google's calculator from pretraining.
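I'm not reproducing the playground's exact prompt here, but an extraction prompt in that spirit could look roughly like this (the wording is my own approximation, and QUESTION stands for the dataset field; Dust's actual templating syntax may differ):

```
Rewrite the question as an equation that can be typed into Google's calculator.

Example: "What is 100 times 100?" -> 100*100

Question: ${QUESTION}
Equation:
```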
Then, I made a code block, which runs some simple JavaScript code to remove spaces from the completion.
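The article doesn't show that snippet, and how the completion gets passed in depends on Dust's code-block interface, so the plain function below is only a sketch of the transformation itself:

```javascript
// Strip all whitespace from the LLM completion before it is sent to the calculator,
// e.g. " 20 * 5^6 " -> "20*5^6".
function removeSpaces(completion) {
  return completion.replace(/\s+/g, "");
}
```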
Finally, I made a search block that sends the reformatted equation to Google's calculator.
Here are the final results, which were all correct!
Feel free to clone and experiment with this MRKL playground.
MRKL was developed by AI21 and originally used their Jurassic-1 (J-1) LLM2.
See this MRKL System built with LangChain.
Karpas, E., Abend, O., Belinkov, Y., Lenz, B., Lieber, O., Ratner, N., Shoham, Y., Bata, H., Levine, Y., Leyton-Brown, K., Muhlgay, D., Rozen, N., Schwartz, E., Shachaf, G., Shalev-Shwartz, S., Shashua, A., & Tenenholtz, M. (2022). MRKL Systems: A modular, neuro-symbolic architecture that combines large language models, external knowledge sources and discrete reasoning. ↩
Lieber, O., Sharir, O., Lenz, B., & Shoham, Y. (2021). Jurassic-1: Technical Details and Evaluation. White paper, AI21 Labs. URL: https://uploads-ssl.webflow.com/60fd4503684b466578c0d307/61138924626a6981ee09caf6_jurassic_tech_paper.pdf ↩