
🟢 Chain-of-Dictionary (CoD)

🟢 This article is rated easy
Reading Time: 4 minutes
Last updated on March 11, 2025

Valeriia Kuka

Large language models (LLMs) are capable of high-quality machine translation without task-specific training. In a standard translation prompt, an LLM is simply instructed to translate text from one language to another, for example:

Astronaut

Standard Translation Prompt


Translate the following sentence from French to English:
[Input sentence in French]

Despite extensive multilingual training, LLMs can struggle with rare or low-frequency words—especially in low-resource language scenarios. To address this, the Chain-of-Dictionary (CoD) prompting technique incorporates external multilingual dictionaries into the translation process. This method enriches the translation prompt with explicit lexical cues, thereby bridging gaps in the model's internal knowledge.

What is Chain-of-Dictionary (CoD) Prompting?

Chain-of-Dictionary (CoD) is a novel technique designed to improve multilingual neural machine translation (MNMT) by adding chained multilingual dictionary entries to the prompt. Rather than relying solely on the model's internal representations, CoD augments the translation task with explicit translations of key words in several auxiliary languages. For example, a CoD prompt might look like this:

Astronaut

Template


Translation Prompt:


Translate the following text from [source-language] into [target-language]: [source-sentence]


Chained Multilingual Dictionaries:


[word X in source-language] means [word X in target-language] means [word X in auxiliary-language 1] means [word X in auxiliary-language 2]

By including this chained lexical information, the model receives additional context that narrows the translation space, leading to improved handling of rare or ambiguous terms.
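
Because each chained entry is just a formatted string, it is straightforward to assemble programmatically. Below is a minimal Python sketch of a helper that produces one chained line in the format above; the function name and data layout are illustrative and not part of the original paper.

```python
def format_chain(source_word: str, translations: list[tuple[str, str]]) -> str:
    """Build one chained dictionary line, e.g.
    '"mice" means "எலி" (Tamil) means "Maus" (German) means "souris" (French).'

    translations: (translated word, language name) pairs, with the target
    language first and auxiliary languages after it.
    """
    parts = [f'"{source_word}"'] + [f'"{word}" ({lang})' for word, lang in translations]
    return " means ".join(parts) + "."


print(format_chain("mice", [("எலி", "Tamil"), ("Maus", "German"), ("souris", "French")]))
# "mice" means "எலி" (Tamil) means "Maus" (German) means "souris" (French).
```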

How CoD Works

  1. Standard Translation Prompt: The prompt begins with a simple translation instruction, for example:

    Astronaut

    Standard Translation Prompt


    Translate the following text from English into Tamil:


    "We now have 4-month-old mice that are non-diabetic that used to be diabetic," he added.

    This is the baseline instruction that tells the LLM what translation task to perform. It is similar to how you might normally ask a translator to convert text from one language to another.

  2. Multilingual Dictionary Chain: Before the translation task, CoD introduces a dictionary-based lexical hint in multiple languages. For example:

    Astronaut

    Multilingual Dictionary Chain


    "mice" means "எலி" (Tamil) means "Maus" (German) means "souris" (French).


    "non-diabetic" means "சர்க்கரைநோயற்ற" (Tamil) means "nicht-diabetisch" (German) means "non diabétique" (French).

    This section provides the model with multiple translations for key words. The "chain" connects the source word with its translations in several auxiliary languages, offering richer context than a simple bilingual dictionary.

  3. CoD Prompt:

    Astronaut

    CoD Prompt


    Translate the following text from English into Tamil: "We now have 4-month-old mice that are non-diabetic that used to be diabetic," he added.


    "mice" means "எலி" (Tamil) means "Maus" (German) means "souris" (French).


    "non-diabetic" means "சர்க்கரைநோயற்ற" (Tamil) means "nicht-diabetisch" (German) means "non diabétique" (French).

    The LLM incorporates the multilingual dictionary hints while generating the translation, leading to more accurate and natural results.

By having this chained information upfront, the model can better disambiguate word meanings and choose more appropriate target language equivalents. This is particularly effective for words that are rare or ambiguous in the target language.
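
To make this concrete in code, here is a hedged Python sketch that assembles the CoD prompt from step 3 and sends it to a chat-style LLM. The OpenAI client and model name are assumptions used for illustration; any instruction-following model can consume the same prompt.

```python
from openai import OpenAI  # assumed client; any chat-style LLM API would work

instruction = (
    "Translate the following text from English into Tamil: "
    '"We now have 4-month-old mice that are non-diabetic that used to be diabetic," he added.'
)

dictionary_chains = [
    '"mice" means "எலி" (Tamil) means "Maus" (German) means "souris" (French).',
    '"non-diabetic" means "சர்க்கரைநோயற்ற" (Tamil) means "nicht-diabetisch" (German) '
    'means "non diabétique" (French).',
]

# The CoD prompt is simply the translation instruction plus the chained hints.
cod_prompt = instruction + "\n\n" + "\n".join(dictionary_chains)

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment
response = client.chat.completions.create(
    model="gpt-4o",  # placeholder model name
    messages=[{"role": "user", "content": cod_prompt}],
)
print(response.choices[0].message.content)
```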

Comparison with Existing Techniques

Unlike standard prompting or few-shot in-context learning (ICL), CoD uses chained multilingual hints. The following table summarizes the differences:

| Technique | Description | Strengths | Weaknesses |
| --- | --- | --- | --- |
| Standard Prompting | Direct translation prompt without extra guidance. | Simple and fast. | Struggles with rare words in low-resource languages. |
| Few-shot In-Context Learning (ICL) | Uses a few example translations in the prompt. | Effective for high-resource languages. | May not retrieve relevant examples for rare languages. |
| Bilingual Dictionary Prompting | Provides source-target dictionary mappings. | Improves translation quality. | Lacks auxiliary language context. |
| CoD (Multilingual Dictionary Chaining) | Augments the prompt with chained dictionary hints in multiple languages. | Significantly improves translation quality, especially in low-resource scenarios. | Slightly increases prompt length and computational cost. |

How to Use CoD

  1. The LLM identifies key content words (nouns, adjectives, etc.) in the source sentence using the keyword-extraction prompt proposed by the authors:

    Astronaut

    Prompt


    Extract the words from the following texts: [input-sentence]

  2. These key words are translated into several languages using off-the-shelf translation models (e.g., NLLB).

  3. The multiple translations are formatted into a chained structure. This chain is then prepended to the translation prompt, providing cross-lingual cues.

    Astronaut

    Template


    Translation Prompt:


    Translate the following text from [source-language] into [target-language]: [source-sentence]


    Chained Multilingual Dictionaries:


    [word X in source-language] means [word X in target-language] means [word X in auxiliary-language 1] means [word X in auxiliary-language 2]

  4. The LLM processes the enhanced prompt, incorporating the chained lexical hints to generate a more accurate translation—particularly for rare or ambiguous words. A minimal end-to-end sketch of this pipeline is shown below.
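
The following Python sketch puts the four steps together: keyword extraction with the prompt from step 1, a stand-in lookup for the off-the-shelf translation step (e.g., NLLB), chain assembly, and the final CoD prompt. The function names, the comma-separated parsing of the extraction output, and the LLM call are illustrative assumptions rather than the paper's exact implementation.

```python
from openai import OpenAI

client = OpenAI()  # assumed client; any chat-style LLM would work


def ask_llm(prompt: str) -> str:
    """Send a single-turn prompt to the LLM and return its text reply."""
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder model name
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content


def lookup_translations(word: str, languages: list[str]) -> list[tuple[str, str]]:
    """Hypothetical stand-in for an off-the-shelf translator such as NLLB:
    return (translation, language) pairs for the given word."""
    raise NotImplementedError("plug in NLLB or a multilingual dictionary here")


def cod_translate(sentence: str, source: str, target: str, auxiliaries: list[str]) -> str:
    # Step 1: extract key content words from the source sentence.
    extracted = ask_llm(f"Extract the words from the following texts: {sentence}")
    keywords = [w.strip() for w in extracted.split(",") if w.strip()]  # assumed comma-separated reply

    # Steps 2-3: translate each keyword and format the chained dictionary hints.
    chains = []
    for word in keywords:
        pairs = lookup_translations(word, [target] + auxiliaries)
        chains.append(" means ".join([f'"{word}"'] + [f'"{t}" ({lang})' for t, lang in pairs]) + ".")

    # Step 4: prepend the chains to the translation prompt and translate.
    cod_prompt = "\n".join(chains) + (
        f"\n\nTranslate the following text from {source} into {target}: {sentence}"
    )
    return ask_llm(cod_prompt)
```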

Practical Applications

CoD prompting offers significant benefits in various scenarios:

  • Low-resource languages: Improves translation quality where training data is scarce.

  • Domain-specific translations: Improves accuracy for technical, medical, or legal documents by providing precise lexical context.

  • Multilingual AI assistants: Supports more accurate and context-aware translations across multiple languages.

Example of CoD in Action

Astronaut

CoD Prompt


"eighteen" means "kumi na nane" means "dix-huit" (French) means "achtzehn" (German).


"medals" means "medali" means "médailles" (French) means "Medaillen" (German).


"failed" means "wameshindwa" means "échoué" (French) means "gescheitert" (German).


Translate the following text from English into Swahili:
"With only eighteen medals available a day, a number of countries have failed to make the medal podium."

Robot

AI Output


Kwa kuwa medali kumi na nane zinapatikana kwa siku moja tu, nchi kadhaa zimeshindwa kufikia jukwaa la medali.

Here, the chained hints for "eighteen," "medals," and "failed" provide the necessary context, leading to a more precise and natural translation.
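
With the hypothetical cod_translate helper sketched earlier (and a real dictionary lookup wired in), this example could be reproduced along these lines:

```python
translation = cod_translate(
    sentence=("With only eighteen medals available a day, a number of countries "
              "have failed to make the medal podium."),
    source="English",
    target="Swahili",
    auxiliaries=["French", "German"],
)
print(translation)
```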

Conclusion

Chain-of-Dictionary prompting is a powerful enhancement to LLM-based machine translation. By integrating chained multilingual dictionaries into the translation prompt, CoD significantly improves accuracy—especially for low-resource languages—and can outperform traditional prompting methods as well as few-shot learning approaches. Its reliance on readily available dictionary resources makes it both practical and scalable for real-world translation applications.

Note

You can find the CoD code and resources here.

Valeriia Kuka

Valeriia Kuka, Head of Content at Learn Prompting, is passionate about making AI and ML accessible. Valeriia previously grew a 60K+ follower AI-focused social media account, earning reposts from Stanford NLP, Amazon Research, Hugging Face, and AI researchers. She has also worked with AI/ML newsletters and global communities with 100K+ members and authored clear and concise explainers and historical articles.

Footnotes

  1. Lu, H., Yang, H., Huang, H., Zhang, D., Lam, W., & Wei, F. (2024). Chain-of-Dictionary Prompting Elicits Translation in Large Language Models. https://arxiv.org/abs/2305.06575