πŸ˜ƒ Basics
🧠 Advanced
Zero-Shot
🟒 Introduction
🟒 Emotion Prompting
🟒 Role Prompting
🟒 Re-reading (RE2)
🟒 Rephrase and Respond (RaR)
🟦 SimToM
β—† System 2 Attention (S2A)
Few-Shot
🟒 Introduction
🟒 Self-Ask
🟒 Self Generated In-Context Learning (SG-ICL)
🟒 Chain-of-Dictionary (CoD)
🟒 Cue-CoT
🟦 Chain of Knowledge (CoK)
β—† K-Nearest Neighbor (KNN)
β—†β—† Vote-K
β—†β—† Prompt Mining
Thought Generation
🟒 Introduction
🟒 Chain of Draft (CoD)
🟦 Contrastive Chain-of-Thought
🟦 Automatic Chain of Thought (Auto-CoT)
🟦 Tabular Chain-of-Thought (Tab-CoT)
🟦 Memory-of-Thought (MoT)
🟦 Active Prompting
🟦 Analogical Prompting
🟦 Complexity-Based Prompting
🟦 Step-Back Prompting
🟦 Thread of Thought (ThoT)
Ensembling
🟒 Introduction
🟒 Universal Self-Consistency
🟦 Mixture of Reasoning Experts (MoRE)
🟦 Max Mutual Information (MMI) Method
🟦 Prompt Paraphrasing
🟦 DiVeRSe (Diverse Verifier on Reasoning Step)
🟦 Universal Self-Adaptive Prompting (USP)
🟦 Consistency-based Self-adaptive Prompting (COSP)
🟦 Multi-Chain Reasoning (MCR)
Self-Criticism
🟒 Introduction
🟒 Self-Calibration
🟒 Chain of Density (CoD)
🟒 Chain-of-Verification (CoVe)
🟦 Self-Refine
🟦 Cumulative Reasoning
🟦 Reversing Chain-of-Thought (RCoT)
β—† Self-Verification
Decomposition
🟒 Introduction
🟒 Chain-of-Logic
🟦 Decomposed Prompting
🟦 Plan-and-Solve Prompting
🟦 Program of Thoughts
🟦 Tree of Thoughts
🟦 Chain of Code (CoC)
🟦 Duty-Distinct Chain-of-Thought (DDCoT)
β—† Faithful Chain-of-Thought
β—† Recursion of Thought
β—† Skeleton-of-Thought
πŸ”“ Prompt Hacking
🟒 Defensive Measures
🟒 Introduction
🟒 Filtering
🟒 Instruction Defense
🟒 Post-Prompting
🟒 Random Sequence Enclosure
🟒 Sandwich Defense
🟒 XML Tagging
🟒 Separate LLM Evaluation
🟒 Other Approaches
🟒 Offensive Measures
🟒 Introduction
🟒 Simple Instruction Attack
🟒 Context Ignoring Attack
🟒 Compound Instruction Attack
🟒 Special Case Attack
🟒 Few-Shot Attack
🟒 Refusal Suppression
🟒 Context Switching Attack
🟒 Obfuscation/Token Smuggling
🟒 Task Deflection Attack
🟒 Payload Splitting
🟒 Defined Dictionary Attack
🟒 Indirect Injection
🟒 Recursive Injection
🟒 Code Injection
🟒 Virtualization
🟒 Pretending
🟒 Alignment Hacking
🟒 Authorized User
🟒 DAN (Do Anything Now)
🟒 Bad Chain
πŸ”¨ Tooling
Prompt Engineering IDEs
🟒 Introduction
GPT-3 Playground
Dust
Soaked
Everyprompt
Prompt IDE
PromptTools
PromptSource
PromptChainer
Prompts.ai
Snorkel 🚧
Human Loop
Spellbook 🚧
Kolla Prompt 🚧
Lang Chain
OpenPrompt
OpenAI DALLE IDE
Dream Studio
Patience
Promptmetheus
PromptSandbox.io
The Forge AI
AnySolve
Conclusion
🧠 AdvancedFew-Shot🟦 Chain of Knowledge (CoK)

🟦 Chain of Knowledge (CoK)


Valeriia Kuka

Chain of Knowledge (CoK)¹ addresses limitations in existing prompting techniques, particularly the hallucination issues common in Chain-of-Thought (CoT) prompting. CoK implements a structured approach to knowledge representation and verification through two main components: evidence triples and explanation hints.

Core Components

Evidence Triples (CoK-ET)

Evidence triples are structured knowledge representations in the format (subject, relation, object). These triples provide verifiable atomic facts that serve as the foundation for reasoning.

Example triples:

  • (water, boilingPoint, 100Β°C)
  • (mammals, class, vertebrates)
  • (photosynthesis, requires, sunlight)
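
If you generate or verify triples programmatically, a small typed representation keeps the (subject, relation, object) format consistent. Here's a minimal Python sketch; the `Triple` class and its `render` helper are illustrative conveniences, not part of the CoK paper:

```python
from typing import NamedTuple

class Triple(NamedTuple):
    """One evidence triple: a verifiable atomic fact."""
    subject: str
    relation: str
    object: str

    def render(self) -> str:
        # Match the (subject, relation, object) format used in CoK prompts.
        return f"({self.subject}, {self.relation}, {self.object})"

evidence = [
    Triple("plants", "require", "photosynthesis"),
    Triple("photosynthesis", "requires", "sunlight"),
    Triple("windowless room", "lacks", "sunlight"),
]

for i, triple in enumerate(evidence, start=1):
    print(f"{i}. {triple.render()}")
```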

Explanation Hints (CoK-EH)

Explanation hints provide logical connections between evidence triples to construct valid reasoning chains. They explicitly state how the presented evidence supports the final conclusion.

The CoK Prompting Process

Construct In-Context Exemplars

Choose one or several labeled examples (exemplars) that include:

  • The original query.
  • Annotated evidence triples (either manually created or retrieved from a knowledge base).
  • Explanation hints that tie the triples to the answer.

Here's an example you can use in one-shot CoK prompting:

Example


Question: Determine if a plant can grow in a windowless room.


Evidence Triples:

  1. (plants, require, photosynthesis)
  2. (photosynthesis, requires, sunlight)
  3. (windowless room, lacks, sunlight)

Explanation:


Plants require photosynthesis for growth, which needs sunlight. A windowless room lacks sunlight.


Answer: No, a plant cannot grow in a windowless room due to the absence of sunlight required for photosynthesis.

Generate a Response

The final prompt is constructed by concatenating the in-context exemplars with the test query. This "chain-of-knowledge prompt" guides the model to produce:

  • A set of evidence triples.
  • Corresponding explanation hints.
  • The final answer.

Template


Question: [Input question]


Evidence:

  1. (subject, relation, object)
  2. (subject, relation, object)

...


Explanation: [Logical connection between evidence]


Answer: [Conclusion]


[Your question]
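
Putting the pieces together, the sketch below builds a one-shot chain-of-knowledge prompt from the exemplar above and a new test question, then extracts triples from a model's reply. The `call_llm` function is a hypothetical placeholder for whatever model client you use; only the prompt layout follows the template:

```python
import re

# One-shot exemplar, formatted exactly as in the example above.
EXEMPLAR = """Question: Determine if a plant can grow in a windowless room.

Evidence:
1. (plants, require, photosynthesis)
2. (photosynthesis, requires, sunlight)
3. (windowless room, lacks, sunlight)

Explanation: Plants require photosynthesis for growth, which needs sunlight. A windowless room lacks sunlight.

Answer: No, a plant cannot grow in a windowless room due to the absence of sunlight required for photosynthesis.
"""

def build_cok_prompt(question: str) -> str:
    # Concatenate the in-context exemplar with the test query; ending the
    # prompt at "Evidence:" nudges the model to continue in the same format.
    return f"{EXEMPLAR}\nQuestion: {question}\n\nEvidence:\n"

def parse_triples(response: str) -> list[tuple[str, str, str]]:
    # Pull (subject, relation, object) triples out of the model's reply.
    return re.findall(r"\(([^,()]+),\s*([^,()]+),\s*([^()]+)\)", response)

prompt = build_cok_prompt("Can a candle keep burning in a sealed, airtight jar?")
# response = call_llm(prompt)  # hypothetical: swap in your own LLM client
# print(parse_triples(response))
print(prompt)
```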

Limitations of CoK

  1. Requires access to an accurate knowledge base to ground and verify evidence triples (see the sketch below).

  2. Adds processing overhead for generating and checking triples.

  3. Adds implementation complexity, such as maintaining the structured (subject, relation, object) output format.
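
The first limitation is worth making concrete: a triple is only as trustworthy as the knowledge it is checked against. Below is a toy verification sketch; the in-memory `KNOWLEDGE_BASE` set stands in for a real knowledge base lookup (e.g., Wikidata), which is where the accuracy requirement actually bites:

```python
# Toy stand-in for a real knowledge base; in practice this would be a
# query against a curated KB, not an in-memory set.
KNOWLEDGE_BASE = {
    ("plants", "require", "photosynthesis"),
    ("photosynthesis", "requires", "sunlight"),
}

def verify_triple(subject: str, relation: str, obj: str) -> bool:
    """Return True only if the triple is supported by the knowledge base."""
    return (subject, relation, obj) in KNOWLEDGE_BASE

print(verify_triple("plants", "require", "photosynthesis"))   # True
print(verify_triple("windowless room", "lacks", "sunlight"))  # False: not in our toy KB
```

Any triple the model proposes that fails this check should be treated as unverified rather than silently accepted.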

Applications

CoK is particularly effective in:

  • Technical documentation
  • Scientific reasoning
  • Fact-based analysis
  • Educational content verification

Conclusion

Chain of Knowledge provides a structured methodology for enhancing language model reasoning through explicit knowledge representation and verification. While it requires more computational resources than simpler prompting methods, it offers improved reliability and verifiability in knowledge-intensive applications.

Footnotes

  1. Wang, J., Sun, Q., Li, X., & Gao, M. (2024). Boosting Language Models Reasoning with Chain-of-Knowledge Prompting. https://arxiv.org/abs/2306.06427
