Contrastive Chain-of-Thought
Contrastive Chain-of-Thought (Contrastive CoT) augments a standard CoT prompt with deliberately incorrect explanations alongside the correct reasoning, showing the LLM not only how to reason but also how not to reason. Chia et al. (2023) report significant improvements over standard CoT on arithmetic reasoning and factual question answering.
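To make this concrete, here is a minimal Python sketch of how such a prompt might be assembled. The demonstration question, both rationales, and the `build_prompt` helper are illustrative assumptions rather than code from the paper; the invalid rationale simply scrambles the reasoning steps, echoing the paper's idea of pairing a valid chain of thought with a flawed one.

```python
# A minimal sketch of a Contrastive CoT prompt for a generic chat-style LLM.
# The demonstration below is hypothetical: one correct rationale and one
# incorrect rationale (the same steps in a scrambled, incoherent order).

CONTRASTIVE_COT_PROMPT = """\
Question: James writes a 3-page letter to 2 different friends twice a week.
How many pages does he write a year?

Correct explanation: He writes each friend 3 * 2 = 6 pages a week.
So he writes 6 * 2 = 12 pages every week.
That means he writes 12 * 52 = 624 pages a year.

Incorrect explanation: He writes each friend 12 * 52 = 624 pages a week.
So he writes 3 * 2 = 6 pages every week.
That means he writes 6 * 2 = 12 pages a year.

Question: {question}
"""

def build_prompt(question: str) -> str:
    """Insert the new question after the contrastive demonstration."""
    return CONTRASTIVE_COT_PROMPT.format(question=question)

if __name__ == "__main__":
    print(build_prompt(
        "A bakery sells 4 trays of 12 muffins each day. "
        "How many muffins does it sell in a week?"
    ))
```

The resulting string can be sent to any chat LLM as a user message. The contrastive pair in the few-shot demonstration is the only thing that distinguishes this from a standard CoT prompt, which would include the correct explanation alone.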
Valeriia Kuka
Valeriia Kuka, Head of Content at Learn Prompting, is passionate about making AI and ML accessible. Valeriia previously grew a 60K+ follower AI-focused social media account, earning reposts from Stanford NLP, Amazon Research, Hugging Face, and AI researchers. She has also worked with AI/ML newsletters and global communities of 100K+ members, and has authored clear, concise explainers and historical articles.
Footnotes
- Chia, Y. K., Chen, G., Tuan, L. A., Poria, S., & Bing, L. (2023). Contrastive Chain-of-Thought Prompting. arXiv preprint arXiv:2311.09277.