Analogical Prompting
Last updated on November 12, 2024
Analogical Prompting improves the problem-solving ability of LLMs by first asking the model to recall relevant concepts and self-generate related example problems, and only then answer the original question. For each generated example, the model both describes the problem and explains its solution, effectively writing its own few-shot demonstrations. This approach has shown notable gains on mathematical reasoning and code generation tasks, outperforming Zero-Shot and Few-Shot Chain-of-Thought (CoT) prompting.
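As a rough illustration, the sketch below builds a single analogical prompt in the spirit of the self-generated-exemplars idea from Yasunaga et al. (2023). The instruction wording and the example problem are illustrative paraphrases, not the paper's exact template.

```python
# Minimal sketch of an analogical prompt.
# The instruction text below is an assumed paraphrase of the
# technique, not the exact template from Yasunaga et al. (2023).

def build_analogical_prompt(problem: str, num_exemplars: int = 3) -> str:
    """Return one prompt that asks the model to first recall relevant
    example problems (with solutions), then solve the original problem."""
    return (
        "Your task is to solve the problem below.\n\n"
        f"# Problem:\n{problem}\n\n"
        "# Instructions:\n"
        "## Relevant Problems:\n"
        f"Recall {num_exemplars} problems that are relevant to the one above. "
        "For each, describe the problem and explain its solution.\n\n"
        "## Solve the Initial Problem:\n"
        "Using the examples you just generated, solve the original problem "
        "step by step and state the final answer."
    )

if __name__ == "__main__":
    prompt = build_analogical_prompt(
        "What is the area of a square whose diagonal is 10 cm?"
    )
    print(prompt)  # Send this string to any chat-style LLM endpoint.
```

Because the exemplars are produced within the same completion, no hand-curated example bank is needed; the model supplies its own few-shot context before answering.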
Footnotes
- Yasunaga, M., Chen, X., Li, Y., Pasupat, P., Leskovec, J., Liang, P., Chi, E. H., & Zhou, D. (2023). Large language models as analogical reasoners. arXiv preprint arXiv:2310.01714.
- Radford, A., Wu, J., Child, R., Luan, D., Amodei, D., Sutskever, I., & others. (2019). Language models are unsupervised multitask learners. OpenAI Blog, 1(8), 9.
