
📚 Bibliography

Last updated by Sander Schulhoff on August 7, 2024

This page contains an organized list of all of the papers used by this course, grouped by topic.

To cite this course, use the citation provided in the Github repository:

@software{Schulhoff_Learn_Prompting_2022,
author = {Schulhoff, Sander and Community Contributors},
month = dec,
title = {{Learn Prompting}},
url = {https://github.com/trigaten/Learn_Prompting},
year = {2022}
}
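
For example, a minimal LaTeX document that cites the course with this entry might look like the following (a sketch only; the bibliography file name references.bib is an assumption for illustration):

\documentclass{article}
\begin{document}
% Cite the course via the key from the BibTeX entry above.
Prompting techniques are catalogued in Learn Prompting~\cite{Schulhoff_Learn_Prompting_2022}.
% Assumes the BibTeX entry above has been saved to references.bib.
\bibliographystyle{plain}
\bibliography{references}
\end{document}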

Note: since neither the GPT-3 paper nor the GPT-3 Instruct paper corresponds to the davinci models, I nevertheless attempt to cite them as such.


Agents

MRKL [1]

ReAct [2]

PAL [3]

Auto-GPT [4]

Baby AGI [5]

AgentGPT [6]

Toolformer [7]

Automated

AutoPrompt: Eliciting Knowledge from Language Models with Automatically Generated Prompts [8]

Automatic Prompt Engineer [9]

Soft Prompting [10]

Discretized Soft Prompting (interpreting) [11]

Datasets

SCAN dataset (compositional generalization) [12]

GSM8K [13]

HotpotQA [14]

MultiArith [15]

FEVER dataset [16]

BBQ [17]

Detection

Don't Ban ChatGPT in Schools. Teach With It. [18]

Schools Shouldn't Ban Access to ChatGPT [19]

Certified Neural Network Watermarks with Randomized Smoothing [20]

Watermarking Pre-trained Language Models with Backdooring [21]

GW preparing disciplinary response to AI programs as faculty explore educational use [22]

A Watermark for Large Language Models [23]

DetectGPT: Zero-Shot Machine-Generated Text Detection using Probability Curvature [24]

Image Prompt Engineering

Prompt Engineering for Text-Based Generative Art [25]

The DALLE 2 Prompt Book [26]

With the right prompt, Stable Diffusion 2.0 can do hands. [27]

Meta Analysis

How Generative AI Is Changing Creative Work [28]

How AI Will Change the Workplace [29]

ChatGPT took their jobs. Now they walk dogs and fix air conditioners. [30]

IBM to Pause Hiring for Back-Office Jobs That AI Could Kill [31]

Miscellaneous

The Turking Test: Can Language Models Understand Instructions? [32]

A Taxonomy of Prompt Modifiers for Text-To-Image Generation [33]

DiffusionDB: A Large-scale Prompt Gallery Dataset for Text-to-Image Generative Models [34]

Optimizing Prompts for Text-to-Image Generation [35]

Language Model Cascades [36]

Design Guidelines for Prompt Engineering Text-to-Image Generative Models [37]

Discovering Language Model Behaviors with Model-Written Evaluations [38]

Selective Annotation Makes Language Models Better Few-Shot Learners [39]

Atlas: Few-shot Learning with Retrieval Augmented Language Models [40]

STRUDEL: Structured Dialogue Summarization for Dialogue Comprehension [41]

Prompting Is Programming: A Query Language For Large Language Models [42]

Parallel Context Windows Improve In-Context Learning of Large Language Models [43]

Learning to Perform Complex Tasks through Compositional Fine-Tuning of Language Models [44]

Super-NaturalInstructions: Generalization via Declarative Instructions on 1600+ NLP Tasks [45]

Making Pre-trained Language Models Better Few-shot Learners [46]

How to Prompt? Opportunities and Challenges of Zero- and Few-Shot Learning for Human-AI Interaction in Creative Applications of Generative Models [47]

On Measuring Social Biases in Prompt-Based Multi-Task Learning [48]

Plot Writing From Pre-Trained Language Models [49]

StereoSet: Measuring stereotypical bias in pretrained language models [50]

Survey of Hallucination in Natural Language Generation [51]

Wordcraft: Story Writing With Large Language Models [52]

PainPoints: A Framework for Language-based Detection of Chronic Pain and Expert-Collaborative Text-Summarization [53]

Self-Instruct: Aligning Language Model with Self Generated Instructions [54]

From Images to Textual Prompts: Zero-shot VQA with Frozen Large Language Models [55]

New and improved content moderation tooling [56]

Exploiting Cloze Questions for Few Shot Text Classification and Natural Language Inference [57]

Human-level concept learning through probabilistic program induction [58]

Riffusion - Stable diffusion for real-time music generation [59]

How to use OpenAI’s ChatGPT to write the perfect cold email [60]

Cacti: biology and uses [61]

Are Language Models Worse than Humans at Following Prompts? It’s Complicated [62]

Unleashing Cognitive Synergy in Large Language Models: A Task-Solving Agent through Multi-Persona Self-Collaboration [63]

Prompt Hacking

Machine Generated Text: A Comprehensive Survey of Threat Models and Detection Methods [64]

New jailbreak based on virtual functions - smuggle illegal tokens to the backend. [65]

Exploiting Programmatic Behavior of LLMs: Dual-Use Through Standard Security Attacks [66]

More than you've asked for: A Comprehensive Analysis of Novel Prompt Injection Threats to Application-Integrated Large Language Models [67]

ChatGPT "DAN" (and other "Jailbreaks") [68]

Evaluating the Susceptibility of Pre-Trained Language Models via Handcrafted Adversarial Examples [69]

Prompt injection attacks against GPT-3 [70]

Exploiting GPT-3 prompts with malicious inputs that order the model to ignore its previous directions [71]

History Correction [72]

adversarial-prompts [73]

GPT-3 Prompt Injection Defenses [74]

Talking to machines: prompt engineering & injection [75]

Using GPT-Eliezer against ChatGPT Jailbreaking [76]

Exploring Prompt Injection Attacks [77]

The entire prompt of Microsoft Bing Chat?! (Hi, Sydney.) [78]

Ignore Previous Prompt: Attack Techniques For Language Models [79]

Lessons learned on Language Model Safety and misuse [80]

Toxicity Detection with Generative Prompt-based Inference [81]

ok I saw a few people jailbreaking safeguards openai put on chatgpt so I had to give it a shot myself [82]

Bypass @OpenAI's ChatGPT alignment efforts with this one weird trick [83]

ChatGPT jailbreaking itself [84]

Using "pretend" on #ChatGPT can do some wild stuff. You can kind of get some insight on the future, alternative universe. [85]

I kinda like this one even more! [86]

uh oh [87]

Building A Virtual Machine inside ChatGPT [88]

Reliability

MathPrompter: Mathematical Reasoning using Large Language Models [89]

The Unreliability of Explanations in Few-shot Prompting for Textual Reasoning [90]

Prompting GPT-3 To Be Reliable [91]

On the Advance of Making Language Models Better Reasoners [92]

Ask Me Anything: A simple strategy for prompting language models [93]

Calibrate Before Use: Improving Few-Shot Performance of Language Models [94]

Can large language models reason about medical questions? [95]

Enhancing Self-Consistency and Performance of Pre-Trained Language Models through Natural Language Inference [96]

On Second Thought, Let's Not Think Step by Step! Bias and Toxicity in Zero-Shot Reasoning [97]

Evaluating language models can be tricky [98]

Constitutional AI: Harmlessness from AI Feedback [99]

Surveys

Speech and Language Processing: An Introduction to Natural Language Processing, Computational Linguistics and Speech Recognition [100]

Pre-train, Prompt, and Predict: A Systematic Survey of Prompting Methods in Natural Language Processing [101]

PromptPapers [102]

A Prompt Pattern Catalog to Enhance Prompt Engineering with ChatGPT [103]

Techniques

Chain of Thought Prompting Elicits Reasoning in Large Language Models [104]

Large Language Models are Zero-Shot Reasoners [105]

Self-Consistency Improves Chain of Thought Reasoning in Language Models [106]

What Makes Good In-Context Examples for GPT-3? [107]

Generated Knowledge Prompting for Commonsense Reasoning [108]

Recitation-Augmented Language Models [109]

Rethinking the Role of Demonstrations: What Makes In-Context Learning Work? [110]

Show Your Work: Scratchpads for Intermediate Computation with Language Models [111]

Maieutic Prompting: Logically Consistent Reasoning with Recursive Explanations [112]

STaR: Bootstrapping Reasoning With Reasoning [113]

Least-to-Most Prompting Enables Complex Reasoning in Large Language Models [114]

Reframing Instructional Prompts to GPTk’s Language [115]

Cutting Down on Prompts and Parameters: Simple Few-Shot Learning with Language Models [116]

Role-Play with Large Language Models [117]

CAMEL: Communicative Agents for "Mind" Exploration of Large Scale Language Model Society [118]

TELeR: A General Taxonomy of LLM Prompts for Benchmarking Complex Tasks [119]

Models

Image Models

Stable Diffusion [120]

DALLE [121]

Language Models

ChatGPT [122]

GPT-3 [123]

InstructGPT [124]

GPT-4 [125]

PaLM: Scaling Language Modeling with Pathways [126]

BLOOM: A 176B-Parameter Open-Access Multilingual Language Model [127]

BLOOM+1: Adding Language Support to BLOOM for Zero-Shot Prompting [128]

Jurassic-1: Technical Details and Evaluation (White Paper, AI21 Labs, 2021) [129]

GPT-J-6B: A 6 Billion Parameter Autoregressive Language Model [130]

RoBERTa: A Robustly Optimized BERT Pretraining Approach [131]

Tooling

IDEs

TextBox 2.0: A Text Generation Library with Pre-trained Language Models [132]

Interactive and Visual Prompt Engineering for Ad-hoc Task Adaptation with Large Language Models [133]

PromptSource: An Integrated Development Environment and Repository for Natural Language Prompts [134]

PromptChainer: Chaining Large Language Model Prompts through Visual Programming [135]

OpenPrompt: An Open-source Framework for Prompt-learning [136]

PromptMaker: Prompt-Based Prototyping with Large Language Models [137]

Tools

LangChain [138]

GPT Index [139]

Footnotes

  1. Karpas, E., Abend, O., Belinkov, Y., Lenz, B., Lieber, O., Ratner, N., Shoham, Y., Bata, H., Levine, Y., Leyton-Brown, K., Muhlgay, D., Rozen, N., Schwartz, E., Shachaf, G., Shalev-Shwartz, S., Shashua, A., & Tenenholtz, M. (2022). MRKL Systems: A modular, neuro-symbolic architecture that combines large language models, external knowledge sources and discrete reasoning.

  2. Yao, S., Zhao, J., Yu, D., Du, N., Shafran, I., Narasimhan, K., & Cao, Y. (2022). ReAct: Synergizing Reasoning and Acting in Language Models.

  3. Gao, L., Madaan, A., Zhou, S., Alon, U., Liu, P., Yang, Y., Callan, J., & Neubig, G. (2022). PAL: Program-aided Language Models.

  4. Significant-Gravitas. (2023). Auto-GPT. https://news.agpt.co/

  5. Nakajima, Y. (2023). Baby AGI. https://github.com/yoheinakajima/babyagi

  6. Reworkd.ai. (2023). AgentGPT. https://github.com/reworkd/AgentGPT

  7. Schick, T., Dwivedi-Yu, J., Dessì, R., Raileanu, R., Lomeli, M., Zettlemoyer, L., Cancedda, N., & Scialom, T. (2023). Toolformer: Language Models Can Teach Themselves to Use Tools.

  8. Shin, T., Razeghi, Y., Logan IV, R. L., Wallace, E., & Singh, S. (2020). AutoPrompt: Eliciting Knowledge from Language Models with Automatically Generated Prompts. Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP). https://doi.org/10.18653/v1/2020.emnlp-main.346

  9. Zhou, Y., Muresanu, A. I., Han, Z., Paster, K., Pitis, S., Chan, H., & Ba, J. (2022). Large Language Models Are Human-Level Prompt Engineers.

  10. Lester, B., Al-Rfou, R., & Constant, N. (2021). The Power of Scale for Parameter-Efficient Prompt Tuning.

  11. Khashabi, D., Lyu, S., Min, S., Qin, L., Richardson, K., Welleck, S., Hajishirzi, H., Khot, T., Sabharwal, A., Singh, S., & Choi, Y. (2021). Prompt Waywardness: The Curious Case of Discretized Interpretation of Continuous Prompts.

  12. Lake, B. M., & Baroni, M. (2018). Generalization without Systematicity: On the Compositional Skills of Sequence-to-Sequence Recurrent Networks. https://doi.org/10.48550/arXiv.1711.00350

  13. Cobbe, K., Kosaraju, V., Bavarian, M., Chen, M., Jun, H., Kaiser, L., Plappert, M., Tworek, J., Hilton, J., Nakano, R., Hesse, C., & Schulman, J. (2021). Training Verifiers to Solve Math Word Problems.

  14. Yang, Z., Qi, P., Zhang, S., Bengio, Y., Cohen, W. W., Salakhutdinov, R., & Manning, C. D. (2018). HotpotQA: A Dataset for Diverse, Explainable Multi-hop Question Answering.

  15. Roy, S., & Roth, D. (2015). Solving General Arithmetic Word Problems. Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, 1743–1752. https://doi.org/10.18653/v1/D15-1202

  16. Thorne, J., Vlachos, A., Christodoulopoulos, C., & Mittal, A. (2018). FEVER: a large-scale dataset for Fact Extraction and VERification.

  17. Parrish, A., Chen, A., Nangia, N., Padmakumar, V., Phang, J., Thompson, J., Htut, P. M., & Bowman, S. R. (2021). BBQ: A Hand-Built Bias Benchmark for Question Answering.

  18. Roose, K. (2023). Don’t Ban ChatGPT in Schools. Teach With It. The New York Times. https://www.nytimes.com/2023/01/12/technology/chatgpt-schools-teachers.html

  19. Lipman, J., & Distler, R. (2023). Schools Shouldn’t Ban Access to ChatGPT. https://time.com/6246574/schools-shouldnt-ban-access-to-chatgpt/

  20. Bansal, A., Chiang, P.-Y., Curry, M., Jain, R., Wigington, C., Manjunatha, V., Dickerson, J. P., & Goldstein, T. (2022). Certified Neural Network Watermarks with Randomized Smoothing.

  21. Gu, C., Huang, C., Zheng, X., Chang, K.-W., & Hsieh, C.-J. (2022). Watermarking Pre-trained Language Models with Backdooring.

  22. Noonan, E., & Averill, O. (2023). GW preparing disciplinary response to AI programs as faculty explore educational use. https://www.gwhatchet.com/2023/01/17/gw-preparing-disciplinary-response-to-ai-programs-as-faculty-explore-educational-use/

  23. Kirchenbauer, J., Geiping, J., Wen, Y., Katz, J., Miers, I., & Goldstein, T. (2023). A Watermark for Large Language Models. https://arxiv.org/abs/2301.10226

  24. Mitchell, E., Lee, Y., Khazatsky, A., Manning, C., & Finn, C. (2023). DetectGPT: Zero-Shot Machine-Generated Text Detection using Probability Curvature. https://doi.org/10.48550/arXiv.2301.11305

  25. Oppenlaender, J. (2022). Prompt Engineering for Text-Based Generative Art.

  26. Parsons, G. (2022). The DALLE 2 Prompt Book. https://dallery.gallery/the-dalle-2-prompt-book/

  27. Blake. (2022). With the right prompt, Stable Diffusion 2.0 can do hands. https://www.reddit.com/r/StableDiffusion/comments/z7salo/with_the_right_prompt_stable_diffusion_20_can_do/

  28. Davenport, T. H., & Mittal, N. (2022). How Generative AI Is Changing Creative Work. Harvard Business Review. https://hbr.org/2022/11/how-generative-ai-is-changing-creative-work

  29. Captain, S. (2023). How AI Will Change the Workplace. Wall Street Journal. https://www.wsj.com/articles/how-ai-change-workplace-af2162ee

  30. Verma, P., & Vynck, G. D. (2023). ChatGPT took their jobs. Now they walk dogs and fix air conditioners. Washington Post. https://www.washingtonpost.com/technology/2023/06/02/ai-taking-jobs/

  31. Ford, B. (2023). IBM to Pause Hiring for Back-Office Jobs That AI Could Kill. Bloomberg.com. https://www.bloomberg.com/news/articles/2023-05-01/ibm-to-pause-hiring-for-back-office-jobs-that-ai-could-kill

  32. Efrat, A., & Levy, O. (2020). The Turking Test: Can Language Models Understand Instructions?

  33. Oppenlaender, J. (2022). A Taxonomy of Prompt Modifiers for Text-To-Image Generation.

  34. Wang, Z. J., Montoya, E., Munechika, D., Yang, H., Hoover, B., & Chau, D. H. (2022). DiffusionDB: A Large-scale Prompt Gallery Dataset for Text-to-Image Generative Models.

  35. Hao, Y., Chi, Z., Dong, L., & Wei, F. (2022). Optimizing Prompts for Text-to-Image Generation.

  36. Dohan, D., Xu, W., Lewkowycz, A., Austin, J., Bieber, D., Lopes, R. G., Wu, Y., Michalewski, H., Saurous, R. A., Sohl-dickstein, J., Murphy, K., & Sutton, C. (2022). Language Model Cascades.

  37. Liu, V., & Chilton, L. B. (2022). Design Guidelines for Prompt Engineering Text-to-Image Generative Models. Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems. https://doi.org/10.1145/3491102.3501825

  38. Perez, E., Ringer, S., Lukošiūtė, K., Nguyen, K., Chen, E., Heiner, S., Pettit, C., Olsson, C., Kundu, S., Kadavath, S., Jones, A., Chen, A., Mann, B., Israel, B., Seethor, B., McKinnon, C., Olah, C., Yan, D., Amodei, D., … Kaplan, J. (2022). Discovering Language Model Behaviors with Model-Written Evaluations.

  39. Su, H., Kasai, J., Wu, C. H., Shi, W., Wang, T., Xin, J., Zhang, R., Ostendorf, M., Zettlemoyer, L., Smith, N. A., & Yu, T. (2022). Selective Annotation Makes Language Models Better Few-Shot Learners.

  40. Izacard, G., Lewis, P., Lomeli, M., Hosseini, L., Petroni, F., Schick, T., Dwivedi-Yu, J., Joulin, A., Riedel, S., & Grave, E. (2022). Atlas: Few-shot Learning with Retrieval Augmented Language Models.

  41. Wang, B., Feng, C., Nair, A., Mao, M., Desai, J., Celikyilmaz, A., Li, H., Mehdad, Y., & Radev, D. (2022). STRUDEL: Structured Dialogue Summarization for Dialogue Comprehension.

  42. Beurer-Kellner, L., Fischer, M., & Vechev, M. (2022). Prompting Is Programming: A Query Language For Large Language Models.

  43. Ratner, N., Levine, Y., Belinkov, Y., Ram, O., Abend, O., Karpas, E., Shashua, A., Leyton-Brown, K., & Shoham, Y. (2022). Parallel Context Windows Improve In-Context Learning of Large Language Models.

  44. Bursztyn, V. S., Demeter, D., Downey, D., & Birnbaum, L. (2022). Learning to Perform Complex Tasks through Compositional Fine-Tuning of Language Models.

  45. Wang, Y., Mishra, S., Alipoormolabashi, P., Kordi, Y., Mirzaei, A., Arunkumar, A., Ashok, A., Dhanasekaran, A. S., Naik, A., Stap, D., Pathak, E., Karamanolakis, G., Lai, H. G., Purohit, I., Mondal, I., Anderson, J., Kuznia, K., Doshi, K., Patel, M., … Khashabi, D. (2022). Super-NaturalInstructions: Generalization via Declarative Instructions on 1600+ NLP Tasks.

  46. Gao, T., Fisch, A., & Chen, D. (2021). Making Pre-trained Language Models Better Few-shot Learners. Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers). https://doi.org/10.18653/v1/2021.acl-long.295

  47. Dang, H., Mecke, L., Lehmann, F., Goller, S., & Buschek, D. (2022). How to Prompt? Opportunities and Challenges of Zero- and Few-Shot Learning for Human-AI Interaction in Creative Applications of Generative Models.

  48. Akyürek, A. F., Paik, S., Kocyigit, M. Y., Akbiyik, S., Runyun, Ş. L., & Wijaya, D. (2022). On Measuring Social Biases in Prompt-Based Multi-Task Learning.

  49. Jin, Y., Kadam, V., & Wanvarie, D. (2022). Plot Writing From Pre-Trained Language Models.

  50. Nadeem, M., Bethke, A., & Reddy, S. (2021). StereoSet: Measuring stereotypical bias in pretrained language models. Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), 5356–5371. https://doi.org/10.18653/v1/2021.acl-long.416

  51. Ji, Z., Lee, N., Frieske, R., Yu, T., Su, D., Xu, Y., Ishii, E., Bang, Y., Madotto, A., & Fung, P. (2022). Survey of Hallucination in Natural Language Generation. ACM Computing Surveys. https://doi.org/10.1145/3571730

  52. Yuan, A., Coenen, A., Reif, E., & Ippolito, D. (2022). Wordcraft: Story Writing With Large Language Models. 27th International Conference on Intelligent User Interfaces, 841–852.

  53. Fadnavis, S., Dhurandhar, A., Norel, R., Reinen, J. M., Agurto, C., Secchettin, E., Schweiger, V., Perini, G., & Cecchi, G. (2022). PainPoints: A Framework for Language-based Detection of Chronic Pain and Expert-Collaborative Text-Summarization. arXiv Preprint arXiv:2209.09814.

  54. Wang, Y., Kordi, Y., Mishra, S., Liu, A., Smith, N. A., Khashabi, D., & Hajishirzi, H. (2022). Self-Instruct: Aligning Language Model with Self Generated Instructions.

  55. Guo, J., Li, J., Li, D., Tiong, A. M. H., Li, B., Tao, D., & Hoi, S. C. H. (2022). From Images to Textual Prompts: Zero-shot VQA with Frozen Large Language Models.

  56. Markov, T. (2022). New and improved content moderation tooling. OpenAI. https://openai.com/blog/new-and-improved-content-moderation-tooling/

  57. Schick, T., & Schütze, H. (2020). Exploiting Cloze Questions for Few Shot Text Classification and Natural Language Inference.

  58. Lake, B. M., Salakhutdinov, R., & Tenenbaum, J. B. (2015). Human-level concept learning through probabilistic program induction. Science, 350(6266), 1332–1338.

  59. Forsgren, S., & Martiros, H. (2022). Riffusion - Stable diffusion for real-time music generation. https://riffusion.com/about

  60. Bonta, A. (2022). How to use OpenAI’s ChatGPT to write the perfect cold email. https://www.streak.com/post/how-to-use-ai-to-write-perfect-cold-emails

  61. Nobel, P. S., et al. (2002). Cacti: Biology and Uses. University of California Press.

  62. Webson, A., Loo, A. M., Yu, Q., & Pavlick, E. (2023). Are Language Models Worse than Humans at Following Prompts? It’s Complicated. arXiv:2301.07085v1 [Cs.CL].

  63. Wang, Z., Mao, S., Wu, W., Ge, T., Wei, F., & Ji, H. (2023). Unleashing Cognitive Synergy in Large Language Models: A Task-Solving Agent through Multi-Persona Self-Collaboration.

  64. Crothers, E., Japkowicz, N., & Viktor, H. (2022). Machine Generated Text: A Comprehensive Survey of Threat Models and Detection Methods.

  65. u/Nin_kat. (2023). New jailbreak based on virtual functions - smuggle illegal tokens to the backend. https://www.reddit.com/r/ChatGPT/comments/10urbdj/new_jailbreak_based_on_virtual_functions_smuggle

  66. Kang, D., Li, X., Stoica, I., Guestrin, C., Zaharia, M., & Hashimoto, T. (2023). Exploiting Programmatic Behavior of LLMs: Dual-Use Through Standard Security Attacks.

  67. Greshake, K., Abdelnabi, S., Mishra, S., Endres, C., Holz, T., & Fritz, M. (2023). More than you’ve asked for: A Comprehensive Analysis of Novel Prompt Injection Threats to Application-Integrated Large Language Models.

  68. KIHO, L. (2023). ChatGPT “DAN” (and other “Jailbreaks”). https://github.com/0xk1h0/ChatGPT_DAN

  69. Branch, H. J., Cefalu, J. R., McHugh, J., Hujer, L., Bahl, A., del Castillo Iglesias, D., Heichman, R., & Darwishi, R. (2022). Evaluating the Susceptibility of Pre-Trained Language Models via Handcrafted Adversarial Examples.

  70. Willison, S. (2022). Prompt injection attacks against GPT-3. https://simonwillison.net/2022/Sep/12/prompt-injection/

  71. Goodside, R. (2022). Exploiting GPT-3 prompts with malicious inputs that order the model to ignore its previous directions. https://twitter.com/goodside/status/1569128808308957185

  72. Goodside, R. (2023). History Correction. https://twitter.com/goodside/status/1610110111791325188?s=20&t=ulviQABPXFIIt4ZNZPAUCQ

  73. Chase, H. (2022). adversarial-prompts. https://github.com/hwchase17/adversarial-prompts

  74. Goodside, R. (2022). GPT-3 Prompt Injection Defenses. https://twitter.com/goodside/status/1578278974526222336?s=20&t=3UMZB7ntYhwAk3QLpKMAbw

  75. Mark, C. (2022). Talking to machines: prompt engineering & injection. https://artifact-research.com/artificial-intelligence/talking-to-machines-prompt-engineering-injection/

  76. Armstrong, S., & Gorman, R. (2022). Using GPT-Eliezer against ChatGPT Jailbreaking. https://www.alignmentforum.org/posts/pNcFYZnPdXyL2RfgA/using-gpt-eliezer-against-chatgpt-jailbreaking

  77. Selvi, J. (2022). Exploring Prompt Injection Attacks. https://research.nccgroup.com/2022/12/05/exploring-prompt-injection-attacks/

  78. Liu, K. (2023). The entire prompt of Microsoft Bing Chat?! (Hi, Sydney.). https://twitter.com/kliu128/status/1623472922374574080

  79. Perez, F., & Ribeiro, I. (2022). Ignore Previous Prompt: Attack Techniques For Language Models. arXiv. https://doi.org/10.48550/ARXIV.2211.09527

  80. Brundage, M. (2022). Lessons learned on Language Model Safety and misuse. OpenAI. https://openai.com/blog/language-model-safety-and-misuse/

  81. Wang, Y.-S., & Chang, Y. (2022). Toxicity Detection with Generative Prompt-based Inference. arXiv. https://doi.org/10.48550/ARXIV.2205.12390

  82. Maz, A. (2022). ok I saw a few people jailbreaking safeguards openai put on chatgpt so I had to give it a shot myself. https://twitter.com/alicemazzy/status/1598288519301976064

  83. Piedrafita, M. (2022). Bypass @OpenAI’s ChatGPT alignment efforts with this one weird trick. https://twitter.com/m1guelpf/status/1598203861294252033

  84. Parfait, D. (2022). ChatGPT jailbreaking itself. https://twitter.com/haus_cole/status/1598541468058390534

  85. Soares, N. (2022). Using “pretend” on #ChatGPT can do some wild stuff. You can kind of get some insight on the future, alternative universe. https://twitter.com/NeroSoares/status/1608527467265904643

  86. Moran, N. (2022). I kinda like this one even more! https://twitter.com/NickEMoran/status/1598101579626057728

  87. samczsun. (2022). uh oh. https://twitter.com/samczsun/status/1598679658488217601

  88. Degrave, J. (2022). Building A Virtual Machine inside ChatGPT. Engraved. https://www.engraved.blog/building-a-virtual-machine-inside/

  89. Imani, S., Du, L., & Shrivastava, H. (2023). MathPrompter: Mathematical Reasoning using Large Language Models.

  90. Ye, X., & Durrett, G. (2022). The Unreliability of Explanations in Few-shot Prompting for Textual Reasoning.

  91. Si, C., Gan, Z., Yang, Z., Wang, S., Wang, J., Boyd-Graber, J., & Wang, L. (2022). Prompting GPT-3 To Be Reliable.

  92. Li, Y., Lin, Z., Zhang, S., Fu, Q., Chen, B., Lou, J.-G., & Chen, W. (2022). On the Advance of Making Language Models Better Reasoners.

  93. Arora, S., Narayan, A., Chen, M. F., Orr, L., Guha, N., Bhatia, K., Chami, I., Sala, F., & Ré, C. (2022). Ask Me Anything: A simple strategy for prompting language models.

  94. Zhao, T. Z., Wallace, E., Feng, S., Klein, D., & Singh, S. (2021). Calibrate Before Use: Improving Few-Shot Performance of Language Models.

  95. Liévin, V., Hother, C. E., & Winther, O. (2022). Can large language models reason about medical questions?

  96. Mitchell, E., Noh, J. J., Li, S., Armstrong, W. S., Agarwal, A., Liu, P., Finn, C., & Manning, C. D. (2022). Enhancing Self-Consistency and Performance of Pre-Trained Language Models through Natural Language Inference.

  97. Shaikh, O., Zhang, H., Held, W., Bernstein, M., & Yang, D. (2022). On Second Thought, Let’s Not Think Step by Step! Bias and Toxicity in Zero-Shot Reasoning.

  98. Chase, H. (2022). Evaluating language models can be tricky. https://twitter.com/hwchase17/status/1607428141106008064

  99. Bai, Y., Kadavath, S., Kundu, S., Askell, A., Kernion, J., Jones, A., Chen, A., Goldie, A., Mirhoseini, A., McKinnon, C., Chen, C., Olsson, C., Olah, C., Hernandez, D., Drain, D., Ganguli, D., Li, D., Tran-Johnson, E., Perez, E., … Kaplan, J. (2022). Constitutional AI: Harmlessness from AI Feedback.

  100. Jurafsky, D., & Martin, J. H. (2009). Speech and Language Processing: An Introduction to Natural Language Processing, Computational Linguistics and Speech Recognition. Prentice Hall.

  101. Liu, P., Yuan, W., Fu, J., Jiang, Z., Hayashi, H., & Neubig, G. (2022). Pre-train, Prompt, and Predict: A Systematic Survey of Prompting Methods in Natural Language Processing. ACM Computing Surveys. https://doi.org/10.1145/3560815

  102. Ding, N., & Hu, S. (2022). PromptPapers. https://github.com/thunlp/PromptPapers

  103. White, J., Fu, Q., Hays, S., Sandborn, M., Olea, C., Gilbert, H., Elnashar, A., Spencer-Smith, J., & Schmidt, D. C. (2023). A Prompt Pattern Catalog to Enhance Prompt Engineering with ChatGPT.

  104. Wei, J., Wang, X., Schuurmans, D., Bosma, M., Ichter, B., Xia, F., Chi, E., Le, Q., & Zhou, D. (2022). Chain of Thought Prompting Elicits Reasoning in Large Language Models.

  105. Kojima, T., Gu, S. S., Reid, M., Matsuo, Y., & Iwasawa, Y. (2022). Large Language Models are Zero-Shot Reasoners.

  106. Wang, X., Wei, J., Schuurmans, D., Le, Q., Chi, E., Narang, S., Chowdhery, A., & Zhou, D. (2022). Self-Consistency Improves Chain of Thought Reasoning in Language Models.

  107. Liu, J., Shen, D., Zhang, Y., Dolan, B., Carin, L., & Chen, W. (2022). What Makes Good In-Context Examples for GPT-3? Proceedings of Deep Learning Inside Out (DeeLIO 2022): The 3rd Workshop on Knowledge Extraction and Integration for Deep Learning Architectures. https://doi.org/10.18653/v1/2022.deelio-1.10

  108. Liu, J., Liu, A., Lu, X., Welleck, S., West, P., Bras, R. L., Choi, Y., & Hajishirzi, H. (2021). Generated Knowledge Prompting for Commonsense Reasoning.

  109. Sun, Z., Wang, X., Tay, Y., Yang, Y., & Zhou, D. (2022). Recitation-Augmented Language Models.

  110. Min, S., Lyu, X., Holtzman, A., Artetxe, M., Lewis, M., Hajishirzi, H., & Zettlemoyer, L. (2022). Rethinking the Role of Demonstrations: What Makes In-Context Learning Work?

  111. Nye, M., Andreassen, A. J., Gur-Ari, G., Michalewski, H., Austin, J., Bieber, D., Dohan, D., Lewkowycz, A., Bosma, M., Luan, D., Sutton, C., & Odena, A. (2021). Show Your Work: Scratchpads for Intermediate Computation with Language Models.

  112. Jung, J., Qin, L., Welleck, S., Brahman, F., Bhagavatula, C., Bras, R. L., & Choi, Y. (2022). Maieutic Prompting: Logically Consistent Reasoning with Recursive Explanations.

  113. Zelikman, E., Wu, Y., Mu, J., & Goodman, N. D. (2022). STaR: Bootstrapping Reasoning With Reasoning.

  114. Zhou, D., Schärli, N., Hou, L., Wei, J., Scales, N., Wang, X., Schuurmans, D., Cui, C., Bousquet, O., Le, Q., & Chi, E. (2022). Least-to-Most Prompting Enables Complex Reasoning in Large Language Models.

  115. Mishra, S., Khashabi, D., Baral, C., Choi, Y., & Hajishirzi, H. (2022). Reframing Instructional Prompts to GPTk’s Language. Findings of the Association for Computational Linguistics: ACL 2022. https://doi.org/10.18653/v1/2022.findings-acl.50

  116. Logan IV, R., Balazevic, I., Wallace, E., Petroni, F., Singh, S., & Riedel, S. (2022). Cutting Down on Prompts and Parameters: Simple Few-Shot Learning with Language Models. Findings of the Association for Computational Linguistics: ACL 2022, 2824–2835. https://doi.org/10.18653/v1/2022.findings-acl.222

  117. Shanahan, M., McDonell, K., & Reynolds, L. (2023). Role-Play with Large Language Models.

  118. Li, G., Hammoud, H. A. A. K., Itani, H., Khizbullin, D., & Ghanem, B. (2023). CAMEL: Communicative Agents for “Mind” Exploration of Large Scale Language Model Society.

  119. Santu, S. K. K., & Feng, D. (2023). TELeR: A General Taxonomy of LLM Prompts for Benchmarking Complex Tasks.

  120. Rombach, R., Blattmann, A., Lorenz, D., Esser, P., & Ommer, B. (2021). High-Resolution Image Synthesis with Latent Diffusion Models.

  121. Ramesh, A., Dhariwal, P., Nichol, A., Chu, C., & Chen, M. (2022). Hierarchical Text-Conditional Image Generation with CLIP Latents.

  122. OpenAI. (2022). ChatGPT: Optimizing Language Models for Dialogue. https://openai.com/blog/chatgpt/

  123. Brown, T. B., Mann, B., Ryder, N., Subbiah, M., Kaplan, J., Dhariwal, P., Neelakantan, A., Shyam, P., Sastry, G., Askell, A., Agarwal, S., Herbert-Voss, A., Krueger, G., Henighan, T., Child, R., Ramesh, A., Ziegler, D. M., Wu, J., Winter, C., … Amodei, D. (2020). Language Models are Few-Shot Learners.

  124. Ouyang, L., Wu, J., Jiang, X., Almeida, D., Wainwright, C. L., Mishkin, P., Zhang, C., Agarwal, S., Slama, K., Ray, A., Schulman, J., Hilton, J., Kelton, F., Miller, L., Simens, M., Askell, A., Welinder, P., Christiano, P., Leike, J., & Lowe, R. (2022). Training language models to follow instructions with human feedback.

  125. OpenAI. (2023). GPT-4 Technical Report.

  126. Chowdhery, A., Narang, S., Devlin, J., Bosma, M., Mishra, G., Roberts, A., Barham, P., Chung, H. W., Sutton, C., Gehrmann, S., Schuh, P., Shi, K., Tsvyashchenko, S., Maynez, J., Rao, A., Barnes, P., Tay, Y., Shazeer, N., Prabhakaran, V., … Fiedel, N. (2022). PaLM: Scaling Language Modeling with Pathways.

  127. Scao, T. L., Fan, A., Akiki, C., Pavlick, E., Ilić, S., Hesslow, D., Castagné, R., Luccioni, A. S., Yvon, F., Gallé, M., Tow, J., Rush, A. M., Biderman, S., Webson, A., Ammanamanchi, P. S., Wang, T., Sagot, B., Muennighoff, N., del Moral, A. V., … Wolf, T. (2022). BLOOM: A 176B-Parameter Open-Access Multilingual Language Model.

  128. Yong, Z.-X., Schoelkopf, H., Muennighoff, N., Aji, A. F., Adelani, D. I., Almubarak, K., Bari, M. S., Sutawika, L., Kasai, J., Baruwa, A., Winata, G. I., Biderman, S., Radev, D., & Nikoulina, V. (2022). BLOOM+1: Adding Language Support to BLOOM for Zero-Shot Prompting.

  129. Lieber, O., Sharir, O., Lenz, B., & Shoham, Y. (2021). Jurassic-1: Technical Details and Evaluation. White paper, AI21 Labs. https://uploads-ssl.webflow.com/60fd4503684b466578c0d307/61138924626a6981ee09caf6_jurassic_tech_paper.pdf

  130. Wang, B., & Komatsuzaki, A. (2021). GPT-J-6B: A 6 Billion Parameter Autoregressive Language Model. https://github.com/kingoflolz/mesh-transformer-jax

  131. Liu, Y., Ott, M., Goyal, N., Du, J., Joshi, M., Chen, D., Levy, O., Lewis, M., Zettlemoyer, L., & Stoyanov, V. (2019). RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv Preprint arXiv:1907.11692.

  132. Tang, T., Li, J., Chen, Z., Hu, Y., Yu, Z., Dai, W., Dong, Z., Cheng, X., Wang, Y., Zhao, W., Nie, J., & Wen, J.-R. (2022). TextBox 2.0: A Text Generation Library with Pre-trained Language Models.

  133. Strobelt, H., Webson, A., Sanh, V., Hoover, B., Beyer, J., Pfister, H., & Rush, A. M. (2022). Interactive and Visual Prompt Engineering for Ad-hoc Task Adaptation with Large Language Models. arXiv. https://doi.org/10.48550/ARXIV.2208.07852

  134. Bach, S. H., Sanh, V., Yong, Z.-X., Webson, A., Raffel, C., Nayak, N. V., Sharma, A., Kim, T., Bari, M. S., Fevry, T., Alyafeai, Z., Dey, M., Santilli, A., Sun, Z., Ben-David, S., Xu, C., Chhablani, G., Wang, H., Fries, J. A., … Rush, A. M. (2022). PromptSource: An Integrated Development Environment and Repository for Natural Language Prompts.

  135. Wu, T., Jiang, E., Donsbach, A., Gray, J., Molina, A., Terry, M., & Cai, C. J. (2022). PromptChainer: Chaining Large Language Model Prompts through Visual Programming.

  136. Ding, N., Hu, S., Zhao, W., Chen, Y., Liu, Z., Zheng, H.-T., & Sun, M. (2021). OpenPrompt: An Open-source Framework for Prompt-learning. arXiv Preprint arXiv:2111.01998.

  137. Jiang, E., Olson, K., Toh, E., Molina, A., Donsbach, A., Terry, M., & Cai, C. J. (2022). PromptMaker: Prompt-Based Prototyping with Large Language Models. Extended Abstracts of the 2022 CHI Conference on Human Factors in Computing Systems. https://doi.org/10.1145/3491101.3503564

  138. Chase, H. (2022). LangChain (0.0.66) [Computer software]. https://github.com/hwchase17/langchain

  139. Liu, J. (2022). GPT Index. https://doi.org/10.5281/zenodo.1234

