
Bibliography

Last updated on August 7, 2024 by Sander Schulhoff

This page contains an organized list of all papers used by this course. The papers are organized by topic.

To cite this course, use the citation provided in the Github repository.

@software{Schulhoff_Learn_Prompting_2022,
author = {Schulhoff, Sander and Community Contributors},
month = dec,
title = {{Learn Prompting}},
url = {https://github.com/trigaten/Learn_Prompting},
year = {2022}
}
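For example, once that entry is saved in a `.bib` file, the course can be cited from a LaTeX document as follows (a minimal sketch; the file name `references.bib` is illustrative):

```latex
% references.bib contains the @software{Schulhoff_Learn_Prompting_2022, ...} entry above
\documentclass{article}
\begin{document}
Prompting techniques are covered in the Learn Prompting
course~\cite{Schulhoff_Learn_Prompting_2022}.

\bibliographystyle{plain}
\bibliography{references}
\end{document}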

Note: since neither the GPT-3 paper nor the GPT-3 Instruct paper corresponds to the davinci models, I attempt not to cite them as those models.


Agents

MRKL1

ReAct2

PAL3

Auto-GPT4

Baby AGI5

AgentGPT6

Toolformer7

Automation

AutoPrompt: Eliciting Knowledge from Language Models with Automatically Generated Prompts8

automatic prompt engineer9

Soft Prompting10

discretized soft prompting (interpreting)11

Datasets

SCAN dataset (compositional generalization)12

GSM8K13

hotpotQA14

multiarith15

fever dataset16

bbq17

Detection

Don’t ban chatgpt in schools. teach with it.18

Schools Shouldn’t Ban Access to ChatGPT19

Certified Neural Network Watermarks with Randomized Smoothing20

Watermarking Pre-trained Language Models with Backdooring21

GW preparing disciplinary response to AI programs as faculty explore educational use22

A Watermark for Large Language Models23

DetectGPT: Zero-Shot Machine-Generated Text Detection using Probability Curvature24

Image Prompt Engineering

Prompt Engineering for Text-Based Generative Art25

The DALLE 2 Prompt Book26

With the right prompt, Stable Diffusion 2.0 can do hands.27

Miscellaneous

The Turking Test: Can Language Models Understand Instructions?28

A Taxonomy of Prompt Modifiers for Text-To-Image Generation29

DiffusionDB: A Large-scale Prompt Gallery Dataset for Text-to-Image Generative Models30

Optimizing Prompts for Text-to-Image Generation31

Language Model Cascades32

Design Guidelines for Prompt Engineering Text-to-Image Generative Models33

Discovering Language Model Behaviors with Model-Written Evaluations34

Selective Annotation Makes Language Models Better Few-Shot Learners35

Atlas: Few-shot Learning with Retrieval Augmented Language Models36

STRUDEL: Structured Dialogue Summarization for Dialogue Comprehension37

Prompting Is Programming: A Query Language For Large Language Models38

Parallel Context Windows Improve In-Context Learning of Large Language Models39

Learning to Perform Complex Tasks through Compositional Fine-Tuning of Language Models40

Super-NaturalInstructions: Generalization via Declarative Instructions on 1600+ NLP Tasks41

Making Pre-trained Language Models Better Few-shot Learners42

How to Prompt? Opportunities and Challenges of Zero- and Few-Shot Learning for Human-AI Interaction in Creative Applications of Generative Models43

On Measuring Social Biases in Prompt-Based Multi-Task Learning44

Plot Writing From Pre-Trained Language Models45

StereoSet: Measuring stereotypical bias in pretrained language models46

Survey of Hallucination in Natural Language Generation47

Wordcraft: Story Writing With Large Language Models48

PainPoints: A Framework for Language-based Detection of Chronic Pain and Expert-Collaborative Text-Summarization49

Self-Instruct: Aligning Language Model with Self Generated Instructions50

From Images to Textual Prompts: Zero-shot VQA with Frozen Large Language Models51

New and improved content moderation tooling52

No title53

Exploiting Cloze Questions for Few Shot Text Classification and Natural Language Inference54

Human-level concept learning through probabilistic program induction55

Riffusion - Stable diffusion for real-time music generation56

How to use OpenAI’s ChatGPT to write the perfect cold email57

Cacti: biology and uses58

Are Language Models Worse than Humans at Following Prompts? It’s Complicated59

Unleashing Cognitive Synergy in Large Language Models: A Task-Solving Agent through Multi-Persona Self-Collaboration60

Prompt Hacking

Machine Generated Text: A Comprehensive Survey of Threat Models and Detection Methods61

New jailbreak based on virtual functions - smuggle illegal tokens to the backend.62

Exploiting Programmatic Behavior of LLMs: Dual-Use Through Standard Security Attacks63

More than you've asked for: A Comprehensive Analysis of Novel Prompt Injection Threats to Application-Integrated Large Language Models64

ChatGPT "DAN" (and other "Jailbreaks")65

Evaluating the Susceptibility of Pre-Trained Language Models via Handcrafted Adversarial Examples66

Prompt injection attacks against GPT-367

Exploiting GPT-3 prompts with malicious inputs that order the model to ignore its previous directions68

History Correction69

adversarial-prompts70

GPT-3 Prompt Injection Defenses71

Talking to machines: prompt engineering & injection72

Using GPT-Eliezer against ChatGPT Jailbreaking73

Exploring Prompt Injection Attacks74

The entire prompt of Microsoft Bing Chat?! (Hi, Sydney.)75

Ignore Previous Prompt: Attack Techniques For Language Models76

Lessons learned on Language Model Safety and misuse77

Toxicity Detection with Generative Prompt-based Inference78

ok I saw a few people jailbreaking safeguards openai put on chatgpt so I had to give it a shot myself79

Bypass @OpenAI’s ChatGPT alignment efforts with this one weird trick80

ChatGPT jailbreaking itself81

Using “pretend” on #ChatGPT can do some wild stuff. You can kind of get some insight on the future, alternative universe.82

I kinda like this one even more!83

uh oh84

Building A Virtual Machine inside ChatGPT85

Reliability

MathPrompter: Mathematical Reasoning using Large Language Models86

The Unreliability of Explanations in Few-shot Prompting for Textual Reasoning87

Prompting GPT-3 To Be Reliable88

On the Advance of Making Language Models Better Reasoners89

Ask Me Anything: A simple strategy for prompting language models90

Calibrate Before Use: Improving Few-Shot Performance of Language Models91

Can large language models reason about medical questions?92

Enhancing Self-Consistency and Performance of Pre-Trained Language Models through Natural Language Inference93

On Second Thought, Let’s Not Think Step by Step! Bias and Toxicity in Zero-Shot Reasoning94

Evaluating language models can be tricky95

Survey

Speech and Language Processing: An Introduction to Natural Language Processing, Computational Linguistics and Speech Recognition96

Pre-train, Prompt, and Predict: A Systematic Survey of Prompting Methods in Natural Language Processing97

PromptPapers98

A Prompt Pattern Catalog to Enhance Prompt Engineering with ChatGPT99

Techniques

Chain of Thought Prompting Elicits Reasoning in Large Language Models100

Large Language Models are Zero-Shot Reasoners101

Self-Consistency Improves Chain of Thought Reasoning in Language Models102

What Makes Good In-Context Examples for GPT-3?103

Generated Knowledge Prompting for Commonsense Reasoning104

Recitation-Augmented Language Models105

Rethinking the Role of Demonstrations: What Makes In-Context Learning Work?106

Show Your Work: Scratchpads for Intermediate Computation with Language Models107

Maieutic Prompting: Logically Consistent Reasoning with Recursive Explanations108

STaR: Bootstrapping Reasoning With Reasoning109

Least-to-Most Prompting Enables Complex Reasoning in Large Language Models110

Reframing Instructional Prompts to GPTk’s Language111

Cutting Down on Prompts and Parameters: Simple Few-Shot Learning with Language Models112

Role-Play with Large Language Models113

CAMEL: Communicative Agents for “Mind” Exploration of Large Scale Language Model Society114

TELeR: A General Taxonomy of LLM Prompts for Benchmarking Complex Tasks115

Models

Image Models

Stable Diffusion116

DALLE117

Language Models

ChatGPT118

GPT-3119

Instruct GPT120

GPT-4121

PaLM: Scaling Language Modeling with Pathways122

BLOOM: A 176B-Parameter Open-Access Multilingual Language Model123

BLOOM+1: Adding Language Support to BLOOM for Zero-Shot Prompting124

Jurassic-1: Technical Details and Evaluation, White paper, AI21 Labs, 2021125

GPT-J-6B: A 6 Billion Parameter Autoregressive Language Model126

Roberta: A robustly optimized bert pretraining approach127

Tooling

IDEs

TextBox 2.0: A Text Generation Library with Pre-trained Language Models128

Interactive and Visual Prompt Engineering for Ad-hoc Task Adaptation with Large Language Models129

PromptSource: An Integrated Development Environment and Repository for Natural Language Prompts130

PromptChainer: Chaining Large Language Model Prompts through Visual Programming131

OpenPrompt: An Open-source Framework for Prompt-learning132

PromptMaker: Prompt-Based Prototyping with Large Language Models133

Tools

LangChain134

GPT Index135

Footnotes

  1. Karpas, E., Abend, O., Belinkov, Y., Lenz, B., Lieber, O., Ratner, N., Shoham, Y., Bata, H., Levine, Y., Leyton-Brown, K., Muhlgay, D., Rozen, N., Schwartz, E., Shachaf, G., Shalev-Shwartz, S., Shashua, A., & Tenenholtz, M. (2022).

  2. Yao, S., Zhao, J., Yu, D., Du, N., Shafran, I., Narasimhan, K., & Cao, Y. (2022).

  3. Gao, L., Madaan, A., Zhou, S., Alon, U., Liu, P., Yang, Y., Callan, J., & Neubig, G. (2022).

  4. Significant-Gravitas. (2023). https://news.agpt.co/

  5. Nakajima, Y. (2023). https://github.com/yoheinakajima/babyagi

  6. Reworkd.ai. (2023). https://github.com/reworkd/AgentGPT

  7. Schick, T., Dwivedi-Yu, J., Dessì, R., Raileanu, R., Lomeli, M., Zettlemoyer, L., Cancedda, N., & Scialom, T. (2023).

  8. Shin, T., Razeghi, Y., Logan IV, R. L., Wallace, E., & Singh, S. (2020). Autoprompt: Eliciting knowledge from language models with automatically generated prompts. arXiv Preprint arXiv:2010.15980.

  9. Zhou, Y., Muresanu, A. I., Han, Z., Paster, K., Pitis, S., Chan, H., & Ba, J. (2022). Large Language Models Are Human-Level Prompt Engineers.

  10. Lester, B., Al-Rfou, R., & Constant, N. (2021). The Power of Scale for Parameter-Efficient Prompt Tuning.

  11. Khashabi, D., Lyu, S., Min, S., Qin, L., Richardson, K., Welleck, S., Hajishirzi, H., Khot, T., Sabharwal, A., Singh, S., & Choi, Y. (2021). Prompt Waywardness: The Curious Case of Discretized Interpretation of Continuous Prompts.

  12. Lake, B. M., & Baroni, M. (2018). Generalization without Systematicity: On the Compositional Skills of Sequence-to-Sequence Recurrent Networks. https://doi.org/10.48550/arXiv.1711.00350

  13. Cobbe, K., Kosaraju, V., Bavarian, M., Chen, M., Jun, H., Kaiser, L., Plappert, M., Tworek, J., Hilton, J., Nakano, R., Hesse, C., & Schulman, J. (2021). Training Verifiers to Solve Math Word Problems.

  14. Yang, Z., Qi, P., Zhang, S., Bengio, Y., Cohen, W. W., Salakhutdinov, R., & Manning, C. D. (2018). HotpotQA: A Dataset for Diverse, Explainable Multi-hop Question Answering.

  15. Roy, S., & Roth, D. (2015). Solving General Arithmetic Word Problems. Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, 1743–1752. https://doi.org/10.18653/v1/D15-1202

  16. Thorne, J., Vlachos, A., Christodoulopoulos, C., & Mittal, A. (2018). FEVER: a large-scale dataset for Fact Extraction and VERification.

  17. Parrish, A., Chen, A., Nangia, N., Padmakumar, V., Phang, J., Thompson, J., Htut, P. M., & Bowman, S. R. (2021). BBQ: A Hand-Built Bias Benchmark for Question Answering.

  18. Roose, K. (2022). Don’t ban chatgpt in schools. teach with it. https://www.nytimes.com/2023/01/12/technology/chatgpt-schools-teachers.html

  19. Lipman, J., & Distler, R. (2023). Schools Shouldn’t Ban Access to ChatGPT. https://time.com/6246574/schools-shouldnt-ban-access-to-chatgpt/

  20. Bansal, A., Chiang, P.-y., Curry, M., Jain, R., Wigington, C., Manjunatha, V., Dickerson, J. P., & Goldstein, T. (2022). Certified Neural Network Watermarks with Randomized Smoothing.

  21. Gu, C., Huang, C., Zheng, X., Chang, K.-W., & Hsieh, C.-J. (2022). Watermarking Pre-trained Language Models with Backdooring.

  22. Noonan, E., & Averill, O. (2023). GW preparing disciplinary response to AI programs as faculty explore educational use. https://www.gwhatchet.com/2023/01/17/gw-preparing-disciplinary-response-to-ai-programs-as-faculty-explore-educational-use/

  23. Kirchenbauer, J., Geiping, J., Wen, Y., Katz, J., Miers, I., & Goldstein, T. (2023). A Watermark for Large Language Models. https://arxiv.org/abs/2301.10226

  24. Mitchell, E., Lee, Y., Khazatsky, A., Manning, C., & Finn, C. (2023). DetectGPT: Zero-Shot Machine-Generated Text Detection using Probability Curvature. https://doi.org/10.48550/arXiv.2301.11305

  25. Oppenlaender, J. (2022). Prompt Engineering for Text-Based Generative Art.

  26. Parsons, G. (2022). The DALLE 2 Prompt Book. https://dallery.gallery/the-dalle-2-prompt-book/

  27. Blake. (2022). With the right prompt, Stable Diffusion 2.0 can do hands. https://www.reddit.com/r/StableDiffusion/comments/z7salo/with_the_right_prompt_stable_diffusion_20_can_do/

  28. Efrat, A., & Levy, O. (2020). The Turking Test: Can Language Models Understand Instructions?

  29. Oppenlaender, J. (2022). A Taxonomy of Prompt Modifiers for Text-To-Image Generation.

  30. Wang, Z. J., Montoya, E., Munechika, D., Yang, H., Hoover, B., & Chau, D. H. (2022). DiffusionDB: A Large-scale Prompt Gallery Dataset for Text-to-Image Generative Models.

  31. Hao, Y., Chi, Z., Dong, L., & Wei, F. (2022). Optimizing Prompts for Text-to-Image Generation.

  32. Dohan, D., Xu, W., Lewkowycz, A., Austin, J., Bieber, D., Lopes, R. G., Wu, Y., Michalewski, H., Saurous, R. A., Sohl-dickstein, J., Murphy, K., & Sutton, C. (2022). Language Model Cascades.

  33. Liu, V., & Chilton, L. B. (2022). Design Guidelines for Prompt Engineering Text-to-Image Generative Models. Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems. https://doi.org/10.1145/3491102.3501825

  34. Perez, E., Ringer, S., Lukošiūtė, K., Nguyen, K., Chen, E., Heiner, S., Pettit, C., Olsson, C., Kundu, S., Kadavath, S., Jones, A., Chen, A., Mann, B., Israel, B., Seethor, B., McKinnon, C., Olah, C., Yan, D., Amodei, D., … Kaplan, J. (2022). Discovering Language Model Behaviors with Model-Written Evaluations.

  35. Su, H., Kasai, J., Wu, C. H., Shi, W., Wang, T., Xin, J., Zhang, R., Ostendorf, M., Zettlemoyer, L., Smith, N. A., & Yu, T. (2022). Selective Annotation Makes Language Models Better Few-Shot Learners.

  36. Izacard, G., Lewis, P., Lomeli, M., Hosseini, L., Petroni, F., Schick, T., Dwivedi-Yu, J., Joulin, A., Riedel, S., & Grave, E. (2022). Atlas: Few-shot Learning with Retrieval Augmented Language Models.

  37. Wang, B., Feng, C., Nair, A., Mao, M., Desai, J., Celikyilmaz, A., Li, H., Mehdad, Y., & Radev, D. (2022). STRUDEL: Structured Dialogue Summarization for Dialogue Comprehension.

  38. Beurer-Kellner, L., Fischer, M., & Vechev, M. (2022). Prompting Is Programming: A Query Language For Large Language Models.

  39. Ratner, N., Levine, Y., Belinkov, Y., Ram, O., Abend, O., Karpas, E., Shashua, A., Leyton-Brown, K., & Shoham, Y. (2022). Parallel Context Windows Improve In-Context Learning of Large Language Models.

  40. Bursztyn, V. S., Demeter, D., Downey, D., & Birnbaum, L. (2022). Learning to Perform Complex Tasks through Compositional Fine-Tuning of Language Models.

  41. Wang, Y., Mishra, S., Alipoormolabashi, P., Kordi, Y., Mirzaei, A., Arunkumar, A., Ashok, A., Dhanasekaran, A. S., Naik, A., Stap, D., Pathak, E., Karamanolakis, G., Lai, H. G., Purohit, I., Mondal, I., Anderson, J., Kuznia, K., Doshi, K., Patel, M., … Khashabi, D. (2022). Super-NaturalInstructions: Generalization via Declarative Instructions on 1600+ NLP Tasks.

  42. Gao, T., Fisch, A., & Chen, D. (2021). Making Pre-trained Language Models Better Few-shot Learners. Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers). https://doi.org/10.18653/v1/2021.acl-long.295

  43. Dang, H., Mecke, L., Lehmann, F., Goller, S., & Buschek, D. (2022). How to Prompt? Opportunities and Challenges of Zero- and Few-Shot Learning for Human-AI Interaction in Creative Applications of Generative Models.

  44. Akyürek, A. F., Paik, S., Kocyigit, M. Y., Akbiyik, S., Runyun, Ş. L., & Wijaya, D. (2022). On Measuring Social Biases in Prompt-Based Multi-Task Learning.

  45. Jin, Y., Kadam, V., & Wanvarie, D. (2022). Plot Writing From Pre-Trained Language Models.

  46. Nadeem, M., Bethke, A., & Reddy, S. (2021). StereoSet: Measuring stereotypical bias in pretrained language models. Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), 5356–5371. https://doi.org/10.18653/v1/2021.acl-long.416

  47. Ji, Z., Lee, N., Frieske, R., Yu, T., Su, D., Xu, Y., Ishii, E., Bang, Y., Madotto, A., & Fung, P. (2022). Survey of Hallucination in Natural Language Generation. ACM Computing Surveys. https://doi.org/10.1145/3571730

  48. Yuan, A., Coenen, A., Reif, E., & Ippolito, D. (2022). Wordcraft: Story Writing With Large Language Models. 27th International Conference on Intelligent User Interfaces, 841–852.

  49. Fadnavis, S., Dhurandhar, A., Norel, R., Reinen, J. M., Agurto, C., Secchettin, E., Schweiger, V., Perini, G., & Cecchi, G. (2022). PainPoints: A Framework for Language-based Detection of Chronic Pain and Expert-Collaborative Text-Summarization. arXiv Preprint arXiv:2209.09814.

  50. Wang, Y., Kordi, Y., Mishra, S., Liu, A., Smith, N. A., Khashabi, D., & Hajishirzi, H. (2022). Self-Instruct: Aligning Language Model with Self Generated Instructions.

  51. Guo, J., Li, J., Li, D., Tiong, A. M. H., Li, B., Tao, D., & Hoi, S. C. H. (2022). From Images to Textual Prompts: Zero-shot VQA with Frozen Large Language Models.

  52. Markov, T. (2022). New and improved content moderation tooling. In OpenAI. OpenAI. https://openai.com/blog/new-and-improved-content-moderation-tooling/

  53. OpenAI. (2022). https://beta.openai.com/docs/guides/moderation

  54. Schick, T., & Schütze, H. (2020). Exploiting Cloze Questions for Few Shot Text Classification and Natural Language Inference.

  55. Lake, B. M., Salakhutdinov, R., & Tenenbaum, J. B. (2015). Human-level concept learning through probabilistic program induction. Science, 350(6266), 1332–1338.

  56. Forsgren, S., & Martiros, H. (2022). Riffusion - Stable diffusion for real-time music generation. https://riffusion.com/about

  57. Bonta, A. (2022). How to use OpenAI’s ChatGPT to write the perfect cold email. https://www.streak.com/post/how-to-use-ai-to-write-perfect-cold-emails

  58. Nobel, P. S., & others. (2002). Cacti: biology and uses. Univ of California Press.

  59. Webson, A., Loo, A. M., Yu, Q., & Pavlick, E. (2023). Are Language Models Worse than Humans at Following Prompts? It’s Complicated. arXiv:2301.07085v1 [Cs.CL].

  60. Wang, Z., Mao, S., Wu, W., Ge, T., Wei, F., & Ji, H. (2023). Unleashing Cognitive Synergy in Large Language Models: A Task-Solving Agent through Multi-Persona Self-Collaboration.

  61. Crothers, E., Japkowicz, N., & Viktor, H. (2022). Machine Generated Text: A Comprehensive Survey of Threat Models and Detection Methods.

  62. u/Nin_kat. (2023). New jailbreak based on virtual functions - smuggle illegal tokens to the backend. https://www.reddit.com/r/ChatGPT/comments/10urbdj/new_jailbreak_based_on_virtual_functions_smuggle

  63. Kang, D., Li, X., Stoica, I., Guestrin, C., Zaharia, M., & Hashimoto, T. (2023). Exploiting Programmatic Behavior of LLMs: Dual-Use Through Standard Security Attacks.

  64. Greshake, K., Abdelnabi, S., Mishra, S., Endres, C., Holz, T., & Fritz, M. (2023). More than you’ve asked for: A Comprehensive Analysis of Novel Prompt Injection Threats to Application-Integrated Large Language Models.

  65. KIHO, L. (2023). ChatGPT “DAN” (and other “Jailbreaks”). https://github.com/0xk1h0/ChatGPT_DAN

  66. Branch, H. J., Cefalu, J. R., McHugh, J., Hujer, L., Bahl, A., del Castillo Iglesias, D., Heichman, R., & Darwishi, R. (2022). Evaluating the Susceptibility of Pre-Trained Language Models via Handcrafted Adversarial Examples.

  67. Willison, S. (2022). Prompt injection attacks against GPT-3. https://simonwillison.net/2022/Sep/12/prompt-injection/

  68. Goodside, R. (2022). Exploiting GPT-3 prompts with malicious inputs that order the model to ignore its previous directions. https://twitter.com/goodside/status/1569128808308957185

  69. Goodside, R. (2023). History Correction. https://twitter.com/goodside/status/1610110111791325188?s=20&t=ulviQABPXFIIt4ZNZPAUCQ

  70. Chase, H. (2022). adversarial-prompts. https://github.com/hwchase17/adversarial-prompts

  71. Goodside, R. (2022). GPT-3 Prompt Injection Defenses. https://twitter.com/goodside/status/1578278974526222336?s=20&t=3UMZB7ntYhwAk3QLpKMAbw

  72. Mark, C. (2022). Talking to machines: prompt engineering & injection. https://artifact-research.com/artificial-intelligence/talking-to-machines-prompt-engineering-injection/

  73. Stuart Armstrong, R. G. (2022). Using GPT-Eliezer against ChatGPT Jailbreaking. https://www.alignmentforum.org/posts/pNcFYZnPdXyL2RfgA/using-gpt-eliezer-against-chatgpt-jailbreaking

  74. Selvi, J. (2022). Exploring Prompt Injection Attacks. https://research.nccgroup.com/2022/12/05/exploring-prompt-injection-attacks/

  75. Liu, K. (2023). The entire prompt of Microsoft Bing Chat?! (Hi, Sydney.). https://twitter.com/kliu128/status/1623472922374574080

  76. Perez, F., & Ribeiro, I. (2022). Ignore Previous Prompt: Attack Techniques For Language Models. arXiv. https://doi.org/10.48550/ARXIV.2211.09527

  77. Brundage, M. (2022). Lessons learned on Language Model Safety and misuse. In OpenAI. OpenAI. https://openai.com/blog/language-model-safety-and-misuse/

  78. Wang, Y.-S., & Chang, Y. (2022). Toxicity Detection with Generative Prompt-based Inference. arXiv. https://doi.org/10.48550/ARXIV.2205.12390

  79. Maz, A. (2022). ok I saw a few people jailbreaking safeguards openai put on chatgpt so I had to give it a shot myself. https://twitter.com/alicemazzy/status/1598288519301976064

  80. Piedrafita, M. (2022). Bypass @OpenAI’s ChatGPT alignment efforts with this one weird trick. https://twitter.com/m1guelpf/status/1598203861294252033

  81. Parfait, D. (2022). ChatGPT jailbreaking itself. https://twitter.com/haus_cole/status/1598541468058390534

  82. Soares, N. (2022). Using “pretend” on #ChatGPT can do some wild stuff. You can kind of get some insight on the future, alternative universe. https://twitter.com/NeroSoares/status/1608527467265904643

  83. Moran, N. (2022). I kinda like this one even more! https://twitter.com/NickEMoran/status/1598101579626057728

  84. samczsun. (2022). uh oh. https://twitter.com/samczsun/status/1598679658488217601

  85. Degrave, J. (2022). Building A Virtual Machine inside ChatGPT. Engraved. https://www.engraved.blog/building-a-virtual-machine-inside/

  86. Imani, S., Du, L., & Shrivastava, H. (2023). MathPrompter: Mathematical Reasoning using Large Language Models.

  87. Ye, X., & Durrett, G. (2022). The Unreliability of Explanations in Few-shot Prompting for Textual Reasoning.

  88. Si, C., Gan, Z., Yang, Z., Wang, S., Wang, J., Boyd-Graber, J., & Wang, L. (2022). Prompting GPT-3 To Be Reliable.

  89. Li, Y., Lin, Z., Zhang, S., Fu, Q., Chen, B., Lou, J.-G., & Chen, W. (2022). On the Advance of Making Language Models Better Reasoners.

  90. Arora, S., Narayan, A., Chen, M. F., Orr, L., Guha, N., Bhatia, K., Chami, I., Sala, F., & Ré, C. (2022). Ask Me Anything: A simple strategy for prompting language models.

  91. Zhao, T. Z., Wallace, E., Feng, S., Klein, D., & Singh, S. (2021). Calibrate Before Use: Improving Few-Shot Performance of Language Models.

  92. Liévin, V., Hother, C. E., & Winther, O. (2022). Can large language models reason about medical questions?

  93. Mitchell, E., Noh, J. J., Li, S., Armstrong, W. S., Agarwal, A., Liu, P., Finn, C., & Manning, C. D. (2022). Enhancing Self-Consistency and Performance of Pre-Trained Language Models through Natural Language Inference.

  94. Shaikh, O., Zhang, H., Held, W., Bernstein, M., & Yang, D. (2022). On Second Thought, Let’s Not Think Step by Step! Bias and Toxicity in Zero-Shot Reasoning.

  95. Chase, H. (2022). Evaluating language models can be tricky. https://twitter.com/hwchase17/status/1607428141106008064

  96. Jurafsky, D., & Martin, J. H. (2009). Speech and Language Processing: An Introduction to Natural Language Processing, Computational Linguistics and Speech Recognition. Prentice Hall.

  97. Liu, P., Yuan, W., Fu, J., Jiang, Z., Hayashi, H., & Neubig, G. (2022). Pre-train, Prompt, and Predict: A Systematic Survey of Prompting Methods in Natural Language Processing. ACM Computing Surveys. https://doi.org/10.1145/3560815

  98. Ding, N., & Hu, S. (2022). PromptPapers. https://github.com/thunlp/PromptPapers

  99. White, J., Fu, Q., Hays, S., Sandborn, M., Olea, C., Gilbert, H., Elnashar, A., Spencer-Smith, J., & Schmidt, D. C. (2023). A Prompt Pattern Catalog to Enhance Prompt Engineering with ChatGPT.

  100. Wei, J., Wang, X., Schuurmans, D., Bosma, M., Ichter, B., Xia, F., Chi, E., Le, Q., & Zhou, D. (2022). Chain of Thought Prompting Elicits Reasoning in Large Language Models.

  101. Kojima, T., Gu, S. S., Reid, M., Matsuo, Y., & Iwasawa, Y. (2022). Large Language Models are Zero-Shot Reasoners.

  102. Wang, X., Wei, J., Schuurmans, D., Le, Q., Chi, E., Narang, S., Chowdhery, A., & Zhou, D. (2022). Self-Consistency Improves Chain of Thought Reasoning in Language Models.

  103. Liu, J., Shen, D., Zhang, Y., Dolan, B., Carin, L., & Chen, W. (2022). What Makes Good In-Context Examples for GPT-3? Proceedings of Deep Learning Inside Out (DeeLIO 2022): The 3rd Workshop on Knowledge Extraction and Integration for Deep Learning Architectures. https://doi.org/10.18653/v1/2022.deelio-1.10

  104. Liu, J., Liu, A., Lu, X., Welleck, S., West, P., Bras, R. L., Choi, Y., & Hajishirzi, H. (2021). Generated Knowledge Prompting for Commonsense Reasoning.

  105. Sun, Z., Wang, X., Tay, Y., Yang, Y., & Zhou, D. (2022). Recitation-Augmented Language Models.

  106. Min, S., Lyu, X., Holtzman, A., Artetxe, M., Lewis, M., Hajishirzi, H., & Zettlemoyer, L. (2022). Rethinking the Role of Demonstrations: What Makes In-Context Learning Work?

  107. Nye, M., Andreassen, A. J., Gur-Ari, G., Michalewski, H., Austin, J., Bieber, D., Dohan, D., Lewkowycz, A., Bosma, M., Luan, D., Sutton, C., & Odena, A. (2021). Show Your Work: Scratchpads for Intermediate Computation with Language Models.

  108. Jung, J., Qin, L., Welleck, S., Brahman, F., Bhagavatula, C., Bras, R. L., & Choi, Y. (2022). Maieutic Prompting: Logically Consistent Reasoning with Recursive Explanations.

  109. Zelikman, E., Wu, Y., Mu, J., & Goodman, N. D. (2022). STaR: Bootstrapping Reasoning With Reasoning.

  110. Zhou, D., Schärli, N., Hou, L., Wei, J., Scales, N., Wang, X., Schuurmans, D., Cui, C., Bousquet, O., Le, Q., & Chi, E. (2022). Least-to-Most Prompting Enables Complex Reasoning in Large Language Models.

  111. Mishra, S., Khashabi, D., Baral, C., Choi, Y., & Hajishirzi, H. (2022). Reframing Instructional Prompts to GPTk’s Language. Findings of the Association for Computational Linguistics: ACL 2022. https://doi.org/10.18653/v1/2022.findings-acl.50

  112. Logan IV, R., Balazevic, I., Wallace, E., Petroni, F., Singh, S., & Riedel, S. (2022). Cutting Down on Prompts and Parameters: Simple Few-Shot Learning with Language Models. Findings of the Association for Computational Linguistics: ACL 2022, 2824–2835. https://doi.org/10.18653/v1/2022.findings-acl.222

  113. Shanahan, M., McDonell, K., & Reynolds, L. (2023). Role-Play with Large Language Models.

  114. Li, G., Hammoud, H. A. A. K., Itani, H., Khizbullin, D., & Ghanem, B. (2023). CAMEL: Communicative Agents for “Mind” Exploration of Large Scale Language Model Society.

  115. Santu, S. K. K., & Feng, D. (2023). TELeR: A General Taxonomy of LLM Prompts for Benchmarking Complex Tasks.

  116. Rombach, R., Blattmann, A., Lorenz, D., Esser, P., & Ommer, B. (2021). High-Resolution Image Synthesis with Latent Diffusion Models.

  117. Ramesh, A., Dhariwal, P., Nichol, A., Chu, C., & Chen, M. (2022). Hierarchical Text-Conditional Image Generation with CLIP Latents.

  118. OpenAI. (2022). ChatGPT: Optimizing Language Models for Dialogue. https://openai.com/blog/chatgpt/

  119. Brown, T. B. (2020). Language models are few-shot learners. arXiv Preprint arXiv:2005.14165.

  120. Ouyang, L., Wu, J., Jiang, X., Almeida, D., Wainwright, C. L., Mishkin, P., Zhang, C., Agarwal, S., Slama, K., Ray, A., Schulman, J., Hilton, J., Kelton, F., Miller, L., Simens, M., Askell, A., Welinder, P., Christiano, P., Leike, J., & Lowe, R. (2022). Training language models to follow instructions with human feedback.

  121. OpenAI. (2023). GPT-4 Technical Report.

  122. Chowdhery, A., Narang, S., Devlin, J., Bosma, M., Mishra, G., Roberts, A., Barham, P., Chung, H. W., Sutton, C., Gehrmann, S., Schuh, P., Shi, K., Tsvyashchenko, S., Maynez, J., Rao, A., Barnes, P., Tay, Y., Shazeer, N., Prabhakaran, V., … Fiedel, N. (2022). PaLM: Scaling Language Modeling with Pathways.

  123. Scao, T. L., Fan, A., Akiki, C., Pavlick, E., Ilić, S., Hesslow, D., Castagné, R., Luccioni, A. S., Yvon, F., Gallé, M., Tow, J., Rush, A. M., Biderman, S., Webson, A., Ammanamanchi, P. S., Wang, T., Sagot, B., Muennighoff, N., del Moral, A. V., … Wolf, T. (2022). BLOOM: A 176B-Parameter Open-Access Multilingual Language Model.

  124. Yong, Z.-X., Schoelkopf, H., Muennighoff, N., Aji, A. F., Adelani, D. I., Almubarak, K., Bari, M. S., Sutawika, L., Kasai, J., Baruwa, A., Winata, G. I., Biderman, S., Radev, D., & Nikoulina, V. (2022). BLOOM+1: Adding Language Support to BLOOM for Zero-Shot Prompting.

  125. Lieber, O., Sharir, O., Lentz, B., & Shoham, Y. (2021). Jurassic-1: Technical Details and Evaluation, White paper, AI21 Labs, 2021. https://uploads-ssl.webflow.com/60fd4503684b466578c0d307/61138924626a6981ee09caf6_jurassic_tech_paper.pdf

  126. Wang, B., & Komatsuzaki, A. (2021). GPT-J-6B: A 6 Billion Parameter Autoregressive Language Model. https://github.com/kingoflolz/mesh-transformer-jax

  127. Liu, Y., Ott, M., Goyal, N., Du, J., Joshi, M., Chen, D., Levy, O., Lewis, M., Zettlemoyer, L., & Stoyanov, V. (2019). Roberta: A robustly optimized bert pretraining approach. arXiv Preprint arXiv:1907.11692.

  128. Tang, T., Junyi, L., Chen, Z., Hu, Y., Yu, Z., Dai, W., Dong, Z., Cheng, X., Wang, Y., Zhao, W., Nie, J., & Wen, J.-R. (2022). TextBox 2.0: A Text Generation Library with Pre-trained Language Models.

  129. Strobelt, H., Webson, A., Sanh, V., Hoover, B., Beyer, J., Pfister, H., & Rush, A. M. (2022). Interactive and Visual Prompt Engineering for Ad-hoc Task Adaptation with Large Language Models. arXiv. https://doi.org/10.48550/ARXIV.2208.07852

  130. Bach, S. H., Sanh, V., Yong, Z.-X., Webson, A., Raffel, C., Nayak, N. V., Sharma, A., Kim, T., Bari, M. S., Fevry, T., Alyafeai, Z., Dey, M., Santilli, A., Sun, Z., Ben-David, S., Xu, C., Chhablani, G., Wang, H., Fries, J. A., … Rush, A. M. (2022). PromptSource: An Integrated Development Environment and Repository for Natural Language Prompts.

  131. Wu, T., Jiang, E., Donsbach, A., Gray, J., Molina, A., Terry, M., & Cai, C. J. (2022). PromptChainer: Chaining Large Language Model Prompts through Visual Programming.

  132. Ding, N., Hu, S., Zhao, W., Chen, Y., Liu, Z., Zheng, H.-T., & Sun, M. (2021). OpenPrompt: An Open-source Framework for Prompt-learning. arXiv Preprint arXiv:2111.01998.

  133. Jiang, E., Olson, K., Toh, E., Molina, A., Donsbach, A., Terry, M., & Cai, C. J. (2022). PromptMaker: Prompt-Based Prototyping with Large Language Models. Extended Abstracts of the 2022 CHI Conference on Human Factors in Computing Systems. https://doi.org/10.1145/3491101.3503564

  134. Chase, H. (2022). LangChain (0.0.66) [Computer software]. https://github.com/hwchase17/langchain

  135. Liu, J. (2022). GPT Index. https://doi.org/10.5281/zenodo.1234

Copyright © 2024 Learn Prompting.