Claude Can Now Write and Run Code

October 27, 2024

5 minutes

🟢easy Reading Level

Hey there!

Welcome to the latest edition of the Learn Prompting newsletter.

The big news this week is Anthropic just released Computer Use! It allows an AI to control your computer—moving cursors, clicking, typing! All you have to do is type a prompt, and the AI does the rest.

Claude Sonnet 3.5 uses computer use for coding. Claude "sees" the screen, interprets visual data (like screenshots), and makes actions based on instructions. For example, it could open files, browse websites, or handle software that requires human-like interaction. The latest version is still in beta, but it looks super promising! With more developers testing and refining it, it's only getting better.

What is it?

This is also called AI Agents, the AI models that can perform sequential actions and use tools. The idea gained traction back in 2023 when tools like BabyAGI and AgentGPT took over Twitter. Feels like ages ago, right? Now, the biggest GenAI players like Anthropic are making agentic workflows part of their core products.

Why it's important?

The main answer: AI Agents broaden the range we can address by typing a line of text – prompting. But there's more to it:

  • Workaround for LLM Limitations: AI agents can cover LLM weaknesses, like limited memory or gaps in specialized knowledge.
  • Task Automation (Obviously!): They can automate repetitive tasks, saving time and boosting productivity.
  • Tool Integration: By integrating with different tools and APIs, AI agents bring LLMs into real-world workflows.
  • NVIDIA CEO Jensen Huang hopes NVIDIA will become a company with 50,000 employees and 100 million AI assistants working together.
  • Lenovo introduced Lenovo AI Now, an AI agent designed to transform traditional PCs into personalized AI devices.
  • Microsoft introduced 10 autonomous AI agents in Dynamics 365.
  • Last week, we mentioned OpenAI Swarm: an experimental framework for building multi-agent systems led by Shyamal Anadkat, the co-instructor of our ChatGPT for Everyone course.

Exciting times ahead! 2023 was all about LLMs, and now it's time for AI Agents and Multimodal models.

Other GenAI Market Updates

  • Adobe launched Firefly's new video generation capabilities
  • Amazon Ads launches new AI tools
  • Perplexity introduced Internal Knowledge Search and Spaces
  • Google announced a new NotebookLM feature that lets you customize the Audio Overview using a text prompt
  • Meta partnered with Blumhouse–a driving force in horror– and creators to test Meta Movie Gen
  • AWS and Google signed the deals to create nuclear micro-reactors to power their data centers (AWS source, Google source)

10 Predictions for 2025 and Key Takeaways from the State of AI Report 2024

The State of AI Report 2024 was recently released as a 200+ slides-long presentation. This dense and insightful document covers research and industry trends, politics, safety, and predictions for 2025.

We summarized 10 predictions for 2025 for you:

  • $10B+ Sovereign Investment triggers national security review
  • No-code App Success goes viral in the App Store
  • Data Collection Reforms after legal trials
  • Softer EU AI Act implementation due to overregulation concerns
  • Open-source Model Surpasses OpenAI o1 in reasoning benchmarks
  • NVIDIA's Dominance Continues with no significant market challenges
  • Humanoid Investment Declines due to product-market fit struggles
  • Apple's On-device AI drives personal AI assistant momentum
  • AI-generated Research Paper accepted at a major ML conference
  • AI-driven Video Game achieves mainstream success

We also collected the key takeaways in one article. Read the Key Takeaways

Other Great Resources about GenAI

NEW: AI/ML Red Teaming & AI Safety: Live Cohort đź”´

In 2023, we partnered w/ OpenAI, ScaleAI, & HuggingFace to run HackAPrompt, the largest AI Safety competition ever!

Taxonomical Ontology of Prompt Hacking Techniques

A Taxonomical Ontology of Prompt Hacking techniques. Source: "Ignore This Title and HackAPrompt"

Today, HackAPrompt has been cited & used by teams at OpenAI, Amazon, and nearly every AI security company.

Many of our winners have gone on to be hired as AI Red Teamers by leading AI security companies—one of the newest career fields created as a result of Generative AI.

Essentially, an AI Red Teamer is responsible for attacking AI systems to trick these models into outputting harmful information. This work is essential for building safer AI models, and requires an understanding of Prompt Hacking, Prompt Injections, and vulnerabilities of an AI System.

Sander Schulhoff speaking at an AI Security conference

AI Red Teaming and AI Security Masterclass

We created this live, cohort-based course to teach you everything we know about AI Red-Teaming so you can deploy safer GenAI models or transition into a new career as an AI Red Teamer!

There's not much time left to sign up as we're starting the cohort next week. So if you're on the fence, now's the time to decide!

Latest Research on Prompting (October 14–20, 2024): Advanced Prompting Techniques

  • Layer-of-Thoughts Prompting (LoT): A method that introduces constraint hierarchies to filter and structure LLM-based responses for better information retrieval.
  • Buffer of Thoughts (BoT): A reasoning technique that stores and reuses "thought templates" to guide LLMs, with a dynamic buffer manager updating the process as tasks evolve.
  • Supervised Chain of Thought (CoT): A prompting method that applies task-specific supervision to guide LLMs more effectively through the prompt space.
  • Stepwise Correction (StepCo): A technique that iteratively verifies and revises incorrect steps in the reasoning process of LLMs.

We recently updated the Advanced Prompting section of the Prompt Engineering Guide. Check it out to find more promoting techniques!

Other Research

  • Mistral releases new AI models for laptops and phones, les Ministraux: Ministral 3B and Ministral 8B
  • Zyphra releases Zamba2-7B outperforming models like Mistral-7B, Gemma-7B, and Llama3-8B
  • A new version of Open-Sora Plan, a project aiming to open-source reproduce the closed-source OpenAI Sora model, was released
  • Apple researchers published GSM-Symbolic, a new study and benchmark of LLMs' reasoning capabilities

Thanks for reading this week's newsletter!

If you enjoyed these insights about AI developments and would like to stay updated, you can subscribe below to get the latest news delivered straight to your inbox.

See you next week!

Valeriia Kuka

Valeriia Kuka, Head of Content at Learn Prompting, is passionate about making AI and ML accessible. Valeriia previously grew a 60K+ follower AI-focused social media account, earning reposts from Stanford NLP, Amazon Research, Hugging Face, and AI researchers. She has also worked with AI/ML newsletters and global communities with 100K+ members and authored clear and concise explainers and historical articles.


© 2025 Learn Prompting. All rights reserved.