Zoom Advances Small Language Models for Agentic AI Applications

March 25, 2025


Zoom has shared progress on developing small language models (SLMs) as part of its strategy to prepare for what it terms "the agentic AI era." According to a recent technical briefing by Zoom's Chief Technology Officer, Xuedong Huang, the company has built SLMs that achieve state-of-the-art performance in the 2 billion parameter category on public benchmark leaderboards.

What Are Agentic AI Systems?

Agentic AI represents an evolution beyond traditional large language models (LLMs) that simply respond to prompts. These systems are designed to function as autonomous agents capable of performing complex, multi-step tasks with minimal human supervision.

According to Zoom's technical documentation, their agentic AI systems are characterized by four key capabilities:

  • Reasoning and planning: The ability to analyze situations and devise strategic approaches to problems
  • Memory and reflection: Learning from past interactions to improve future performance
  • Action execution: Utilizing appropriate tools to implement decisions and achieve goals
  • Multi-agent collaboration: Coordination between multiple specialized AI agents to accomplish complex tasks
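The four capabilities above can be pictured as a minimal agent loop. The sketch below is purely illustrative; every class, method, and tool name is hypothetical and is not Zoom's implementation:

```python
# Illustrative agent loop showing planning, memory/reflection, and action
# execution. All names and tools here are hypothetical examples.

class SimpleAgent:
    def __init__(self, tools):
        self.tools = tools    # action execution: tools the agent can call
        self.memory = []      # memory: record of past steps and outcomes

    def plan(self, goal):
        # Reasoning and planning: break the goal into tool-sized steps.
        return [step.strip() for step in goal.split(" then ")]

    def act(self, step):
        # Action execution: dispatch the step to a matching tool.
        for name, tool in self.tools.items():
            if name in step:
                return tool(step)
        return f"no tool for: {step}"

    def reflect(self, step, result):
        # Memory and reflection: store the outcome for later reuse.
        self.memory.append((step, result))

    def run(self, goal):
        results = []
        for step in self.plan(goal):
            result = self.act(step)
            self.reflect(step, result)
            results.append(result)
        return results

agent = SimpleAgent({
    "summarize": lambda s: "summary ready",
    "translate": lambda s: "translation ready",
})
print(agent.run("summarize the meeting then translate the summary"))
# Multi-agent collaboration would coordinate several such agents,
# each specialized for a different task.
```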

Technical Specifications and Benchmarks

Zoom reports that its new SLM was trained on 6 trillion tokens of multilingual data using 256 Nvidia H100 GPUs over approximately 30 days. The company has published benchmark comparisons showing how its SLM performs against other models.

In the 2 billion parameter category, Zoom claims its SLM outperformed similarly sized models on several standard benchmarks, including MMLU (multiple-choice questions across academic subjects), MMLU-Pro (a harder, more reasoning-heavy variant), GPQA (graduate-level domain questions), and BBH (challenging multi-step reasoning tasks).
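Benchmarks like these are typically scored as plain accuracy over multiple-choice items. A minimal sketch of that scoring, using made-up items rather than actual benchmark data:

```python
# Accuracy scoring over multiple-choice items, as MMLU-style benchmarks do.
# The items and the "model" below are toy stand-ins, not real benchmark data.

def score(model, items):
    correct = sum(1 for question, gold in items if model(question) == gold)
    return correct / len(items)

# Toy items: (question, gold answer letter). Toy model always answers "B".
items = [("q1", "B"), ("q2", "A"), ("q3", "B")]
acc = score(lambda q: "B", items)
print(acc)  # 2 of 3 items correct
```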

While acknowledging that uncustomized SLMs generally underperform larger models like OpenAI's GPT-4o-mini on general tasks, Zoom emphasizes that its SLMs show significant improvements when customized for specific applications.

The Federated Approach to AI

Zoom's AI strategy centers on what it calls a "federated approach," which coordinates multiple specialized models rather than relying on a single large model. This approach reportedly offers several advantages:

  • Models can be optimized for specific tasks using domain-relevant data
  • Smaller, specialized models enable faster processing and more efficient scaling
  • Reduced computational requirements lower development and operational costs
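A federated setup can be pictured as a router that dispatches each request to a task-specific small model and falls back to a general model otherwise. The sketch below is a toy illustration with hypothetical model names, not Zoom's architecture:

```python
# Toy router for a federated model setup: task-specific SLMs handle the
# tasks they are tuned for; anything else falls back to a general model.
# All model names and behaviors here are hypothetical.

def translation_slm(text):
    return f"[translated] {text}"

def summarization_slm(text):
    return f"[summary] {text}"

def general_llm(text):
    return f"[general answer] {text}"

SPECIALISTS = {
    "translate": translation_slm,
    "summarize": summarization_slm,
}

def route(task, text):
    # Prefer a specialized small model; fall back to the general model.
    handler = SPECIALISTS.get(task, general_llm)
    return handler(text)

print(route("translate", "Bonjour"))  # handled by the translation SLM
print(route("plan_trip", "3 days"))   # falls back to the general model
```

Keeping the specialists small is what enables the faster processing and lower operating costs described above.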

The company reports particular success in machine translation. After adapting its SLM with 11.5 billion tokens of translation-focused data, Zoom claims improved performance, as measured by COMET-22 quality scores, across 14 language pairs spanning Chinese, English, French, Japanese, Portuguese, and Spanish.

Practical Applications

These developments are being positioned to support Zoom AI Companion, the company's suite of AI features. Zoom indicates that SLMs customized through its forthcoming AI Studio will enable more efficient AI agents that can achieve results comparable to more resource-intensive LLMs at lower cost.

For enterprise customers, this could translate into more cost-effective AI solutions that maintain acceptable quality for specific business applications.

Industry Context

Zoom's focus on smaller, specialized models follows a broader industry trend of exploring alternatives to increasingly large and computationally intensive foundation models. As organizations seek to implement practical AI solutions, concerns about computational requirements, operational costs, and performance bottlenecks have prompted interest in more efficient approaches.

This announcement comes as Zoom continues to expand its AI capabilities beyond its core video conferencing platform. The company has been progressively integrating AI features into its products since the introduction of AI Companion in 2023.

Zoom has not provided specific timelines for when these new SLM capabilities will be available to customers through its AI Studio or AI Companion features.

Valeriia Kuka

Valeriia Kuka, Head of Content at Learn Prompting, is passionate about making AI and ML accessible. Valeriia previously grew a 60K+ follower AI-focused social media account, earning reposts from Stanford NLP, Amazon Research, Hugging Face, and AI researchers. She has also worked with AI/ML newsletters and global communities with 100K+ members and authored clear and concise explainers and historical articles.


© 2025 Learn Prompting. All rights reserved.