Today, we are excited to introduce the Seed2.0 series, a system-level optimization of our foundation models designed to support massive-scale production environments. Building on a 500x growth in daily token usage, Seed2.0 retains core LLM and VLM capabilities while extending them toward complex real-world task execution. The series balances high-performance reasoning with cost-effective deployment through a flexible lineup of Pro, Lite, Mini, and Code models.
- State-of-the-Art Multimodal Understanding. Seed2.0 delivers top-tier performance in visual, spatial, and motion reasoning. The model demonstrates leading capabilities on benchmarks such as MMSIBench, MotionBench, and VideoMME. It features enhanced temporal perception for video, enabling stable analysis of dynamic environments, real-time flow analysis, and interactive guidance for scenarios requiring active feedback.
- Enhanced Reasoning and Long-Horizon Execution. The model significantly strengthens multi-turn instruction following, tool usage, and structured output stability. It excels in complex reasoning tasks—including STEM and Math benchmarks (IMO, FrontierSci)—and is optimized for agentic workflows. Seed2.0 supports iterative "plan-act-reflect" cycles, allowing it to autonomously handle deep research, data synthesis, and continuous tool orchestration over long contexts.
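The "plan-act-reflect" cycle described above can be sketched as a generic driver loop. This is an illustrative stand-in, not the Seed2.0 agent API: the `plan` decision rule and the toy tools below replace what would in practice be model calls and real tool integrations.

```python
# Illustrative plan-act-reflect driver: a toy planner decides which
# tool to call next, executes it, and folds the result back into the
# history until it decides it is done. In a real agent, plan() would
# be a call to the model with the accumulated context.

def plan(goal: str, history: list) -> dict:
    """Pick the next action; a stand-in for asking the model."""
    if not history:
        return {"tool": "search", "arg": goal}
    if history[-1]["tool"] == "search":
        return {"tool": "summarize", "arg": history[-1]["result"]}
    return {"tool": "finish", "arg": history[-1]["result"]}

# Toy tools standing in for real search / summarization backends.
TOOLS = {
    "search": lambda q: f"notes on {q}",
    "summarize": lambda text: text.upper(),
}

def run_agent(goal: str, max_steps: int = 8) -> str:
    history = []
    for _ in range(max_steps):
        action = plan(goal, history)                    # plan
        if action["tool"] == "finish":
            return action["arg"]
        result = TOOLS[action["tool"]](action["arg"])   # act
        history.append({**action, "result": result})    # reflect
    return history[-1]["result"]

print(run_agent("quantum error correction"))
# -> NOTES ON QUANTUM ERROR CORRECTION
```

The bounded `max_steps` loop is the key design point for long-horizon execution: the agent iterates autonomously but within an explicit step budget.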
- Latency- and Cost-Aware Inference. Addressing the constraints of interactive deployment, the series provides tiered options to balance inference depth and latency. Despite delivering performance on par with top-tier industry models, Seed2.0 reduces token costs by approximately one order of magnitude, making massive-scale reasoning and long-context agent operations economically viable.
Call for Bad Cases: If you have encountered any cases where the model performs poorly, we would greatly appreciate it if you could share them by opening an issue.
The Seed2.0 cookbook is designed to help you start using the Seed2.0 API through diverse code samples. Our flagship Seed2.0 model is deployed on Volcano Engine. After obtaining your API_KEY, you can use the examples in this cookbook to quickly understand and leverage its capabilities.
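As a minimal first-request sketch, assuming an OpenAI-compatible chat-completions interface: the endpoint URL, model id, and `ARK_API_KEY` environment variable below are placeholders, so check your Volcano Engine console for the exact values.

```python
import json
import os
import urllib.request

# Placeholder values -- substitute the real endpoint and model id
# from your Volcano Engine console (these names are assumptions,
# not official identifiers).
API_URL = "https://ark.cn-beijing.volces.com/api/v3/chat/completions"
MODEL_ID = "seed-2.0-pro"  # hypothetical model id

def build_request(prompt: str, api_key: str) -> urllib.request.Request:
    """Build a chat-completions request for a single user prompt."""
    payload = {
        "model": MODEL_ID,
        "messages": [{"role": "user", "content": prompt}],
    }
    return urllib.request.Request(
        API_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {api_key}",
        },
    )

if __name__ == "__main__" and os.environ.get("ARK_API_KEY"):
    req = build_request("Hello, Seed2.0!", os.environ["ARK_API_KEY"])
    with urllib.request.urlopen(req) as resp:
        print(json.load(resp)["choices"][0]["message"]["content"])
```

The same request shape extends to the multimodal and agentic examples in the cookbooks below by changing the `messages` content.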
- Cookbook for Code Agents
- Cookbook for Search Agents
- Cookbook for Multimodal Search Agents
- Cookbook for MCP Tool Use Agents
- Cookbook for Thinking with Images Agents
- Cookbook for 2D Grounding
- Cookbook for 3D Understanding
- Cookbook for Video Understanding
This repository is licensed under the Apache-2.0 License.
About ByteDance Seed Team
Founded in 2023, ByteDance Seed Team is dedicated to crafting the industry's most advanced AI foundation models. The team aspires to become a world-class research team and make significant contributions to the advancement of science and society.
