
DeepSeek-V3.5: 671B MoE Model Surpasses GPT-5.2 on Chinese & English Long-Context Benchmarks

2026-02-16

DeepSeek has open-sourced V3.5, a 671B-parameter Mixture-of-Experts (MoE) model. It sets a new state of the art on Chinese and English long-context benchmarks at 1M+ tokens, adds native tool-calling, and improves multilingual reasoning, making it one of the strongest open-weight models for enterprise long-document processing.
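To make the "native tool-calling" claim concrete, here is a minimal sketch of what a tool-calling request typically looks like against an OpenAI-compatible chat API, the style DeepSeek's hosted endpoints have followed for earlier models. The model alias (`deepseek-chat`) and the `search_documents` tool are assumptions for illustration, not confirmed details of V3.5:

```python
import json

def build_tool_call_request(question: str) -> dict:
    """Build an OpenAI-compatible chat request exposing one tool to the model.

    The model name and tool below are hypothetical, used only to show the
    request shape a tool-calling-capable model consumes.
    """
    return {
        "model": "deepseek-chat",  # hypothetical alias for V3.5
        "messages": [{"role": "user", "content": question}],
        "tools": [
            {
                "type": "function",
                "function": {
                    "name": "search_documents",  # hypothetical enterprise tool
                    "description": "Search a long-document corpus for passages.",
                    "parameters": {
                        "type": "object",
                        "properties": {
                            "query": {
                                "type": "string",
                                "description": "Search query string.",
                            }
                        },
                        "required": ["query"],
                    },
                },
            }
        ],
    }

# Build a request body; in practice this dict would be POSTed as JSON
# to the provider's chat-completions endpoint.
request_body = build_tool_call_request("Summarize the Q3 contract changes.")
print(json.dumps(request_body, indent=2))
```

A model with native tool-calling responds to such a request either with plain text or with a structured `tool_calls` entry naming the function and its JSON arguments, which the caller then executes and feeds back into the conversation.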
