
DeepSeek-V3.5: 671B MoE Model Surpasses GPT-5.2 on Chinese & English Long-Context Benchmarks

2026-02-16

DeepSeek has open-sourced V3.5, a 671B-parameter mixture-of-experts (MoE) model that sets a new state of the art on Chinese and English long-context benchmarks exceeding 1M tokens. With native tool-calling and improved multilingual reasoning, it ranks among the strongest open-weight models for enterprise long-document processing.
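Native tool-calling means the model can return structured function invocations rather than free text. As a rough illustration only, the Python sketch below drives such a model through an OpenAI-compatible chat-completions API; the endpoint URL, model identifier, and the search_contracts tool are hypothetical placeholders for this example, not documented values.

from openai import OpenAI

# Hypothetical sketch: the base_url and model name ("deepseek-v3.5")
# are assumptions for illustration, not confirmed values.
client = OpenAI(
    base_url="https://api.deepseek.com/v1",  # assumed OpenAI-compatible endpoint
    api_key="YOUR_API_KEY",
)

# One tool definition in the OpenAI function-calling schema, which
# OpenAI-compatible endpoints commonly accept for native tool-calling.
tools = [{
    "type": "function",
    "function": {
        "name": "search_contracts",  # hypothetical tool name
        "description": "Search clauses in a long contract archive.",
        "parameters": {
            "type": "object",
            "properties": {
                "query": {
                    "type": "string",
                    "description": "Clause or topic to find.",
                },
            },
            "required": ["query"],
        },
    },
}]

response = client.chat.completions.create(
    model="deepseek-v3.5",  # assumed model identifier
    messages=[{
        "role": "user",
        "content": "Find the indemnification clauses across these filings.",
    }],
    tools=tools,
)

# If the model chooses to call the tool, the structured call
# (function name plus JSON arguments) is returned here instead of text.
print(response.choices[0].message.tool_calls)

In a long-document workflow, the model's tool call would be executed against the document store and the result fed back as a follow-up message, letting the model reason over retrieved passages within its extended context window.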
