We can now confirm that DeepSeek's upcoming V4 model is likely to launch on 17 February. The model will incorporate the company's newly published Engram memory architecture, and internal benchmarks reportedly suggest it surpasses both Claude and GPT on code generation tasks.

The timing mirrors DeepSeek's R1 launch strategy. That release triggered a $1 trillion tech stock selloff on January 27, 2025, including roughly $600 billion from NVIDIA alone. DeepSeek leverages the Lunar New Year period for maximum visibility in both Chinese and international markets.

Currently, China's newest models still lag the global frontier (represented by Google, OpenAI, and Anthropic) on broad capability and consistency. But DeepSeek V4 might mark the turning of the tide. A re-rating of Chinese AI would be less about national pride and more about economics: higher willingness to pay, higher retention in API workloads, and stronger global developer pull for open ecosystems.

DeepSeek's latest paper (‘Conditional Memory via Scalable Lookup: A New Axis of Sparsity for Large Language Models’, January 12, 2026) proposes adding conditional memory as a second sparsity axis alongside conditional compute (MoE). The core idea is an ‘Engram’ module that performs a lookup for certain local/static patterns, so the transformer does less ‘reconstruction’ through dense compute. This is paired with an explicit ‘Sparsity Allocation’ framing for how to split capacity between compute (experts) and memory. The implication is quality gains without a brute-force step-up in compute, as well as a credible path to inference gains via compute/memory trade-offs.

So how does the Engram module work? Traditional Transformers force models to store factual knowledge within their reasoning layers, which is computationally inefficient. Engram offloads that static memory to a scalable lookup system; a toy sketch at the end of this piece illustrates the general idea.

People with direct knowledge of the project claim V4 outperforms both Anthropic's Claude and OpenAI's GPT series in internal benchmarks, particularly when handling extremely long code prompts. On cost, V4 might be far more economical than Claude Opus 4.5 and GPT 5.2.

We believe the DeepSeek V4 release next week might be a game changer for current US AI supremacy. The Engram architecture offers a potential path to efficient long-context processing, and the self-hosting options in V4 might address data sovereignty concerns. V4 might revolutionise coding, developer productivity, and cost-effective AI deployment. In a nutshell, it is smaller AI with more power. DeepSeek V4 might show a practical blueprint for sustainable AI development outside the U.S. ecosystem.
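
To make the conditional-memory idea concrete, here is a minimal, hypothetical PyTorch sketch of a lookup-based memory layer. It illustrates the general concept described in the paper (retrieve local/static patterns from a cheap table instead of reconstructing them with dense compute); it is not DeepSeek's Engram implementation, and the class name EngramLookup, the n-gram hashing scheme, and all parameters are assumptions made for this example.

```python
# Illustrative sketch only: a toy "conditional memory" layer that hashes the
# last few token IDs into a large embedding table and gates the retrieved
# vector into the residual stream, so dense layers spend less capacity
# memorising static patterns. Not DeepSeek's actual Engram design.
import torch
import torch.nn as nn


class EngramLookup(nn.Module):
    def __init__(self, d_model: int, table_size: int = 2**16, n_gram: int = 3):
        super().__init__()
        self.n_gram = n_gram
        self.table_size = table_size
        # Large, cheap-to-query memory table: one vector per hashed n-gram bucket.
        self.table = nn.Embedding(table_size, d_model)
        # Per-position gate deciding how much retrieved memory to mix in.
        self.gate = nn.Linear(d_model, 1)

    def _hash_ngrams(self, token_ids: torch.Tensor) -> torch.Tensor:
        # token_ids: (batch, seq). Rolling hash over the last n_gram tokens.
        b, s = token_ids.shape
        h = torch.zeros(b, s, dtype=torch.long, device=token_ids.device)
        for i in range(self.n_gram):
            shifted = torch.roll(token_ids, shifts=i, dims=1)
            shifted[:, :i] = 0  # zero out positions before the sequence start
            h = (h * 1000003 + shifted) % self.table_size
        return h

    def forward(self, hidden: torch.Tensor, token_ids: torch.Tensor) -> torch.Tensor:
        # hidden: (batch, seq, d_model) from the transformer stream.
        mem = self.table(self._hash_ngrams(token_ids))  # pure lookup, no large matmul
        g = torch.sigmoid(self.gate(hidden))            # (batch, seq, 1) mixing gate
        return hidden + g * mem                         # residual memory injection


# Usage: inject the lookup output into a transformer block's residual stream.
layer = EngramLookup(d_model=64)
hidden = torch.randn(2, 10, 64)
token_ids = torch.randint(0, 32000, (2, 10))
out = layer(hidden, token_ids)
print(out.shape)  # torch.Size([2, 10, 64])
```

The design choice worth noting is the trade-off the paper's framing points at: the table can be scaled almost arbitrarily (memory) while the per-token work stays a hash plus an embedding lookup (compute), which is where the claimed inference-side gains would come from.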