DeepSeek unveils AI model for domestic chips in symbolic break from Nvidia reliance
Release time: 2026-04-24

Chinese artificial intelligence startup DeepSeek on Friday released a new generation of large language models explicitly tuned for domestic chips, a milestone industry experts see as a symbolic break from reliance on Nvidia hardware.

The company unveiled a preview of its DeepSeek-V4 series, alongside a high-performance Pro version and a lightweight Flash variant.

Notably, in its technical report, DeepSeek for the first time placed both Nvidia Graphics Processing Units, or GPUs, and Huawei Ascend Neural Processing Units, or NPUs, within the same hardware validation framework, noting that its fine-grained expert parallelism scheme had been verified on both platforms.

That move breaks with a long-standing pattern in which Chinese developers have relied almost exclusively on Nvidia's Compute Unified Device Architecture, or CUDA, ecosystem for training and inference.

DeepSeek said the V4 model supports ultra-long context windows of up to one million Chinese characters and delivers improvements in agent capabilities, world knowledge and reasoning performance — key benchmarks for evaluating next-generation AI.

DeepSeek added that the V4 model has already completed inference adaptation on Huawei's Ascend platform, a sign that deployment on domestic chips is moving from experimental testing toward practical implementation.

The shift comes against the backdrop of tightening United States export controls on advanced semiconductors, which have accelerated China's push to build a self-sufficient AI stack spanning chips, frameworks and models.

The same day, the Beijing Academy of Artificial Intelligence said its FlagOS system had already adapted DeepSeek-V4-Flash for full inference deployment across more than eight AI chip architectures, including those from Huawei, Hygon and Moore Threads.