DeepSeek just rolled out version 3.1 of its language model. It’s faster. It’s supposed to be smarter. And, most importantly, it’s designed to work with Chinese-made chips. That sounds great on paper—especially with U.S. sanctions still choking China’s access to top-tier processors. But here’s the thing: DeepSeek still can’t fully cut ties with Nvidia. Not yet.
Training its most advanced model, R2, ran into problems on Huawei's Ascend chips, so, as before, DeepSeek went back to Nvidia for training. The only part it runs on Chinese hardware is inference: serving answers from the already-trained model, after the real heavy lifting is done.
V3.1 does bring some real changes. It uses FP8, an 8-bit floating-point format that roughly halves memory and bandwidth compared with the 16-bit formats most models run in, making processing cheaper and faster. And it has a "hybrid inference" system that lets users flip between a normal mode and a "deep thinking" mode, which spends more compute for higher-quality output. Sounds clever. But it also sounds like a band-aid until better chips or training options are in place.
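To make the FP8 claim concrete, here is a minimal Python sketch, not DeepSeek's implementation, of E4M3-style rounding, the FP8 variant commonly used for inference: 3 mantissa bits and a maximum magnitude of 448. Subnormals, NaN handling, and the per-tensor scaling that real deployments rely on are all omitted for brevity.

```python
import math

def quantize_fp8_e4m3(x: float) -> float:
    """Round x to an E4M3-style FP8 grid: 3 mantissa bits,
    clamped to +/-448 (the E4M3 maximum). Illustrative only;
    subnormals and NaN handling are omitted."""
    if x == 0.0:
        return 0.0
    # frexp decomposes |x| = m * 2**e with 0.5 <= m < 1.
    _, e = math.frexp(abs(x))
    # With 3 mantissa bits, representable values in [2**(e-1), 2**e)
    # are spaced 2**(e-4) apart.
    step = 2.0 ** (e - 4)
    q = round(x / step) * step
    return max(-448.0, min(448.0, q))

# Eight bits keep only coarse precision: nearby floats collapse together.
print(quantize_fp8_e4m3(3.3))     # -> 3.25
print(quantize_fp8_e4m3(1000.0))  # -> 448.0 (clamped)
```

The point of the format is less the rounding than the halved memory traffic: weights and activations move through the chip at twice the rate of 16-bit formats, which is where the cost savings come from.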
They’re also tweaking their API pricing starting in September. No surprises there. Companies want better margins, and AI isn’t cheap. Still, the timing feels like damage control more than a growth move.

The Chip Problem They Don’t Want to Talk About
The biggest story here isn’t the model itself. It’s what it says about China’s AI hardware problem. DeepSeek—like many other Chinese AI firms—is stuck between ambition and reality. On one hand, the government’s pouring money into chip development. On the other, actual working chips that can train frontier models just aren’t ready yet.
Ascend chips are the main hope right now. But their track record is shaky. And even DeepSeek isn’t ready to commit. They never said which domestic chips V3.1 supports, just that it’s “optimized” for Chinese processors. That’s vague. It tells us they’re hedging. Probably for good reason.
Training a frontier model means weeks of sustained computation across thousands of accelerators. If the chips or their software stack glitch or lag, whole runs stall or fail. That's exactly what happened with R2. And now, they're scrambling to stay in the race.
While DeepSeek tries to get hardware sorted, rivals are catching up. Alibaba’s Qwen3 is moving fast. Baidu’s open-sourcing of Ernie gives it a broader audience and maybe a bigger developer base. DeepSeek might’ve led the pack for a moment. But that gap’s closing fast.
What to Really Take From This
Let’s be honest: V3.1 isn’t a revolution. It’s an upgrade, yes. But mostly, it’s a signal that DeepSeek’s trying to adapt to hardware limits they can’t control.
They want to make it sound like they’ve figured out a way to run everything on Chinese chips. But they haven’t. Training still needs Nvidia. That’s the reality. Until they can do the full cycle—training and deployment—on domestic hardware, they’re still dependent.
That doesn’t mean this release is worthless. Far from it. It shows DeepSeek’s trying to get leaner and more efficient. FP8 could matter a lot down the road. And the two-mode system adds flexibility that power users might appreciate.
But let’s not pretend this is some big leap forward for China’s AI independence. It’s a half-step. Necessary, maybe. But small.
The real story is unfolding quietly, in labs and factories. Until someone in China can build chips that train top models at scale, every “AI breakthrough” will come with an asterisk.
