Right before the new year, the AI world got a surprise. DeepSeek, a Chinese AI company, released a paper about a new training method it calls ‘Manifold-Constrained Hyper-Connections,’ or mHC. The goal is to train large AI models, such as language models, with far less computing power than current methods demand.
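The article doesn’t describe how the method works. Going by the name alone: in the transformer literature, ‘hyper-connections’ widen the residual stream into several parallel copies mixed by small learnable matrices, and a ‘manifold constraint’ plausibly restricts those mixing matrices, for example to (approximately) doubly stochastic ones via Sinkhorn normalization. The sketch below is purely illustrative of that general idea under those assumptions; the class and function names are hypothetical, and this is not DeepSeek’s actual implementation.

```python
import numpy as np

def sinkhorn(logits, iters=20):
    """Push a matrix toward the doubly stochastic set (rows and
    columns each summing to 1) by alternating normalizations."""
    M = np.exp(logits)  # ensure all entries are positive
    for _ in range(iters):
        M = M / M.sum(axis=1, keepdims=True)  # normalize rows
        M = M / M.sum(axis=0, keepdims=True)  # normalize columns
    return M

class ToyHyperConnection:
    """Toy hyper-connection: the residual stream is widened to n
    parallel copies, mixed by an n x n matrix. Constraining that
    matrix to be (near) doubly stochastic keeps the mixing
    mass-preserving, like a plain residual path."""
    def __init__(self, n, rng):
        # learnable parameters; tiny init keeps mixing near-uniform
        self.logits = rng.standard_normal((n, n)) * 0.1

    def mix(self, streams):
        """streams: array of shape (n, dim); returns mixed streams."""
        H = sinkhorn(self.logits)  # project onto the constraint set
        return H @ streams

rng = np.random.default_rng(0)
hc = ToyHyperConnection(4, rng)
x = rng.standard_normal((4, 8))
y = hc.mix(x)
# Because the mixing matrix's columns sum to 1, the total signal
# summed across the parallel streams is preserved by the mix.
print(np.allclose(x.sum(axis=0), y.sum(axis=0)))
```

The mass-preservation check at the end is the point of the constraint in this toy version: unconstrained mixing matrices can amplify or shrink the residual signal, while doubly stochastic ones cannot, which is one intuition for why a constraint like this could stabilize cheap large-scale training.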
This is a big deal because training big AI models usually costs a fortune. It takes huge clusters of GPUs or specialized chips. Most people assumed only the largest companies could do it. DeepSeek is proving otherwise.
Last year, DeepSeek shook things up with its model R1, which was reportedly comparable in capability to OpenAI’s o1 but cost far less to train. That caught the attention of US tech companies. It showed that you don’t need billions to compete in AI; you can compete with smart engineering.
Now, the mHC technique could be the foundation for their next model, R2. R2 was expected in mid-2025 but got delayed for a few reasons: cutting-edge AI chips are hard to get in China, and DeepSeek CEO Liang Wenfeng reportedly had concerns about the model’s performance.
Even so, the mHC paper shows that DeepSeek is still pushing the limits. If R2 uses mHC, it could make AI training much more efficient. It could let smaller teams build advanced models without needing the huge budgets of the US or Europe.
DeepSeek has consistently shown that clever engineering can beat sheer spending. First R1, now mHC. It’s a reminder that the AI race is not just about funding: smart techniques can open doors that money alone can’t.
If this method catches on, it could change how the whole AI ecosystem works. Smaller developers could finally play in the same league as the big tech giants. And we might see faster progress without the astronomical costs that have kept AI development concentrated in a few countries.
No one knows exactly when R2 will come out. But the mHC research gives a hint about what DeepSeek is thinking. They are trying to make AI more scalable and accessible. And if they succeed, the rules of the AI game could shift again.

