The Chinese AI startup DeepSeek is rapidly pushing the global AI race forward. The company has just released DeepSeek-R1-0528, once again proving it is one to watch. The powerful update is already challenging rivals like OpenAI's GPT-4o and Google's Gemini.
The new version delivers major gains in complex reasoning, coding, and logic, areas where even advanced models often stumble.
With its permissive open-source license and lightweight training requirements, DeepSeek is getting faster and smarter.
A leap in benchmark performance
🚀 DeepSeek-R1-0528 is here! 🔹 Improved benchmark performance 🔹 Reduced hallucinations 🔹 Supports JSON output and function calling. pic.twitter.com/kxcgfg9z5l (May 29, 2025)
In recent benchmark tests, DeepSeek-R1-0528 achieved 87.5% accuracy on the AIME 2025 test.
That is a remarkable jump from the previous model's 70%. It also improved significantly on the LiveCodeBench coding benchmark, rising from 63.5% to 73.3%, and doubled its score on the notoriously difficult "Humanity's Last Exam," climbing from 8.5% to 17.7%.
For those unfamiliar with these benchmarks: in short, they suggest the DeepSeek model can keep pace with its Western rivals in specific domains, and in some cases beat them.
Open source and easy to build on
(Image Credit: Pexels)
Unlike OpenAI and Google, which keep their best models behind APIs and paywalls, DeepSeek is keeping things open. R1-0528 is available under the MIT license, which gives developers the freedom to use, modify, and deploy it.
The update also adds support for JSON output and function calling, which makes it easier to build apps and tools directly on top of the model.
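For readers who want a concrete picture of what "JSON output and function calling" means in practice, here is a minimal sketch. It assumes DeepSeek's OpenAI-compatible chat API; the `get_weather` tool, its schema, and the helper function are illustrative examples invented for this sketch, not part of the R1-0528 release.

```python
# Sketch: assembling a function-calling request with JSON output,
# assuming an OpenAI-compatible chat completions API.
# The get_weather tool below is a hypothetical example.

def build_request(user_prompt: str) -> dict:
    """Build the payload for a chat turn that exposes one tool."""
    weather_tool = {
        "type": "function",
        "function": {
            "name": "get_weather",  # hypothetical tool name
            "description": "Get the current weather for a city.",
            "parameters": {
                "type": "object",
                "properties": {
                    "city": {"type": "string", "description": "City name"},
                },
                "required": ["city"],
            },
        },
    }
    return {
        "model": "deepseek-reasoner",
        "messages": [{"role": "user", "content": user_prompt}],
        "tools": [weather_tool],
        # Ask the model to reply with a valid JSON object.
        "response_format": {"type": "json_object"},
    }

# Sending it would look roughly like this (requires an API key):
# from openai import OpenAI
# client = OpenAI(api_key="...", base_url="https://api.deepseek.com")
# resp = client.chat.completions.create(**build_request("Weather in Paris?"))
```

If the model decides the tool is needed, the response carries a structured tool call (name plus JSON-encoded arguments) instead of free-form text, which is what lets apps wire model output directly into their own functions.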
This open approach not only appeals to researchers and developers, but also makes DeepSeek an increasingly attractive option for startups and companies looking to move away from closed platforms.
Trained smart, not hard
(Image Credit: NurPhoto / Getty Images)
One of the most impressive aspects of DeepSeek's rise is how efficiently its models are built. According to the company, a previous model was trained on about 2,000 GPUs in just 55 days at a cost of roughly $5.58 million, a fraction of what it typically costs to train comparable models in the United States.
This focus on resource-efficient training is a key differentiator, especially as the costs and carbon footprints of large language models come under growing scrutiny.
What this means for the future of AI
The latest DeepSeek release is a sign of shifting dynamics in the AI world. With strong reasoning capabilities, transparent licensing, and a fast development cycle, DeepSeek is positioning itself as a serious competitor to the industry's heavyweights.
And as the global AI landscape becomes more multipolar, models like R1-0528 may play an important role not only in shaping AI, but in deciding who gets to build it, and who benefits from it.