AI Enhancement: DeepSeek Boosts AI Model to Compete with ChatGPT, Gemini

Chinese AI Company DeepSeek Announces Upgrade: Improved Chatbot Offers Enhanced Logic, Mathematical Ability, and Coding Skills While Reducing Inaccurate Responses.

Artificial intelligence development firm DeepSeek has enhanced its AI model in a bid to match the capabilities of ChatGPT and Google's expanding Gemini.

DeepSeek, a Chinese AI company, has unveiled DeepSeek-R1-0528, an advanced update to its reasoning model. This new version boasts improved reasoning and inference functions, making it a formidable competitor in the rapidly evolving AI landscape.

In the AIME 2025 test, DeepSeek-R1-0528's accuracy increased significantly, from 70% in the previous version to 87.5%. This leap is attributed to the model spending more compute per query: the average number of tokens used per question rose from 12,000 to 23,000.

The model's size is impressive, with 671 billion parameters, requiring 715GB of disk space. However, a 1.66-bit quantized version reduces this to a more manageable 162GB, making it accessible for users with GPUs of 24GB VRAM or machines with 192GB RAM. This quantized model maintains state-of-the-art performance with minimal accuracy loss.
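The relationship between bit-width and disk footprint is simple arithmetic: parameter count times bits per weight, divided by 8 bits per byte. A minimal sketch of that back-of-envelope math (illustrative only; real quantized checkpoints keep some tensors at higher precision, which is why the reported 162GB exceeds the naive uniform-1.66-bit estimate):

```python
def quantized_size_gb(n_params: float, bits_per_weight: float) -> float:
    """Back-of-envelope weight footprint in decimal GB: params * bits / 8 bytes."""
    return n_params * bits_per_weight / 8 / 1e9

# 671B parameters at 8 bits per weight: roughly the full-size checkpoint.
full_8bit = quantized_size_gb(671e9, 8.0)    # ~671 GB of raw weights
# The same parameters at a uniform 1.66 bits per weight.
quant = quantized_size_gb(671e9, 1.66)       # ~139 GB of raw weights
```

The gap between the ~139GB estimate and the reported 162GB on disk reflects mixed-precision layers and file-format overhead in the actual quantized release.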

DeepSeek-R1-0528 also offers improved function calling and coding assistance and reduces hallucination rates, contributing to a better user experience in practical tasks. Inference speed and deployment have been optimized: peak throughput reaches 334 tokens/sec, roughly 32 tokens/sec faster than comparable deployments, while scaling well at high batch sizes.
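Throughput figures like 334 tokens/sec are simply generated tokens divided by wall-clock time. A minimal measurement sketch, where `generate` is a hypothetical stand-in for whatever inference client is in use (not a real DeepSeek API):

```python
import time

def measure_throughput(generate, prompt: str) -> float:
    """Return generated tokens per second for a text-generation callable.

    `generate` takes a prompt and returns the list of generated tokens;
    it is a placeholder for a real inference call.
    """
    start = time.perf_counter()
    tokens = generate(prompt)
    elapsed = time.perf_counter() - start
    return len(tokens) / elapsed
```

Batch-level throughput is measured the same way, summing tokens across all sequences in the batch before dividing by elapsed time.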

Comparisons with other top-tier AI systems like OpenAI's O3 and Google's Gemini 2.5 Pro show that DeepSeek-R1-0528 is competitive, especially in complex reasoning tasks that require extended token context and sophisticated inference optimization.

The new DeepSeek-R1-0528 model is a testament to the rapid pace of innovation in artificial intelligence, reshaping expectations across various industries. As users and developers gain access to more refined AI tools, the broader ecosystem benefits through improved efficiency, new capabilities, and fresh opportunities for innovation.

However, it's important to note that Microsoft has banned the DeepSeek app for staff due to data and propaganda risks. Additionally, a bill proposed by US Senator Josh Hawley seeks to sever US-China AI ties and would impose jail time for violations, a measure widely linked to DeepSeek.

Looking ahead, the focus will likely shift to how well these systems perform in diverse, high-stakes settings and whether they can truly meet the evolving demands of global users across multiple domains. The launch of DeepSeek's R1 chatbot in January drew widespread attention across the AI sector and underscored China's growing presence in the field.

As always, readers are encouraged to conduct their own research and consult with a qualified financial adviser before making any investment decisions.
