NVIDIA has once again shown why it sits at the top of the artificial intelligence hardware market. The company was among the first to submit results to MLPerf Inference v6.0, a widely respected industry benchmark suite, and when the final scores came out, NVIDIA announced that its hardware delivered the highest performance of any submitter.
The hardware behind this victory is NVIDIA's newest platform, the Blackwell Ultra. In a recent official blog post, the company credited the win to its strict "co-design" philosophy: building the physical chips and the software that runs on them together, so that each is tuned to the other. According to NVIDIA, this approach allowed it to achieve the highest data throughput while simultaneously offering the lowest cost per task.
The results show just how far ahead NVIDIA is. On the inference tests, which measure how quickly hardware can run already-trained AI models and serve their responses, NVIDIA's hardware performed nine times faster than its closest rival. This performance gap highlights the infrastructure lead the company currently holds over the rest of the sector, and for companies building large AI data centers, these numbers make NVIDIA the obvious choice.
The team behind the MLPerf suite substantially updated it for version 6.0. The new tests cover some of the most complex AI models currently available, such as DeepSeek-R1, GPT-OSS-120B, and Mixtral 8x7B, and the updated benchmark stresses the hardware with heavy workloads: dense language models, generative recommenders, and multimodal models that combine computer vision with natural language. These tests reflect the real, everyday tasks modern businesses actually need their AI to perform. NVIDIA CEO Jensen Huang has called MLPerf one of the most demanding benchmarking suites in the world.
Interestingly, the results highlight something important about NVIDIA's strategy: its lead in processing speed does not come entirely from the physical hardware. A large portion of the advantage comes from software optimization. For example, NVIDIA submitted its first results for the DeepSeek-R1 model only a few months ago, and since that first submission it has cut processing time by a factor of 2.7 without changing the hardware at all. This shows that NVIDIA continually updates its software stack to squeeze more performance out of the same chips.
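To make the metric concrete, a speedup factor like the reported 2.7x is simply the ratio of the old processing time to the new one on identical hardware and an identical workload. The sketch below illustrates the arithmetic; the latency numbers are hypothetical placeholders, not actual MLPerf measurements.

```python
# Illustrative only: how a software-only speedup factor is derived.
# The latencies below are made-up placeholders, not real MLPerf data.

def speedup(baseline_latency_ms: float, optimized_latency_ms: float) -> float:
    """Speedup = baseline time / optimized time (same hardware, same workload)."""
    return baseline_latency_ms / optimized_latency_ms

baseline = 27.0   # ms per request, hypothetical first submission
optimized = 10.0  # ms per request, hypothetical after software optimization

print(f"{speedup(baseline, optimized):.1f}x")  # prints "2.7x"
```

The same ratio can equivalently be expressed as new throughput divided by old throughput, which is how benchmark results are often reported.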
The hardware-side gains are just as impressive. NVIDIA compared its newest Blackwell Ultra setup against its previous-generation GB200 NVL72 systems, and under the v6.0 tests the new Blackwell chips ran 2.77 times faster than the older hardware. Consistent speedups of this size, generation after generation, show that NVIDIA is not slowing its pace of hardware innovation.
NVIDIA also likes to point out that it often stands alone in these difficult tests. In the previous round, the company claimed it was the only hardware maker to submit DeepSeek-R1 results to the MLPerf judges. With version 6.0, the testing process became even more demanding and subjected the hardware to heavier scrutiny, yet the new Blackwell Ultra comfortably maintained NVIDIA's lead.
This willingness to test its hardware in the open has earned NVIDIA respect among developers. The MLPerf suite is famously unforgiving, and because the tests are so hard, many custom chip makers, and even major rivals like AMD, do not participate as extensively as NVIDIA does. By continually subjecting its chips to these benchmarks, NVIDIA makes the case to customers that it offers the best value for money when building large, expensive data centers.