Nvidia is reportedly developing a new processor designed to make AI systems faster and more efficient, according to a report from the Wall Street Journal. The chip will focus on “inference,” the process by which a trained AI model generates answers to user queries.
The new hardware is expected to be unveiled at Nvidia’s big developer conference next month. In a surprising twist, the report claims the new system will incorporate a chip designed by the startup Groq. The move suggests that even the undisputed king of AI hardware is looking for new ways to stay ahead of the competition.
This development comes as Nvidia’s biggest customer, OpenAI, has been growing frustrated. The company behind ChatGPT is reportedly unsatisfied with the speed of Nvidia’s current hardware, especially for complex tasks like writing software code. Sources say OpenAI believes it needs a new type of chip to handle about 10% of its future computing needs.
OpenAI was reportedly in talks with startups like Groq to build these faster chips. However, Nvidia apparently swooped in and signed a massive $20 billion licensing deal with Groq, effectively shutting down OpenAI’s direct negotiations.
All of this unfolds against the backdrop of a massive and complex partnership between the two companies. In September, Nvidia pledged to invest up to $100 billion in OpenAI, a deal that gave the chipmaker a stake in the startup and supplied OpenAI with the cash it needed to buy Nvidia’s expensive hardware.
Now, it seems Nvidia is taking a more direct approach to solving OpenAI’s speed problem. By partnering with Groq, Nvidia is not only addressing its biggest customer’s needs but also absorbing a potential rival’s technology in the process.