Google didn’t wait long to follow up on its November release of Gemini 3 Pro. Just a month later, the tech giant is rolling out Gemini 3 Flash to the public. This new version aims to be efficient and fast while keeping the “pro-grade” smarts of its bigger sibling. Google claims it offers high-end reasoning at a fraction of the cost, positioning it as the ideal tool for everyday tasks.
The performance numbers tell an interesting story. Gemini 3 Flash jumps significantly ahead of Google’s previous generation, including Gemini 2.5 Pro. But the real headline is how it stacks up against the competition. When OpenAI rushed out GPT-5.2 to counter Google, the company likely didn’t expect a “Flash” model to challenge it so closely.
In the difficult “Humanity’s Last Exam” benchmark, Gemini 3 Flash trailed GPT-5.2 by less than a single percentage point. This test blocked access to tools like web search, so the models had to rely purely on raw intelligence. Flash didn’t just keep up; it actually won in specific categories. In MMMU-Pro, a test focused on multimodal reasoning (such as understanding images and text together), Google’s efficient model scored 81.2 percent, topping GPT-5.2’s result of 79.5 percent.
While benchmarks aren’t everything, these results put pressure on OpenAI. A lighter, efficient model trading blows with a competitor’s heavyweight “Extra High” reasoning mode is a serious development.
Google is making sure people actually use it, too. The company is establishing Gemini 3 Flash as the default model for both the Gemini app and the AI Mode in Search, giving global users immediate free access to the Gemini 3 architecture.
Alongside the new language model, Google added a visual update for users in the United States. The chatbot now integrates “Nano Banana Pro,” the company’s newest image generator. US users can find this feature by selecting “Thinking with 3 Pro” in the model picker and choosing “Create Images Pro,” bringing high-end image creation directly into the conversation stream.