Google on Tuesday showcased a number of new features for the Gemini 2.5 family of artificial intelligence (AI) models at Google I/O 2025. The Mountain View-based tech giant introduced an enhanced reasoning mode called Deep Think, which is powered by the Gemini 2.5 Pro model. It also unveiled native audio output, a new capability for natural, human-like speech that will be available directly through the application programming interface (API). In addition, the company is bringing thought summaries and thinking budgets to the latest Gemini models for developers.
Gemini 2.5 Pro Is on Top of the LMArena Leaderboard
In a blog post, the tech giant detailed all the new capabilities and features it will bring to the Gemini 2.5 series of AI models over the next few months. Earlier this month, Google released the latest version of Gemini 2.5 Pro with improved coding capabilities. The latest model also holds the top position on the WebDev Arena and LMArena leaderboards.
Now, Google is improving the AI model with the Deep Think mode. The new reasoning mode allows Gemini 2.5 Pro to consider multiple hypotheses before responding. The company says it uses a different reasoning technique from the thinking versions of older models.
Based on internal testing, the tech giant shared the reasoning mode's benchmark scores across various parameters. Notably, Gemini 2.5 Pro Deep Think is claimed to score 49.4 percent on the 2025 USAMO, one of the most difficult mathematics benchmarks. It also scores competitively on LiveCodeBench v6 and MMMU.
Deep Think is currently being tested, and Google says it is conducting safety evaluations and getting input from safety experts. At present, the reasoning mode is only available to trusted testers via the Gemini API. There is no word on its release date.
Google also announced new capabilities for the Gemini 2.5 Flash model, which was released just a month ago. The company said the AI model has improved on key benchmarks for reasoning, multimodality, code, and long context. It is also more efficient, using 20-30 percent fewer tokens, the company claims.
This new version of Gemini 2.5 Flash is currently available to developers in preview through Google AI Studio. Enterprises can access it via the Vertex AI platform, and individuals can find it in the Gemini app. Notably, the model will be made generally available for production in June.
Developers with direct API access will now get a new feature with the Gemini 2.5 series of AI models. The company is introducing a preview version of native audio output, which can generate speech that is more expressive and human-like. Google said the feature allows users to control the tone, accent, and style of the generated speech.
The initial version of the capability comes with three features. The first is affective dialogue, where the AI model can detect emotion in the user's voice and respond accordingly. The second is proactive audio, which lets the model tune out background conversations and respond only when it is spoken to. And finally, thinking, which brings Gemini's reasoning capabilities to speech generation so the model can talk through more complicated questions.
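These voice capabilities are exposed through Google's streaming Live API. The snippet below is a minimal sketch assuming the google-genai Python SDK; the model identifier, voice name, and exact configuration fields are assumptions based on the preview and may differ.

```python
import asyncio
from google import genai
from google.genai import types

client = genai.Client()  # assumes GEMINI_API_KEY is set in the environment

# Ask for spoken responses and pick a prebuilt voice (voice name is illustrative).
config = types.LiveConnectConfig(
    response_modalities=["AUDIO"],
    speech_config=types.SpeechConfig(
        voice_config=types.VoiceConfig(
            prebuilt_voice_config=types.PrebuiltVoiceConfig(voice_name="Kore")
        )
    ),
)

async def main():
    # Model name is an assumption based on preview naming at the time of writing.
    async with client.aio.live.connect(
        model="gemini-2.5-flash-preview-native-audio-dialog", config=config
    ) as session:
        await session.send_client_content(
            turns=types.Content(role="user", parts=[types.Part(text="Hello there!")])
        )
        async for message in session.receive():
            if message.data:
                pass  # message.data carries raw audio bytes to play back or save

asyncio.run(main())
```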
In addition, the 2.5 Pro and Flash models in the Gemini API and Vertex AI will also feature thought summaries. These essentially take the model's raw thoughts, which were previously visible only in Gemini's reasoning models, and organise them. Now, Google will show a detailed summary alongside each answer, with headers, key details, and information about model actions.
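In practice, a thought summary can be requested alongside the answer. Here is a minimal sketch assuming the google-genai Python SDK; the preview model ID is an assumption and the exact fields may change.

```python
from google import genai
from google.genai import types

client = genai.Client()  # assumes GEMINI_API_KEY is set in the environment

response = client.models.generate_content(
    model="gemini-2.5-pro-preview-05-06",  # assumed preview model id
    contents="Why do leap years exist? Answer briefly.",
    config=types.GenerateContentConfig(
        thinking_config=types.ThinkingConfig(include_thoughts=True)
    ),
)

# Parts flagged as thoughts carry the organised summary; the rest is the answer.
for part in response.candidates[0].content.parts:
    if part.thought:
        print("Thought summary:\n", part.text)
    else:
        print("Answer:\n", part.text)
```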
In the coming weeks, developers will also be able to use thinking budgets with Gemini 2.5 Pro. These will allow them to decide how many tokens the model can spend thinking before it responds. Finally, Project Mariner's computer use agent capabilities will also be added to the Gemini API, and soon to Vertex AI.
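A thinking budget simply caps those reasoning tokens per request. A minimal sketch, again assuming the google-genai Python SDK and that the parameter carries over to the 2.5 models as announced:

```python
from google import genai
from google.genai import types

client = genai.Client()  # assumes GEMINI_API_KEY is set in the environment

response = client.models.generate_content(
    model="gemini-2.5-flash-preview-05-20",  # assumed preview model id
    contents="Summarise the rules of chess in three sentences.",
    config=types.GenerateContentConfig(
        # Cap how many tokens the model may spend on internal reasoning;
        # a budget of 0 disables thinking on models that allow it.
        thinking_config=types.ThinkingConfig(thinking_budget=512)
    ),
)
print(response.text)
```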


