Google on Tuesday announced a large number of artificial intelligence (AI) updates and new features at I/O 2025. At the same time, the company shared its long-term vision for AI and how it plans to evolve its current AI product line. Google DeepMind co-founder and CEO Demis Hassabis highlighted new developments in Project Astra and Project Mariner, as well as progress on Gemini Robotics. Google said it eventually wants to build a universal AI assistant.
Project Mariner Is Rolling Out to Select Users
In a blog post, Hassabis highlighted the company's vision to build a universal AI assistant. This assistant is described as "a more general and more useful type of AI" that can understand the user's context, proactively plan tasks, and act on the user's behalf across devices. While this is a long-term plan for Google DeepMind, the first steps were taken with new capabilities in Project Astra and Project Mariner.
Project Astra covers the real-time capabilities of the Gemini models. The first wave of these features has arrived in Gemini Live, which can now access a device's camera and read on-screen content in real time. Project Astra has also upgraded the voice output to more natural-sounding speech with native audio generation. In addition, it is adding improved memory and computer control capabilities.
In a demo shown during the Google I/O 2025 keynote, the upgraded Gemini could speak, be interrupted, and resume the conversation from where it left off, while performing multiple tasks in the background. Using computer control, it also called a business, scrolled through a document, and searched the web for information.
These features are currently being tested by the company and will eventually come to Gemini Live, AI Mode in Search, and the Live application programming interface (API) for developers. They will also arrive in new form factors such as smart glasses.
Next is Project Mariner, which brings agentic capabilities to Gemini. It was launched in December 2024, and Google has been exploring various research prototypes for human-agent interaction. The company has also previewed a browser-based AI agent that can make restaurant reservations and book appointments.
Google said Project Mariner now includes a system of agents that can complete up to 10 different tasks simultaneously. These agents can also be used to buy products and conduct research online. The latest capabilities are now rolling out to Google AI Ultra subscribers in the United States.
Developers using the Gemini API will also get access to its computer use capabilities. In addition, DeepMind plans to bring these capabilities to more products later this year.
Gemini Robotics and World Models
During the keynote, Google also talked about world models. A world model is a highly capable foundation AI model with a deep understanding of real-world physics and spatial intelligence. Such models are considered ideal for training robots through simulation.
Google said it is using the Gemini 2.0 model for its Gemini Robotics division, a platform for training and developing humanoid and non-humanoid robots. Currently, it is testing the platform with its trusted testers.


