
Google's AI products processed 480 trillion tokens this month, a 50-fold increase over the previous year, according to CEO Sundar Pichai in a series of announcements made during the Google I/O presentation today in Mountain View, California. The Gemini app has more than 400 million monthly active users, according to Pichai, and more than 7 million developers have built with Gemini in Google AI Studio and Vertex AI.
Most of Google's AI usage comes from AI Overviews in Search, he said. Google is attempting to capture the booming AI assistant market this year from every angle, from AR glasses that can identify objects in view to blurring the lines between generative AI and search engines.
According to Andreessen Horowitz, ChatGPT had 400 million weekly active users in January 2025, of which 175 million use the mobile app.
The best of Google's products don't come cheap: Gemini subscription plans are now split into Pro and Ultra tiers. The $19.99 AI Pro plan includes a collection of products and higher rate limits than the free edition. The most expensive ($249.99) Ultra plan includes the highest rate caps and early access to products like the upcoming Gemini 2.5 Pro with Deep Think reasoning, as well as the full suite of AI products, including the filmmaking tool Flow.
AI Mode further melds search engines with generative AI
Starting today, Google Search will offer an AI Mode button that hands internet searches over to Gemini, pulling data from various sources to answer multi-part questions. Pichai described AI Mode as "a complete rethinking of search" at the I/O presentation.
More people are asking longer questions of Google Search since the addition of AI Overviews, he said. This shift toward longer queries could gradually change how people interact with the web, favoring the kinds of conversational questions generative AI is most adept at over keyword searches.
Starting May 20, AI Mode will be available for free in the US, with a gradual rollout set to continue in the coming weeks.

At Google I/O, vice president of Search Liz Reid predicted that "many of AI Mode's cutting-edge features and capabilities will be incorporated directly into the main Search experience."
Essentially, Google is now saying only AI can successfully do what its search engine once did: rank the whole web.
If a user chooses to connect apps like Gmail, AI Mode can also string together searches and provide personalized recommendations. Google suggests AI Mode could be used to access local services, restaurant reservations, shopping, and more in one place.
Google Search will also connect to the multimodal AI Project Astra. Users can talk to the AI and point their camera in AI Mode or Lens to receive search-related answers about the world around them.
Gemini 2.5 update includes Deep Think reasoning
A reasoning mode for Gemini 2.5 Pro, Deep Think, is on its way. Compared with the base model, Deep Think takes longer to produce a more in-depth answer. It will be accessible only to a limited group of testers before an open launch in the Gemini API at an unspecified time.
"We're taking more time to conduct more frontier safety evaluations and get input from safety experts," said Google DeepMind CEO Demis Hassabis.
SEE: A new study found that 55% of business leaders now regret laying off employees as a result of generative AI.
Fresh from winning the 2024 Nobel Prize in Chemistry for the AI model AlphaFold2, Hassabis announced that both Gemini 2.5 Flash and 2.5 Pro will have updated versions available this summer. Both show better efficiency and performance, according to Google. Additionally, the Gemini Live API now supports native audio-out dialogue and audiovisual input, which can capture subtle nuances of language, including whispers, as heard at I/O. Plus, both models have been hardened against prompt injection attacks.
Users will be able to tune 2.5 Pro with thinking budgets, previously available only with 2.5 Flash, to limit the number of tokens the AI uses.
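As a rough sketch of what capping reasoning tokens looks like in practice, the snippet below builds a `generateContent`-style request body with a thinking budget. The `generationConfig.thinkingConfig.thinkingBudget` field name and payload shape are assumptions based on the publicly documented Gemini REST API, not details announced in the keynote:

```python
import json

def build_request(prompt: str, thinking_budget: int) -> str:
    """Build a Gemini generateContent request body that caps the
    model's internal reasoning via a thinking budget (assumed field
    names based on the public Gemini REST API)."""
    body = {
        "contents": [{"parts": [{"text": prompt}]}],
        "generationConfig": {
            # 0 disables thinking entirely; larger values allow more
            # reasoning tokens at higher cost and latency.
            "thinkingConfig": {"thinkingBudget": thinking_budget},
        },
    }
    return json.dumps(body)

payload = build_request("Summarize the I/O keynote.", 1024)
print(payload)
```

The budget trades answer depth for speed and cost: a low budget suits quick lookups, while a high one leaves room for multi-step reasoning.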
The AI coding assistant Jules is leaving Google Labs and entering public beta today. An asynchronous coding agent with GitHub integration, Jules can work on real code bases, a good way to slot into a developer's existing projects, as long as the AI doesn't introduce any bugs.
Additionally, Google built an experimental frontier model called Gemini Diffusion, which applies the "refine from random noise" technique used in image and video generation to producing text. Across Gemini models, diffusion should eventually lead to lower latency. Interested users can sign up for a waitlist for a demo.
Gemini Live enhancements, Imagen 4, Veo 3, and more come to the Gemini app
Google is also aiming for an AI assistant that can seamlessly fit into any aspect of life, much like its science fiction counterparts. The company's strategy is to make the Gemini app more "personal, proactive, and powerful," said Google Labs VP Josh Woodward. To that end, the company is introducing a number of new features to the Gemini app, some of which will fall under the Project Mariner banner and be accessible with the Gemini Ultra subscription. Google will, for instance, let Gemini interact with websites to make reservations.
With the idea of an AI assistant come opt-in toggles for connecting Gemini to Gmail and other apps so the assistant can tailor its responses to your interests or even mimic your diction in generated emails.
Other I/O announcements involving AI assistants included:
- The Gemini apps on Android and iOS now offer Gemini Live screen sharing for free.
- Deep Research will let you upload your own files. Soon, Google Drive and Gmail will also be accessible for pulling in data.
- Content can be converted into quizzes, infographics, or web pages using Canvas.
- The Imagen 4 image generator is available in the Gemini app starting today, with improved text generation capabilities.
- Veo 3, the most advanced video model, is available today. It includes native audio generation, with dialogue, background sounds, and sound effects.
- The Gemini Live camera and screen sharing feature is out on Android today.
The team demonstrated live translation at Google I/O via Google Meet. With just a slight delay behind the original speaker, the AI voice acted as a translator between English and Spanish. AI translation is available in Google Meet starting today, in English and Spanish, for subscribers. Additional languages will arrive in the coming weeks, and Enterprise accounts will get the feature later this year.
Google also unveiled Beam, a video communications platform that converts 2D video into a 3D "light field" display for more lifelike images. The first Beam devices, built in collaboration with HP, will be available later this year.
Gemini Code Assist benefits from Gemini 2.5 upgrades
In the world of coding, Gemini Code Assist for individuals, formerly in public preview, is now generally available and connected to Gemini 2.5. Gemini Code Assist for GitHub, a code review agent, has also transitioned into general availability. Both the free and subscription versions run on Gemini 2.5.
SEE: Google told some workers to return to the office three days a week or risk their jobs.
Flow produces AI-generated videos with consistent, editable scenes
Google announced several video tools for professional filmmakers. Flow, a new tool for creatives, combines the capabilities of Veo, Imagen, and Gemini. Prompts can be used to create fully animated scenes, complete with sound. Filmmakers can insert individual objects and other elements into scenes, lengthen or shorten shots, and perform other editing tasks entirely through the prompt window. The demonstration at Google I/O leaned into the dreamlike atmosphere of many AI-generated videos, showing a giant chicken carrying a car into the sky.
In the US, Flow is available with the Google AI Pro and Google AI Ultra plans.
Google shows off Android XR glasses and partnerships
Finally, Google demonstrated AI features on Android XR headsets and glasses. In one demonstration, the company showed how Gemini on smart glasses can set up a meet-up at a coffee shop, provide directions, answer questions about what the wearer sees, and recall items from earlier in the day. A live demonstration of translating between three languages, with English as an intermediary language both speakers understood, worked with a slight delay for one round of back-and-forth conversation but stalled as the conversation continued.

The Android XR system will also power Samsung's Project Moohan, coming later this year. Moohan is a virtual reality headset with an "infinite screen" that combines smartphone functionality with a TV-like display.