Google is once again upping the ante when it comes to its AI-infused products and services.
It shouldn’t come as a surprise when I say that Google went all in on AI – again – at this year’s I/O developer conference.
Businesses worldwide are launching AI products and services on an almost weekly basis as the technology cements itself as an integral part of how we interact with tech. And companies such as Google and OpenAI are at the forefront.
At the conference yesterday (20 May), Google announced that it is adding its Gemini AI assistant to Chrome, giving the browser the ability to clarify complex information and summarise content on a webpage.
The tech giant is also testing a feature that lets users try on outfits virtually. The feature is a “first of its kind”, said Google, allowing users to choose from billions of items to try on.
The generation model for fashion “understands the human body and nuances of clothing”, it claims.
And there’s so much more. From AI-infused everything to prototype augmented reality (AR) glasses, here’s a rundown of some of the big announcements Google made.
Competitor to Sora
Veo 3 is an AI video generator that can also create and incorporate AI-generated audio. Accepting both text and image prompts, the generator outputs in 4K and lets users add sound effects, ambient noise and dialogue.
The AI tool competes with OpenAI’s Sora, launched late last year, but one-ups it with its audio capabilities.
Yesterday, Google also launched Flow, an AI filmmaking tool made using Veo, Imagen and Gemini.
Veo offers “state-of-the-art” generative tools for video, Gemini makes prompting intuitive and Imagen offers text-to-image capabilities, the tech giant explained in a blog post.
While still in its “early days”, Flow allows filmmakers to control shots and camera angles in AI-generated scenes, and to build on and edit existing shots.
Upgrades to existing projects
First unveiled at last year’s I/O conference, Project Astra was demonstrated using smart glasses to showcase the capabilities of real-time multimodal AI. Now, Astra will provide new experiences in Search and Gemini.
With AI Mode, users can ask questions about what they are seeing through their smartphone’s camera. Astra streams the live video and audio into an AI model, which responds to their questions.
Google hasn’t announced any launch details for its Project Astra AR glasses yet, although some reporters who tried out the Gemini-powered glasses say the company “might actually pull AR glasses off”.
Project Mariner, also launched last year, is getting its own set of upgrades. The experimental AI agent, which browses and uses websites, will now be brought to the Gemini API and Vertex AI, all part of Google’s growing effort to make AI tools a vital part of our interaction with tech.
Sniffing out AI
If it’s not clear already, AI has seeped into most of the ways content is created, thanks in large part to many of these tech giants. And as generative AI tools get more advanced, it becomes harder and harder to detect their use in content.
In response, Google created SynthID, a tool that embeds “imperceptible” watermarks in content made with Google AI. The company has now launched SynthID Detector, a verification portal to detect the use of Google AI across different modalities, including images, video, audio and text.
According to the company, more than 10bn pieces of content have already been watermarked using SynthID.