As expected, Google’s Pixel 9 event was full of Gemini AI features. Google didn’t just announce new Gemini functionality; it said those features will be available on the Pixel 9, Pixel Watch 3, and Pixel Buds Pro 2 in the coming weeks, so you only have to wait for the devices to hit stores. That was an obvious jab at Apple and the iPhone, since Apple Intelligence will roll out in phases starting sometime this fall.
At the end of the keynote, Google also offered a big teaser of what’s coming to Gemini in the months ahead, detailing a few exciting features that aren’t available from other genAI providers. That puts pressure on OpenAI to roll out a new ChatGPT upgrade very soon.
Recent reports detailing the next big ChatGPT upgrade already tease that OpenAI might be working on features similar to Google’s plans for Gemini. Even Sam Altman posted a ChatGPT teaser on X, suggesting the next big upgrade might be close.
Rick Osterloh delivered Google’s “one more thing” announcement that closed this week’s press event. That’s when he revealed that Gemini Live is part of Project Astra, Google’s multimodal AI assistant demoed at I/O 2024.
Gemini Live lets you hold a continuous voice conversation with the AI in a manner that mimics human interaction. The feature is now available to Gemini Advanced subscribers.
In the future, it will gain the ability to see what you see through your camera. It will also be able to look at your screen and use the contextual information it finds there.
OpenAI demoed its own Voice Mode for GPT-4o in May, a day before Google had a chance to show Project Astra to the world. Voice Mode has just rolled out to a limited number of ChatGPT Plus users and is coming to more users later this year.
That wasn’t the only exciting teaser in Osterloh’s closing speech. He revealed that Google is “evolving Gemini to be even more agentive, to tackle complex problems with advanced reasoning, planning, and memory. So it’ll be able to think multiple steps ahead, and Gemini will get things done on your behalf under your supervision. That’s the promise of a true AI assistant.”
If that sounds familiar, it’s because you might have seen the rumors detailing the next big ChatGPT upgrade. Known internally as Q* or Strawberry, the upgrade is seen by some as the GPT-5 model coming to OpenAI’s chatbot. Reports say the chatbot will gain more advanced reasoning capabilities, allowing it to plan multiple steps ahead.
Moreover, the upcoming ChatGPT model might be even better at researching the web for you. Unsurprisingly, that’s what’s coming to Gemini too.
Osterloh said, “Gemini will be able to assist as your researcher, saving you tons of time by using information from across the web to create a research report that’s tailored to your exact questions. What used to take you hours now takes minutes.”
I’ve been using ChatGPT for that kind of research. However, researching the web with OpenAI’s chatbot doesn’t always produce the results I want; I have to keep tweaking my prompts and occasionally correct the chatbot.
Rumors about the GPT-5 upgrade, or whatever OpenAI ends up calling it, might have convinced Google to tease these particular Gemini features during the Pixel 9 launch event. But the Gemini research features that Osterloh teased don’t seem to be vaporware. He showed an example of Gemini being given a complex task and producing a detailed report in a Google Doc, complete with sources from the web.
It’s unclear when these Gemini capabilities will roll out. But it’s very clear that the Pixel 9 event wasn’t just about hardware. Google wanted to show off its Gemini capabilities right now, before Apple unveils the iPhone 16 series and before OpenAI announces the next big upgrade for ChatGPT.
You can watch Osterloh’s remarks from the Made by Google keynote below: