Google I/O 2024
It’s tech conference season. Yesterday was the main Google I/O keynote and, in case you haven’t heard, Google is working on AI.
As usual, The Verge has a nice video breakdown of the most important announcements. (I find sitting through Google keynotes tedious so this was helpful.)
Just a few overall thoughts from the presentation:
- Google Lens gets video. Taking a video of code on a computer screen and having Google explain it is very interesting.
- The Gemini features within Google Workspace look incredible. Creating sheets from a list of emails and analyzing data across many sources will be very useful.
- So many announcements, naming conventions, and code names: Astra, Veo, Gemma, Gemini, Gems, SynthID, yikes. It’s hard to decipher and remember what each one is for, and how they differ, unless you work at Google on one of these teams.
- I find it very interesting to hear the launch timing around each of the announcements. The “later this year” and “in the coming months” timelines really speak to unfinished and potentially reactionary features. The features that are shipping now are the most interesting to me and show what Google has actually prioritized over the past year.
- Nice bit at the end where they use Gemini to count the number of times the word “AI” occurred during the keynote. (121)
In contrast to the OpenAI Spring Update from Monday, all of the demos from Google seemed heavily scripted and clearly pre-recorded. I understand why, given how unpredictable this tech can be, but the pre-recorded demos feel less real and more contrived.
Google has a long history of announcing unfinished work at I/O that often doesn’t even ship to users. I’ll be excited to see which of these announcements make it into shipping products for customers this year. Some very cool ideas and features here; now it’s time to deliver them.