Google I/O Discussion Topic

On May 14th*, Google I/O is taking place. The Google Keynote will start at 10:00 AM PT, followed by the Developer Keynote at 1:30 PM PT. I'm creating this topic for speculation about what you think will be revealed, and then, while or after the event takes place, discussion of what actually got announced.

10 Likes

May 14th :wink:

Yeah I'm really curious to see what all is gonna be shown off!

3 Likes

Thanks for the catch :laughing:.

2 Likes

If there isn't something about integrating Gemini into Google/Nest devices, I would consider that a huge drop of the ball.

I've all but stopped using my "smart" speakers because it's become absurdly clear how dumb they are.

4 Likes

Really looking forward to Gemini 1.5 being made available to the public in Gemini Advanced.

I think it's about time!

3 Likes

I personally am hoping for a text-to-video model like Imagen Video but more advanced.

3 Likes

Google AI Studio and possible new features

We're now less than 3 hours away from the start of the event, and I'm more hyped than ever, especially after seeing the teaser from yesterday.

2 Likes

For anyone who might have missed the keynote:

2 Likes

Somewhat underwhelmed.

What I liked

  • Gemini Flash
    • Very promising-looking model; it compares well to Ultra and Pro and is a great price/performance proposition, especially being multimodal. OpenAI not releasing their gpt-3.5-with-vision model yesterday now looks like a mistake.
  • Gemini Nano
    • Interesting use cases like transcribing phone calls, automatically adding calendar events, etc.
    • Would love to see some benchmarks
    • Curious whether this model will be available to devs via an API.
    • If it is meant to run on edge devices, can we expect the model to be open? The weights will invariably leak.
  • LearnLM

What I didnā€™t see but had expected/hoped

  • A more powerful model
    • Gemini 1.5 Ultra
    • Gemini 2.0 Pro
    • etc.
  • Gemini in Nest Mini and Nest Hub devices
    • C'mon, Google! This would be a transformative game changer, and skipping it is a huge ball-drop. Is it a price issue? Please give us a home assistant we actually want to use now!
5 Likes

Lots of features were announced, but a lot of them were things developers are interested in building themselves :sweat_smile:.

I like the competition announcement though!

Also, the number of tools and features spread across 20 different platforms is confusing, and I think I've already lost track of what got announced where.

So, are the 1.5 Pro models going to be able to natively accept multimodal input and produce multimodal output in a way that we can leverage through the API, or what?
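For what it's worth, multimodal *input* is already exposed through the `google-generativeai` Python SDK; whether multimodal *output* lands in the API remains to be seen. Here's a minimal sketch, assuming the current SDK and placeholder values for the API key and image path:

```python
# Minimal sketch using the google-generativeai Python SDK.
# Assumes `pip install google-generativeai pillow`; the API key
# and image path below are placeholders.
import google.generativeai as genai
from PIL import Image

genai.configure(api_key="YOUR_API_KEY")

# Swap in "gemini-1.5-flash-latest" to try the cheaper Flash model.
model = genai.GenerativeModel("gemini-1.5-pro-latest")
image = Image.open("screenshot.png")

# Mixed text + image input in a single request; the response is text-only.
response = model.generate_content(["What does this screenshot show?", image])
print(response.text)
```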

EDIT: I also forgot to mention that I'm excited about the WebGPU integration.

2 Likes

My favorite takeaways from this were AlphaFold 3 and GameFace.

The AlphaFold model will be a huge step towards curing diseases and saving lives. It is a major step towards actually curing Alzheimer's, Parkinson's, Huntington's disease, amyotrophic lateral sclerosis (ALS), prion diseases, cystic fibrosis, and Type 2 diabetes.

And GameFace, the AI that lets you control a PC using your face, is a massive step towards getting rid of the "mouth stick" commonly used by people who are paralyzed :heart:

This will save lives, and I'm happy to be along for the ride!

5 Likes