Google AI Studio + Notebook LM

Hi,

I believe it would be incredibly valuable if:

  • Notebook LM could understand and process visual content (images and videos) more effectively. This would allow users to include visual information within their research and knowledge base.
  • Google AI Studio could leverage the sources and understanding within Notebook LM, potentially including visual context. Furthermore, the ability for Google AI Studio (or a related tool) to analyze a user’s screen or video feed in conjunction with the information in Notebook LM would unlock powerful new use cases.

Specifically, this is what I want to do:

Imagine a user who has uploaded software documentation (text and images) into Notebook LM. They are now using that software and want step-by-step guidance on a specific task. If Google AI Studio could:

  1. Access the relevant documentation sources uploaded in Notebook LM.
  2. Simultaneously “see” the user’s screen (or a recording of it).

Then, Google AI Studio could provide highly accurate and context-aware instructions, referencing both the official documentation and the user’s current view of the software interface. This would be immensely helpful for learning new software, troubleshooting issues, and creating tutorials.
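As a rough illustration of the flow above, here is a minimal sketch using the real `google-generativeai` SDK. Since NotebookLM has no public API today, the documentation excerpts are assumed to be exported or pasted in manually, and the screenshot bytes are a placeholder; the helper name `build_request` is hypothetical:

```python
# Hypothetical sketch: combine NotebookLM-style documentation excerpts with a
# screenshot of the user's current screen into one multimodal Gemini request.

# Documentation excerpts the user had uploaded to NotebookLM (assumed manual export).
doc_excerpts = [
    "To export a report, open File > Export and choose PDF.",
    "The Export dialog lets you pick a page range before saving.",
]

def build_request(doc_excerpts, screenshot_png: bytes, question: str):
    """Assemble the mixed text+image 'parts' list the Gemini SDK accepts."""
    parts = ["You are a step-by-step software guide. "
             "Ground every step in the docs below.\n"]
    parts += [f"DOC: {d}" for d in doc_excerpts]
    # Inline image part: the user's current screen, as described in step 2.
    parts.append({"mime_type": "image/png", "data": screenshot_png})
    parts.append(f"QUESTION: {question}")
    return parts

parts = build_request(doc_excerpts, b"\x89PNG...",  # placeholder image bytes
                      "How do I export this report as a PDF?")

# With an API key configured, the request would be sent like this:
# import google.generativeai as genai
# genai.configure(api_key="YOUR_KEY")
# model = genai.GenerativeModel("gemini-1.5-flash")
# print(model.generate_content(parts).text)
```

The point of the sketch is only the shape of the request: official docs and the live screen arrive in a single prompt, so the model can reference both at once.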

5 Likes

Impeccable :+1: :+1: :grinning_face:
Steadily improving.

1 Like

yup… just had a similar idea of building up LM for specific learning of devices/apps etc… then build something quick in AI Studio with a voice chat bot to be there as a specific helper with the program… for some reason Gemini can go on YouTube, search the web… And I can make my Notebook public, share it via URL… yet I can’t have it accessed via an app with Gemini… seems VERY odd?


1 Like

I agree that it seems odd. I think it’s fair to keep in mind that all of this is a work in progress, one that has been progressing at an amazing rate. I believe those building these amazingly powerful tools are headed toward the things that seem logical in our minds.