Using Gemini API to extract information from videos

I am interested in using the Gemini API with the Gemini 1.5 Flash model to extract information from videos. I have a few questions:

  1. How can I save a trained GenerativeModel so that I can call it directly next time?

  2. I am frequently encountering the following error when using the API:

429 POST https://generativelanguage.googleapis.com/v1beta/models/gemini-1.5-flash:generateContent?%24alt=json%3Benum-encoding%3Dint: Resource has been exhausted (e.g. check quota).

However, I have checked my Cloud quota and I am not yet at the 1500 RPD limit.

  3. Sometimes I encounter the following error when processing the same video:

400 POST https://generativelanguage.googleapis.com/v1beta/models/gemini-1.5-flash:generateContent?%24alt=json%3Benum-encoding%3Dint: Please use fewer than 3600 images in your request to models/gemini-1.5-flash.

However, this error does not always occur for the same video.

I would appreciate any help that you can provide.

Welcome to the forums!

I’m not sure exactly what you mean by saving a trained model. Are you asking whether, once the model has read in the video and tokenized it, there is a way to have it reference those tokens again in the future?

Exactly what are you hoping this will do?

Keep in mind that if you’re using the File API, an uploaded video stays available for 48 hours under the same file reference, so you can reuse that reference instead of uploading the video again.
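If it helps, here is a minimal sketch of that pattern, assuming the google-generativeai Python SDK, an API key you supply, and a hypothetical my_video.mp4:

```python
import time
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")  # assumption: replace with your key

# Upload the video once; the File API returns a reference such as "files/abc123".
video_file = genai.upload_file(path="my_video.mp4")  # hypothetical path

# Wait until the file has finished processing before using it in a prompt.
while video_file.state.name == "PROCESSING":
    time.sleep(5)
    video_file = genai.get_file(video_file.name)

model = genai.GenerativeModel("gemini-1.5-flash")
response = model.generate_content([video_file, "Summarise this video."])
print(response.text)

# Within the 48-hour retention window you can fetch the same reference by name
# instead of uploading the video again.
same_file = genai.get_file(video_file.name)
```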

You can also look into context caching, which lets you set up the context once in advance; each time you reference it afterwards you save processing time and some token cost.
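Here is a minimal sketch of context caching with the SDK’s caching module; the pinned model version, display name, TTL, and file path are assumptions for illustration (caching requires a specific model version such as gemini-1.5-flash-001):

```python
import datetime
import google.generativeai as genai
from google.generativeai import caching

genai.configure(api_key="YOUR_API_KEY")  # assumption: replace with your key

# Upload (and wait for) the video file as in the previous sketch.
video_file = genai.upload_file(path="my_video.mp4")  # hypothetical path

# Cache the video tokens once; later requests reference the cache instead of
# re-sending the full video context each time.
cache = caching.CachedContent.create(
    model="models/gemini-1.5-flash-001",  # caching requires a pinned model version
    display_name="my-video-cache",        # hypothetical name
    contents=[video_file],
    ttl=datetime.timedelta(hours=2),      # how long to keep the cached context
)

model = genai.GenerativeModel.from_cached_content(cached_content=cache)
response = model.generate_content("List the main topics covered in the video.")
print(response.text)
```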

How are you checking your quota, exactly? Keep in mind that the 1,500 RPD figure is only the daily cap; there is also a separate requests-per-minute limit, so a burst of calls can return a 429 even when you are well under the daily total.
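In the meantime, a common workaround is to catch the 429 and retry with exponential backoff. A minimal sketch, assuming the google-generativeai SDK (which surfaces HTTP 429 as google.api_core’s ResourceExhausted) and hypothetical retry settings:

```python
import time
import google.generativeai as genai
from google.api_core.exceptions import ResourceExhausted  # raised on HTTP 429

genai.configure(api_key="YOUR_API_KEY")  # assumption: replace with your key
model = genai.GenerativeModel("gemini-1.5-flash")

def generate_with_backoff(contents, max_retries=5, initial_delay=10):
    """Call generate_content, doubling the wait after each 429."""
    delay = initial_delay
    for attempt in range(max_retries):
        try:
            return model.generate_content(contents)
        except ResourceExhausted:
            if attempt == max_retries - 1:
                raise  # give up after the final attempt
            time.sleep(delay)
            delay *= 2

response = generate_with_backoff(["Describe the key events in this video."])
print(response.text)
```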

How long (in seconds) is your video, and are you including multiple videos in a single request? Video is sampled at about one frame per second, so the 3,600-image limit works out to roughly an hour of footage per request.