Gemini API either returns a read timeout error, generates nothing, or only gives me a 1/4 or half of what I need

I've been using Gemini to correct long recordings (a long .mp3 file goes in, the Whisper tiny.en model transcribes it, and Gemini 2.5 Flash cleans up the text, usually producing a really good transcription). When inputting a 2-hour file with around 20k+ words (a little less than 28,000 tokens including the prompt), it runs for around 4 minutes and 15 seconds and then gives the error 'NoneType' object has no attribute 'strip' (which I believe means no text was output), or sometimes returns only half or a quarter of the transcription.
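I believe the AttributeError comes from my script calling .strip() on response.text, which the google-genai SDK returns as None when no content comes back. A minimal sketch of the guard I mean (response is the result of the generate_content call shown further down):

text = response.text
if text is None:
    # Nothing usable came back; check why before touching the text
    if response.candidates:
        print("finish_reason:", response.candidates[0].finish_reason)
    else:
        print("no candidates returned")
else:
    cleaned = text.strip()  # safe now that text is known to be a str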

Another file, 1 hour 38 minutes long (around 17,000 tokens), used to complete perfectly fine, and now it suffers the same fate, with half the text cut off.

I ran the same Whisper transcription in AI Studio with the same model and the exact same settings (temperature of 0.1, Grounding with Google Search on, all safety filters turned off), and got a partially finished transcript on the first run and a full transcript after a rerun.
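For reference, this is how I understand those AI Studio settings translate to the SDK (a sketch using the google-genai types; the values are the ones from my runs):

from google.genai import types

config = types.GenerateContentConfig(
    temperature=0.1,
    # Grounding with Google Search, enabled as a tool
    tools=[types.Tool(google_search=types.GoogleSearch())],
)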

I don't know why this could be; I'm wondering if the thinking is taking up a lot of tokens. The final token count was well within 250k, at only 79,000 tokens for the prompt, thinking, and response according to AI Studio.
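To see where the tokens are actually going, I can print the usage metadata after a run (field names are from the google-genai SDK's response object):

meta = response.usage_metadata
print("prompt tokens:  ", meta.prompt_token_count)
print("thinking tokens:", meta.thoughts_token_count)
print("output tokens:  ", meta.candidates_token_count)
print("total tokens:   ", meta.total_token_count)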

My setup is quite literally just a Python script running on my computer.

Another error I'm getting is that the read operation timed out. I tried increasing the timeout, but that isn't working.
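One workaround I may try is retrying with exponential backoff when the read times out. A sketch (the timeout surfaces from the SDK's underlying HTTP transport, so I'm catching broadly here; three attempts is just an arbitrary choice):

import time

response = None
for attempt in range(3):
    try:
        response = client.models.generate_content(
            model="gemini-2.5-flash",
            contents=correction_prompt,  # config omitted here for brevity
        )
        break
    except Exception as exc:  # e.g. a read timeout from the transport layer
        print(f"attempt {attempt + 1} failed: {exc}")
        time.sleep(2 ** attempt)  # back off: 1s, 2s, 4s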

Any help is appreciated. I will be running the code a few more times, this time with a thinking budget.
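This is how I plan to cap the thinking (a sketch using the SDK's ThinkingConfig; the 4096 budget is just a value I picked to test with):

from google.genai import types

config = types.GenerateContentConfig(
    thinking_config=types.ThinkingConfig(thinking_budget=4096),
)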

The request is being sent, some server-side errors occur, and a lot of tokens are being sent as input. Here is the call:

from google import genai
from google.genai import types

# The client picks up the API key from the GEMINI_API_KEY environment variable
client = genai.Client()

response = client.models.generate_content(
    model="gemini-2.5-flash",
    contents=correction_prompt,
    config=types.GenerateContentConfig(
        http_options=types.HttpOptions(
            timeout=600000000  # per-request timeout, in milliseconds
        ),
        safety_settings=[
            types.SafetySetting(
                category=types.HarmCategory.HARM_CATEGORY_HATE_SPEECH,
                threshold=types.HarmBlockThreshold.BLOCK_NONE,
            ),
            types.SafetySetting(
                category=types.HarmCategory.HARM_CATEGORY_HARASSMENT,
                threshold=types.HarmBlockThreshold.BLOCK_NONE,
            ),
            types.SafetySetting(
                category=types.HarmCategory.HARM_CATEGORY_CIVIC_INTEGRITY,
                threshold=types.HarmBlockThreshold.BLOCK_NONE,
            ),
        ],
    ),
)
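Since the failures happen minutes into a single long response, I am also considering streaming the output so partial text is not lost if the connection drops. A sketch using the SDK's streaming call (same model and prompt as above; config omitted for brevity):

chunks = []
for chunk in client.models.generate_content_stream(
    model="gemini-2.5-flash",
    contents=correction_prompt,
):
    if chunk.text:  # chunks can arrive with no text
        chunks.append(chunk.text)
corrected = "".join(chunks)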

Hello,

Welcome to the Forum!!

Could you please share your prompt with me so that I can try to reproduce your issue?