Intermittent failure with FinishReason STOP

Greetings!
I'm using Google's Gemini 2.5 Pro, and for the past week I've been seeing a lot of HTTP 500 errors with not enough information to figure out what's going on. I'm not able to share the prompt contents, but the log below includes metadata about the data sizes.
I appreciate any suggestions. Thanks!

Log snippet:
2025-08-20 16:05:50,906 - application.gemini_client - DEBUG - model: gemini-2.5-pro, temperature: 0.1, output max_tokens: 10000, prompt tokens: 1434 [gemini_client.py:38]
2025-08-20 16:05:53,554 - application.gemini_client - WARNING - Gemini API response.text is empty. Full response: sdk_http_response=HttpResponse(
  headers=
) candidates=[Candidate(
  content=Content(
    role='model'
  ),
  finish_reason=<FinishReason.STOP: 'STOP'>,
  index=0
)] create_time=None response_id=None model_version='gemini-2.5-pro' prompt_feedback=None usage_metadata=GenerateContentResponseUsageMetadata(
  prompt_token_count=2104,
  prompt_tokens_details=[
    ModalityTokenCount(
      modality=<MediaModality.TEXT: 'TEXT'>,
      token_count=2104
    ),
  ],
  thoughts_token_count=111,
  total_token_count=2215
) automatic_function_calling_history= parsed=None [gemini_client.py:117]

Code snippet (Python, using google.genai's client.models.generate_content()):

  # schema is a Pydantic model; its JSON schema constrains the structured output
  response = self.client.models.generate_content(
      model=model_name,
      contents=gemini_contents,
      config={
          "response_mime_type": "application/json",
          "response_schema": schema.model_json_schema(),
          "max_output_tokens": max_tokens,
          "temperature": temperature,
      },
  )
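
In case it helps anyone hitting the same thing, a defensive wrapper along these lines can at least detect and retry the empty-text case. This is a minimal sketch against the google-genai SDK; the generate_with_retry helper name, retry count, and backoff values are illustrative assumptions, not my production code or an official recommendation:

  import time

  def generate_with_retry(client, model_name, contents, config, max_attempts=3):
      """Call generate_content and retry when the model returns empty text.

      `client` is a google.genai Client; max_attempts and the backoff below
      are illustrative values only.
      """
      last_response = None
      for attempt in range(1, max_attempts + 1):
          response = client.models.generate_content(
              model=model_name,
              contents=contents,
              config=config,
          )
          # response.text is empty when the candidate has no text parts, even
          # though finish_reason is STOP (exactly the case in the log above).
          if response.text:
              return response
          finish_reason = (
              response.candidates[0].finish_reason if response.candidates else None
          )
          print(f"attempt {attempt}: empty text, finish_reason={finish_reason}")
          last_response = response
          time.sleep(2 ** attempt)  # crude exponential backoff
      raise RuntimeError(f"Empty text after {max_attempts} attempts: {last_response}")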

@Ram_Vemuri, welcome to the community.

Thank you for reporting this. The engineering team is aware of the empty response issue and is actively working on a fix. We will keep you updated as the fix is identified and rolled out.


Has this been fixed for 2.5 Flash? Over the last week I was using it and kept getting finish_reason=<FinishReason.STOP: 'STOP'> with empty responses, which is forcing me to stay on 2.0 Flash :frowning: