"finishReason" : "MAX_TOKENS" - But Text is Empty

The text is empty, but the finish reason is MAX_TOKENS:

{
  "candidates" : [ {
    "content" : {
      "parts" : [ {
        "text" : ""
      } ],
      "role" : "model"
    },
    "finishReason" : "MAX_TOKENS",
    "index" : 0
  } ],
  "usageMetadata" : {
    "promptTokenCount" : 25090,
    "totalTokenCount" : 25090,
    "promptTokensDetails" : [ {
      "modality" : "TEXT",
      "tokenCount" : 25090
    } ]
  },
  "modelVersion" : "models/gemini-2.5-flash-preview-04-17"
}

I’ve set maxOutputTokens to 65536 in the request.

So why is it stopping with MAX_TOKENS when the total token count is only ~25K?
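For anyone hitting the same thing, a client-side guard can at least surface the failure instead of silently passing an empty string downstream. This is a minimal sketch against the raw JSON shape shown above (the field names match the REST response in this thread; raising a `RuntimeError` is my own choice of error handling, not an official recommendation):

```python
def extract_text(response: dict) -> str:
    """Return the model text from a parsed generateContent JSON body,
    raising if the model was cut off before emitting any text."""
    candidate = response["candidates"][0]
    parts = candidate.get("content", {}).get("parts", [])
    text = "".join(p.get("text", "") for p in parts)
    finish = candidate.get("finishReason")
    # The failure mode in this thread: finishReason is MAX_TOKENS but
    # no visible text was produced.
    if not text.strip() and finish == "MAX_TOKENS":
        usage = response.get("usageMetadata", {})
        raise RuntimeError(
            "Empty completion with finishReason=MAX_TOKENS "
            f"(promptTokenCount={usage.get('promptTokenCount')}, "
            f"totalTokenCount={usage.get('totalTokenCount')})"
        )
    return text
```

On a response like the one pasted above, this raises instead of returning `""`, which makes the failure visible in application logs.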

Hey @Yasar_Arafath
Welcome to the community!
We are aware of the issue. We will look into it and provide an update here.
Appreciate your patience!
Thank you!

Any updates, @Sangeetha_Jana?

@Yasar_Arafath
Thank you for your patience. We currently don’t have any updates to share at this moment. Rest assured, we will provide you with any new information as soon as it becomes available.
Thank you!

Facing a similar issue: the model starts spitting out the same token (or the same series of tokens) until it hits the limit. In this case it’s whitespace that repeats until the output maxes out, but I’ve also had base64 start to loop in the output.

chunk: candidates=[Candidate(
    content=Content(parts=[Part(video_metadata=None, thought=None, code_execution_result=None,
        executable_code=None, file_data=None, function_call=None, function_response=None,
        inline_data=None, text='        <long run of repeated whitespace trimmed>        ')],
        role='model'),
    citation_metadata=None, finish_message=None, token_count=None, avg_logprobs=None,
    finish_reason=<FinishReason.MAX_TOKENS: 'MAX_TOKENS'>,
    grounding_metadata=None, index=0, logprobs_result=None, safety_ratings=None)]
  model_version='models/gemini-2.5-flash-preview-05-20' prompt_feedback=None
  usage_metadata=GenerateContentResponseUsageMetadata(cached_content_token_count=None,
    candidates_token_count=1698, prompt_token_count=4856, total_token_count=11364)
  automatic_function_calling_history=None parsed=None
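One client-side mitigation for this runaway repetition is to watch the accumulated stream and abort once the tail is just the same block of characters looping. A rough heuristic sketch (the window size and repeat threshold here are arbitrary values I picked for illustration, not tuned recommendations):

```python
def is_runaway(accumulated: str, window: int = 64, repeats: int = 8) -> bool:
    """Heuristic: True if the last window*repeats characters are exactly
    one window-sized block repeated, e.g. a long run of whitespace or a
    looping base64 fragment."""
    tail = accumulated[-window * repeats:]
    if len(tail) < window * repeats:
        return False  # not enough output yet to judge
    block = tail[:window]
    return tail == block * repeats
```

In a streaming loop you would append each chunk’s text to a buffer and break out (or cancel the request) as soon as `is_runaway(buffer)` returns True, rather than paying for tokens until MAX_TOKENS is hit.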