Random Endless \n Output in Gemini API 1.5 Pro Responses

I’m encountering an intermittent issue when using the Gemini API 1.5 Pro. Occasionally, the API starts outputting endless newline characters (\n) until it reaches the configured max_output_tokens limit. The behavior is inconsistent: the very same API call usually works fine.

Details:
temperature = 1.0
top_p = 0.9
top_k = 10
max_output_tokens = 8192
model_name = models/gemini-1.5-pro-latest
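For reference, the settings above map onto the `generation_config` dictionary accepted by the Python SDK’s `GenerativeModel`. A minimal sketch (the dictionary keys follow the reported settings; the model call itself is omitted):

```python
# Sampling settings from the report above, in the shape the
# google-generativeai Python SDK accepts as generation_config.
generation_config = {
    "temperature": 1.0,
    "top_p": 0.9,
    "top_k": 10,
    "max_output_tokens": 8192,
}

model_name = "models/gemini-1.5-pro-latest"
```

With the SDK installed, these would typically be passed as `genai.GenerativeModel(model_name, generation_config=generation_config)`.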

```json
{
  "3. XX": {
    "3.1 XXX": "Major Shareholders and Holdings:\n\nAs of December 31, 2023, CC AG held a 74.16% stake in XX AG. …\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n…"
  }
}
```
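Since the degenerate output always ends in a long run of newlines, one pragmatic guard is to check a response for a trailing newline flood before accepting it. A small sketch (`has_newline_flood` and its threshold are my own, not part of the Gemini SDK):

```python
def has_newline_flood(text: str, threshold: int = 20) -> bool:
    """Return True if the text ends in a long run of newline characters,
    which in this bug indicates the model degenerated until it hit the
    max_output_tokens limit."""
    trailing_newlines = len(text) - len(text.rstrip("\n"))
    return trailing_newlines >= threshold
```

A response failing this check can be discarded and the request retried.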

Hi @Daniel_Khuny, if possible, could you provide the prompt so that I can try to reproduce this on my end?

Also experiencing this issue! Please look into this, Google :pray:

Hi @Jack_Kirby, this sometimes happens with large language models when using high-variability sampling settings. One thing you can try is reducing the temperature and top_p values and providing more explicit instructions in your prompt about the desired output format. Hope it helps.

Hello! I have been having this same issue. For me, the way to work around it has been to implement a number of retries, and eventually one out of every 3–5 attempts succeeds. Hopefully they look into it, because I am trying to migrate from ChatGPT to Gemini and this makes it 100% unusable.
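The retry workaround described above can be sketched as a small wrapper. This is a generic retry loop of my own, not official SDK code: `call_api` stands in for whatever zero-argument function performs the actual Gemini request, and the flood check mirrors the symptom reported in this thread.

```python
import time


def has_newline_flood(text: str, threshold: int = 20) -> bool:
    """Heuristic: the bug manifests as a long trailing run of newlines."""
    return len(text) - len(text.rstrip("\n")) >= threshold


def generate_with_retries(call_api, max_retries: int = 5, backoff: float = 2.0) -> str:
    """Call the API and retry whenever the response is a newline flood.

    call_api: zero-argument callable returning the response text
              (e.g. a closure around model.generate_content(...).text).
    """
    for attempt in range(max_retries):
        text = call_api()
        if not has_newline_flood(text):
            return text
        time.sleep(backoff * (attempt + 1))  # simple linear backoff between retries
    raise RuntimeError("All retries returned degenerate newline output")
```

In practice, with roughly one in 3–5 calls succeeding, a `max_retries` of 5 has been enough for the poster’s workaround.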

EDIT: for me the error comes from using the gemini-2.0-flash model. I have tested different top_k and temperature values; it seems like an arbitrary bug. It may be worth noting that this is for a Spanish client and I am making the model work in Spanish.