Dear Google AI Community,

I recently discovered a bug while chatting with Gemini Pro. I hope the information below helps you improve the model!

**Description:**
When uploading a large JSON file (e.g., experimental logs containing multiple arrays of floats and nested dictionaries) and asking the model to analyze the results, the model enters a failure state. Instead of outputting the analysis, it gets trapped in an internal generation loop, eventually terminating with an empty output, a [NO CONTENT FOUND] error, or an abrupt cutoff.
**Steps to Reproduce:**

1. Start a chat session using the Gemini API (Pro) or the web interface with file upload capabilities.
2. Upload a dense JSON file (e.g., 500+ lines of nested lists, floats, and dictionaries representing machine learning loss histories).
3. Prompt the model with a request that requires it to read and interpret the numerical data within the JSON (e.g., “Do these results make sense based on standard ML theory?”).
4. Observe the model’s generation process.
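To make step 2 reproducible without sharing my original logs, here is a small Python sketch that generates a comparable dense JSON file. The structure (run names, `config`, `history` keys, and the loss-curve shapes) is a hypothetical stand-in for my actual experimental logs, not the exact file that triggered the bug:

```python
import json
import math
import random

def make_repro_json(path="repro_logs.json", n_runs=20, n_epochs=100, seed=0):
    """Write a dense JSON file of nested dicts and float arrays,
    loosely resembling ML loss histories across multiple runs."""
    rng = random.Random(seed)
    runs = {}
    for run in range(n_runs):
        # Decaying training loss with noise, plus a noisier validation curve.
        train_loss = [math.exp(-0.05 * e) + rng.uniform(0, 0.02)
                      for e in range(n_epochs)]
        val_loss = [l + rng.uniform(0, 0.05) for l in train_loss]
        runs[f"run_{run}"] = {
            "config": {
                "lr": 10 ** rng.uniform(-4, -2),
                "batch_size": rng.choice([32, 64, 128]),
            },
            "history": {"train_loss": train_loss, "val_loss": val_loss},
        }
    with open(path, "w") as f:
        # indent=2 inflates the file well past 500 lines of nested data.
        json.dump(runs, f, indent=2)
    return path

make_repro_json()
```

Uploading the resulting `repro_logs.json` and asking the prompt in step 3 should exercise the same kind of dense numerical input described above.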
**Observed Results:**

The model stalls during generation. It either loops its internal thinking process indefinitely until hitting a safety/timeout limit, or it returns a glitchy response such as `[NO CONTENT FOUND]` or `"Done, bye, goodbye, okay, go, end"` without answering the prompt. In my case, I had to manually terminate the session and start fresh.
**Expected Results:**

The model should parse the JSON contents, extract the relevant data points, and generate a natural-language response analyzing the numerical trends without timing out.
