Gemini 2.0 Flash has a weird bug

Aha I figured it out.

So here’s my theory: the reason markdown tables aren’t working well here is that at a lower temperature, the LLM will pick the most probable token. And with markdown tables, Gemini must have been trained on tables with longer headers, which would make sense because it’s always the 2nd or 3rd column where the error begins. Usually the first column is an id (so it’s short), but the second column is usually padded out to the length of its longest cell.

So what’s going on here is that when the LLM is predicting the next token, the most probable token to come next is a space. But the problem is that the LLM doesn’t know how long the text inside the table cell will be (since it can’t look ahead to token n+2). So it gets stuck in a loop of prioritizing a space.
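Here’s a toy sketch of the failure mode (not Gemini’s real decoder, and the logit values are made up): at temperature 0, sampling collapses to argmax, so if a space token barely edges out the `|` that would close the table cell, the model emits spaces forever.

```typescript
// Toy next-token sampler. At temperature 0 it's pure greedy argmax;
// otherwise it's softmax-with-temperature sampling.
function sample(logits: Record<string, number>, temperature: number): string {
  const tokens = Object.keys(logits);
  if (temperature === 0) {
    // Greedy: always pick the single most probable token.
    return tokens.reduce((a, b) => (logits[a] >= logits[b] ? a : b));
  }
  // Softmax with temperature, then sample proportionally.
  const scaled = tokens.map((t) => Math.exp(logits[t] / temperature));
  const total = scaled.reduce((a, b) => a + b, 0);
  let r = Math.random() * total;
  for (let i = 0; i < tokens.length; i++) {
    r -= scaled[i];
    if (r <= 0) return tokens[i];
  }
  return tokens[tokens.length - 1];
}

// Mid-cell in a padded table, suppose " " barely beats "|" (hypothetical numbers):
const logits = { " ": 1.02, "|": 1.0, x: 0.2 };

// Greedy decoding picks " " every single step: the padding loop.
const greedy = Array.from({ length: 5 }, () => sample(logits, 0));
console.log(greedy);

// At temperature 1 the probabilities are nearly even, so "|" gets
// sampled often enough to eventually close the cell.
console.log(sample(logits, 1));
```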

In my case, I was using Vercel’s AI SDK, which sets the temperature to 0 by default. Gemini 1.5 had a temperature range of 0 to 1, while Gemini 2.0 has a range of 0 to 2. Explicitly setting the temperature to 1 solved my problem, but anything less than 1 triggers it.
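For anyone hitting the same thing, the fix is just passing `temperature` explicitly in the `generateText` call. Rough sketch with the AI SDK’s Google provider (the prompt is a placeholder; you’d need `GOOGLE_GENERATIVE_AI_API_KEY` set):

```typescript
import { generateText } from "ai";
import { google } from "@ai-sdk/google";

const { text } = await generateText({
  model: google("gemini-2.0-flash"),
  temperature: 1, // override the SDK default of 0, which triggers the loop
  prompt: "Summarize these results as a markdown table.", // placeholder prompt
});
```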

It’s definitely a bug, but I doubt it’s fixable since the model was trained that way. If you need a temperature below 1, you’ll just have to wait until fine-tuning is available.
