Pricing for gemini-2.5-flash image generation is a cruel joke on devs

So, the nano-banana model is AWESOME!

I can think of so many ways in which my users will benefit from this SOTA model. Now I want to implement it in my code.

I want a decent rate limit and don’t mind paying.

But what are the charges?

Thus begins the woe of a developer building on “Google AI”.
So I google for the pricing and end up on this page:

[screenshot of the Gemini API pricing page]

So the pricing is determined based on output tokens. Fair enough.

But then you stumble on this page:

Okay, a bit vague. So, as I understand it, the minimum cost is 1290 output tokens per image (even if I’m generating a tiny icon)? Why?
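If that 1290-token floor is real, the per-image math works out like this. A quick sketch, assuming the published $30 per 1M output-token rate for image output (verify the current rate on the pricing page before relying on it):

```python
# Rough per-image cost under the assumptions above -- not official numbers.
TOKENS_PER_IMAGE = 1290        # assumed minimum billed per generated image
RATE_PER_MILLION_USD = 30.00   # assumed image-output rate, $ per 1M tokens

cost_per_image = TOKENS_PER_IMAGE / 1_000_000 * RATE_PER_MILLION_USD
print(f"${cost_per_image:.4f} per image")
```

That comes out to roughly $0.039 per image, whether it’s a poster or a tiny icon.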

So you run an experiment to find out the answer.
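The experiment itself is simple enough: generate one image and read back the token counts the API reports. A minimal sketch using the `google-genai` SDK; the model id and the $30/1M rate here are my assumptions, so check them against the docs:

```python
def tokens_to_usd(output_tokens: int, rate_per_million: float = 30.0) -> float:
    """Convert an output-token count to dollars at an assumed per-1M-token rate."""
    return output_tokens / 1_000_000 * rate_per_million

def run_experiment() -> None:
    # Lazy import so the helper above works without the SDK installed.
    # Requires `pip install google-genai` and GEMINI_API_KEY in the environment.
    from google import genai

    client = genai.Client()
    response = client.models.generate_content(
        model="gemini-2.5-flash-image-preview",  # assumed model id
        contents="Generate a tiny 32x32 app icon of a banana.",
    )
    usage = response.usage_metadata
    print("output tokens:", usage.candidates_token_count)
    print("est. cost: $", round(tokens_to_usd(usage.candidates_token_count), 4))

if __name__ == "__main__":
    run_experiment()
```

In principle, comparing `candidates_token_count` across a few image sizes would answer the 1290-token question directly — if only the dashboard agreed with it.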

But the Google AI Studio “Gemini API Usage” dashboard is a cruel joke.

Total API Requests per Day - nice
Input Tokens per Day - cool
Requests per Day - okay

But wait, where’s “Output Tokens per Day”?

Are you serious? Surely this is a glitch. You refresh frantically.

Nothing.

Nada.

No info on output tokens.

Okay but wait you have the Google Cloud Billing Dashboard. Surely that must hold the answers.

You head there only to find out the data is refreshed only every 24 hours / 36 hours / god knows how long.

Sigh.

But wait - you heard OpenRouter added support for the new image generation.

So you head there and run a quick test. And there you have it. Glaring at you.

The same model, the same company, the same request - charged 5x more depending on which library you use.

How does this even make sense?

The best models in the world (check)
Release the best models (check)
Developer experience (massive fail)

On the plus side, at least I found a reliable provider for the Google models. HINT - it’s not Google.