AI Studio Bug: Token counter incorrectly calculates Context History as Output Tokens

Subject: Bug: Token counter incorrectly calculates Context History as Output Tokens

Description:
I suspect the token counter logic in the UI is flawed. It appears to accumulate ALL historical AI responses into the “Output tokens” count for the current turn, instead of classifying them as “Input/Prompt tokens”.

Evidence:

  1. My “Output tokens” count increases continuously with every turn, reaching impossible numbers (e.g., 7,000+) for short answers.

  2. My “Input tokens” count is suspiciously low (tens) or even NEGATIVE (see screenshot), suggesting the UI calculates Input = Total - Output, where Output is incorrectly inflated.

Expected Behavior:
Historical AI responses should count towards Input Tokens. Only the newly generated text should count towards Output Tokens.
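To make the expected accounting concrete, here is a minimal sketch of how per-turn counts should be classified. It uses a toy whitespace tokenizer as a stand-in (an assumption for illustration only; real counts come from the model's tokenizer, not word splitting):

```python
def count_tokens(text: str) -> int:
    """Toy tokenizer: one token per whitespace-separated word (illustrative only)."""
    return len(text.split())

def turn_counts(history: list[dict], new_reply: str) -> tuple[int, int]:
    """Return (input_tokens, output_tokens) for the current turn.

    All prior messages -- user AND model -- count as input context;
    only the newly generated reply counts as output.
    """
    input_tokens = sum(count_tokens(m["text"]) for m in history)
    output_tokens = count_tokens(new_reply)
    return input_tokens, output_tokens

history = [
    {"role": "user", "text": "hello there"},
    {"role": "model", "text": "hi how can I help"},   # prior model output is INPUT context now
    {"role": "user", "text": "summarize our chat"},
]
print(turn_counts(history, "we said hello"))  # (10, 3)
```

The point is the classification, not the numbers: the second message is a past model response, yet it lands in the input total for the current turn.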

This looks like a known UI discrepancy in Vertex AI Studio’s token accounting rather than an issue with the underlying model or API. The Studio interface currently aggregates all prior conversational context—including previous model outputs—into the “Output tokens” field when it should classify those as part of the input context for the current round. The server-side tokenization that the Vertex AI API uses is correct, so you’ll see accurate counts if you inspect the prompt_token_count and candidates_token_count fields in the API response, for example via the Vertex AI API Explorer or the response payload of a direct generateContent call.

Until the UI fix rolls out, rely on the API metrics for billing or quota tracking and not the per-turn numbers shown in the Studio interface. If you need confirmation for your project’s token usage, you can also view detailed usage in Vertex AI > Monitoring > Usage in Cloud Console, which reflects the API-level token accounting rather than the Studio’s visual counter.
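As a hedged sketch of what “rely on the API metrics” looks like in practice, here is how you would pull the authoritative counts out of a generateContent response. The JSON below is a hand-made example of the usageMetadata shape (the field names match the public REST API; the numbers are illustrative, not real):

```python
import json

# Hypothetical response body -- numbers invented for illustration.
raw = json.dumps({
    "usageMetadata": {
        "promptTokenCount": 1523,    # full input context, incl. prior model turns
        "candidatesTokenCount": 87,  # only the newly generated reply
        "totalTokenCount": 1610,
    }
})

usage = json.loads(raw)["usageMetadata"]
prompt_tokens = usage["promptTokenCount"]
output_tokens = usage["candidatesTokenCount"]

# The server-side accounting is internally consistent: input + output = total.
assert prompt_tokens + output_tokens == usage["totalTokenCount"]
print(f"input={prompt_tokens} output={output_tokens}")
```

These are the numbers that matter for billing; the Studio counter is only a visual estimate on top of them.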

—Taz

Thanks for your help. These incorrect counts have been annoying me. I use AI Studio as my API provider, and I depend on its dashboard (which shows no actual per-conversation token count and no output tokens) and on its token counter (which has this UI error).

By the way, the negative token count happens when I use Compare mode. In that case, it doesn’t look like just a single UI bug, does it?

Can you point me to the original topic about this known UI bug?

Hi @user114514

Thank you for bringing this to our attention.
Apologies for the delayed response. Could you please confirm if you are still facing the same issue?

Unfortunately, it seems it still remains unfixed.

Hi @user114514, Thanks for your response
Your understanding is correct: in the current token counter on AI Studio, the input token count is the sum of all user inputs and the output token count is the sum of all AI responses. AI Studio is just for experimentation, and the token counter only provides cost estimates for input and output tokens. You can use the usage and rate limit dashboard for token counts and pricing.
Regarding the negative token count in Compare mode: we were able to reproduce it and will escalate this bug to our internal team.