Sometimes it also generates ellipses. I have included the prompt instructions, but the same problem persists.
Hey,
Hope you’re keeping well.
The tokens you’re seeing (\u00b7, \u2019, and similar) are JSON Unicode escape sequences that represent typographic punctuation such as middle dots, smart quotes, and ellipses. Gemini 1.5 and 3 model families output Unicode text by default, and depending on how your environment or client library decodes the streaming response, these characters may appear escaped rather than rendered. In AI Studio and the Generative Language API, the model returns its text inside a JSON payload, so if your viewer or code does not decode those escapes, you’ll see the raw escape codes instead of the characters.
To resolve it, decode the content field of the response as UTF‑8 text before displaying it, or explicitly unescape the JSON output in your client. When testing in AI Studio, copying results via the “Copy raw JSON” option will show the escaped sequences, while the main chat view renders them correctly. If you use the Vertex AI SDK for Python, confirm that your console or notebook cell uses UTF‑8 encoding (for example, PYTHONIOENCODING=utf-8) so punctuation and ellipses render as intended.
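As an illustration, here is a minimal sketch of that decoding step. The response fragment below is made up for the example (it is not the actual API payload shape); the point is that Python's standard json module already converts \uXXXX escapes into real Unicode characters when it parses the string:

```python
import json

# Hypothetical raw response fragment containing escaped Unicode punctuation,
# as you might see it in a "Copy raw JSON" dump.
raw = '{"text": "Hello\\u2026 it\\u2019s a middle dot: \\u00b7"}'

# json.loads decodes the \uXXXX escape sequences into real characters.
decoded = json.loads(raw)["text"]
print(decoded)  # → Hello… it’s a middle dot: ·
```

If you instead see the literal backslash sequences in already-parsed text, the escaping happened upstream, and the string would need an extra unescaping pass rather than a display-encoding fix.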
Thanks and regards,
Taz
No, these are occurring in the AI Studio chat box. This time I got only a few of them, but sometimes there are 100+. The second problem is the ellipses: they appear all over the place. Both screenshots are attached below.
Hi @JatinK_Innovision, could you please share the exact prompt instructions you are using to prevent the ellipses and tokens?