I used to have a prompt that generated a JSON response and it worked great. Now the model forces the response to start with `` ```json ``, no matter what I prompt it with. I can't just sanitize the response because it contains markdown: when code is returned, the nested `` ``` `` fences truncate the output (the model treats them as closing the initial `` ```json `` fence).
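To make the truncation concrete, here is a minimal sketch (the payload is made up, not an actual response) of why stripping up to the first closing fence cuts the JSON off in the middle of an embedded code block:

```python
import re

# Hypothetical response: the JSON itself contains a fenced code block.
raw = (
    "```json\n"
    '{"answer": "Use a fence:\\n```python\\nprint(1)\\n```\\n"}\n'
    "```"
)

# Naive sanitizing: take everything between ```json and the next ``` .
naive = re.search(r"```json\s*(.*?)```", raw, re.DOTALL).group(1)
print(naive)  # cut off at the inner ```python fence -> invalid JSON

# Dropping only the outermost fence lines keeps the nested fences intact.
lines = raw.strip().splitlines()
if lines[0].startswith("```") and lines[-1].strip() == "```":
    lines = lines[1:-1]
print("\n".join(lines))  # the full JSON object survives
```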
Did something change in the last 2 weeks or so? Why is 2.0 so bad at following this kind of instruction? Thanks
Hi @Paolo,
Thanks for raising the issue with us. I have tried using Google AI Studio to generate JSON data with the gemini-2.0-flash model, and it seems to generate the correct JSON response without any leading or trailing fences or extra strings. Please find the attached image for your reference.
Let me know if it helps. Thank you!
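If you still see the fences when calling the API directly, one thing worth trying is requesting JSON explicitly through the generation config rather than only through the prompt. A minimal sketch, assuming the google-genai Python SDK and an API key in the GEMINI_API_KEY environment variable:

```python
import os

from google import genai
from google.genai import types

client = genai.Client(api_key=os.environ["GEMINI_API_KEY"])

response = client.models.generate_content(
    model="gemini-2.0-flash",
    contents="List three colors as a JSON array of strings.",
    # Asking for application/json tells the model to return bare JSON,
    # so there should be no ```json fences to strip afterwards.
    config=types.GenerateContentConfig(response_mime_type="application/json"),
)

print(response.text)
```

If you are on the older google-generativeai SDK, the equivalent knob is `response_mime_type` inside `generation_config`.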