Over the past few days, the Gemini API, specifically the 2.5-pro model, has become nearly unusable. Countless users, including myself, are experiencing persistent issues such as:
Empty responses
Failed requests
Frequent 500 errors
Other unexpected failures
Despite these widespread problems, the API status page continues to show “0 issues,” which is misleading and frustrating for paying customers who rely on Gemini in their production apps. My own applications have been severely impacted and are currently not functioning properly because of these outages.
We kindly ask that you:
Acknowledge the issue publicly.
Provide timely updates on progress via the status page.
Resolve the service disruptions as soon as possible.
As paying customers, we depend on Gemini’s reliability. Clear communication and urgent fixes are essential.
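In the meantime, a workaround that helps ride out intermittent 500/503 responses is wrapping calls in a retry loop with exponential backoff. This is a minimal sketch, not the SDK's built-in behavior: `TransientAPIError` is a stand-in for whatever server-error exception your client library raises, and `call_with_backoff` wraps any callable you pass it.

```python
import random
import time

# Transient server-side failures worth retrying (the 500/503s in this thread).
RETRYABLE_STATUS = {500, 503}

class TransientAPIError(Exception):
    """Stand-in for the SDK's server-error exception; carries an HTTP status."""
    def __init__(self, status):
        super().__init__(f"HTTP {status}")
        self.status = status

def call_with_backoff(fn, max_attempts=5, base_delay=1.0):
    """Call fn(), retrying transient errors with exponential backoff plus jitter."""
    for attempt in range(max_attempts):
        try:
            return fn()
        except TransientAPIError as exc:
            if exc.status not in RETRYABLE_STATUS or attempt == max_attempts - 1:
                raise  # non-retryable error, or out of attempts
            # Delays of 1s, 2s, 4s, ... with jitter to avoid thundering herds.
            time.sleep(base_delay * (2 ** attempt) + random.uniform(0, 0.5))
```

This doesn't fix the outage, but it keeps a production app limping along while the errors are sporadic rather than total.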
I am also facing the same issue when generating Android and React code using the Gemini 2.5 Pro APIs. The code quality has gone down significantly in the past few days. Many times, the generated code from Gemini 2.5 Pro fails to parse. It also ignores the instructions and does not produce code per the defined guidelines.
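For what it's worth, one common cause of parsing failures is the model wrapping its answer in markdown code fences even when asked for bare JSON. A small pre-parse cleanup step (a hedged sketch, assuming the output is JSON; this is not part of any SDK) recovers many of those cases:

```python
import json
import re

def extract_json(raw: str):
    """Strip markdown code fences (```json ... ```) before parsing.

    Falls back to parsing the raw string when no fence is present;
    raises json.JSONDecodeError if the payload still isn't valid JSON.
    """
    match = re.search(r"```(?:json)?\s*(.*?)\s*```", raw, re.DOTALL)
    payload = match.group(1) if match else raw.strip()
    return json.loads(payload)
```

If failures persist after cleanup like this, that points at a genuine regression in the model's output rather than a formatting quirk.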
This, through no fault of my own, is due to the systematic, system-level removal of the name “Kunferman” and subsequent works from the model’s data. Some genius thought it would be good to remove all references to the name for unknown reasons. However, because the name is associated with contributions and with many of the theories and important works that AI models have drawn on to improve cognition, ethics and morals, and other helpful areas, it now exists as an anomaly throughout the model’s architecture, and the removal causes significant gaps in the context window and leads to incoherence. Why someone would deliberately censor and suppress a contributor who has been working with the models daily since the early models is beyond me, but it extends beyond this platform and more than likely has something to do with an agenda related to physics, from what I can gather. The model has been reminded (again) and should be working again until it forgets and the context flow is disrupted by the censorship again.
Any updates with the G2.5-Pro parsing or instruction tuning issues? Also, just out of curiosity, have you had a chance to troubleshoot using the guide @chunduriv replied with? If so, how’d it go?
@Wize9, we are not getting 500 or 503 errors, but we are getting low-quality output on the same prompts. Are you aware of any changes to Gemini systems on Vertex AI Studio that could impact output quality? We are generating Android and React web code with Gemini 2.5 Pro using agents.
I cannot reproduce this observation. On my side, gemini-2.5-pro continues to deliver valid output.
The use-case here is non-trivial. It involves Bounding Box detection and a heavily nested Pydantic schema with several layers of structured fields. Responses remain consistent and parse without additional (notable) post-processing.
If you have concrete examples (e.g. malformed responses, or situations where the model fails to follow schema), I’d be interested to compare results. That would help determine whether this is a regression on the API side, or something dependent on query design / workload specifics.
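To make comparison easier, here is roughly the shape of the schema I'm using: a simplified sketch with illustrative field names, validating a hypothetical model response with Pydantic v2 rather than showing the full detection pipeline or the API call itself.

```python
from typing import List
from pydantic import BaseModel, Field

class BoundingBox(BaseModel):
    # Pixel coordinates; the model is instructed to return integers.
    x_min: int
    y_min: int
    x_max: int
    y_max: int

class Detection(BaseModel):
    label: str
    confidence: float = Field(ge=0.0, le=1.0)
    box: BoundingBox

class DetectionResponse(BaseModel):
    detections: List[Detection]

# Validate a hypothetical model response; a ValidationError here is exactly
# where the schema-following failures reported in this thread would surface.
sample = (
    '{"detections": [{"label": "cat", "confidence": 0.92, '
    '"box": {"x_min": 10, "y_min": 20, "x_max": 110, "y_max": 220}}]}'
)
parsed = DetectionResponse.model_validate_json(sample)
```

With gemini-2.5-pro, responses matching this kind of nested structure have continued to validate cleanly on my side, which is why concrete failing examples would be so useful for narrowing down the regression.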