Slow/Incomplete model response from API

I am using the Gemini API through LangChain. My workflow used to answer in around 40 seconds, but now it's taking noticeably longer.

Also, twice the response stopped abruptly midway, even though I placed no restriction on max tokens.

Am I the only one facing these issues?
Is this a problem on Google's end? I have been seeing these issues ever since the overcharging incident earlier this week.
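For anyone debugging the truncation symptom: Gemini returns a finish reason with each response, so it's worth checking whether the early stops are reported as `MAX_TOKENS`, `SAFETY`, or something else before assuming a server-side fault. A minimal sketch of that check (the helper function and the sample metadata dict below are illustrative, not from the original post; in LangChain the field is typically found under the message's `response_metadata`):

```python
# Sketch: classify why a Gemini response stopped generating, based on
# the finish_reason value the API returns with each candidate.
# The sample dict below is an illustrative stand-in for a real response.

def truncation_cause(response_metadata: dict) -> str:
    """Return a human-readable explanation of why generation ended."""
    reason = response_metadata.get("finish_reason", "UNKNOWN")
    explanations = {
        "STOP": "model finished normally",
        "MAX_TOKENS": "hit the output-token cap (try raising max_output_tokens)",
        "SAFETY": "blocked by a safety filter",
        "RECITATION": "stopped for potential recitation",
    }
    return explanations.get(reason, f"unexpected finish_reason: {reason}")

# Example: a response that was cut off by the token limit.
meta = {"finish_reason": "MAX_TOKENS"}
print(truncation_cause(meta))  # hit the output-token cap (try raising max_output_tokens)
```

If the reason comes back as `STOP` while the text is still visibly incomplete, that points more toward a service-side problem than a configuration one.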

Hi @Vansh_Maurya
We believe we've fixed this, and the changes have been rolled out. Please let us know if you continue to see this issue.
Thank you