As many of you have likely noticed, Google AI appears to be intentionally restricting generation length across all its models, giving users minimal and severely truncated responses.
The continuous slashing of output limits has reached an unacceptable breaking point. It is now nearly impossible to receive a comprehensive report or code generation exceeding 500 to 600 lines. Instead of full answers, we are constantly hit with this frustrating roadblock: *"The model's generation exceeded the maximum output token limit."*
I am canceling my Google Ultra plan today!
