What's the point of a 2M context window if length constraints condense the output?

It's very annoying that Gemini condenses or truncates its output even when it is explicitly instructed not to. How can anyone write anything long when the model truncates?
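One thing worth checking: the 2M figure is the *input* context window, while the length of a single response is capped separately by the model's output token limit, which can also be lowered by the request's `maxOutputTokens` setting. As a sketch (assuming the REST `generateContent` API; the prompt text here is just a placeholder), the relevant field sits in `generationConfig`:

```json
{
  "contents": [
    { "role": "user", "parts": [{ "text": "Write a long, detailed chapter..." }] }
  ],
  "generationConfig": {
    "maxOutputTokens": 8192
  }
}
```

If a response stops early, the `finishReason` on the candidate indicates whether the output cap was hit (`MAX_TOKENS`) rather than the model choosing to condense on its own.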


Hi @Robert_Redding,
Which Gemini model are you using? And are the responses being truncated because of the model type, or because of token handling?
Thanks!