A while back, Google changed how the models' chain of thought is exposed, making only summaries of what the model is actually “thinking” available to users and the API. This caused some backlash, and in one thread a few Google employees replied (I'm paraphrasing here) that they were committed to improving the CoT summaries to make them more reliable and to expose more of the underlying reasoning. This is a direct quote from Logan Kilpatrick:
“In the long term, as models do more in the reasoning steps (tool use and otherwise), I can easily imagine that raw thoughts become a critical requirement of all AI systems given the increasing complexity and need for observability + tracing.”
My question: since then, have any meaningful improvements been made, and if so, what were they? And are there any plans to bring back the chain of thought in its entirety?