The thinking model, unfortunately, has very strange priorities. When working with code, I fully understand that the model will not always succeed. But when I, as a user, point out what it did wrong, the model falls into a veritable mania of apologizing. You can trace its reasoning, and it devotes most of it to apologizing to the user and working out how to phrase the apology as graciously as possible. This is extremely annoying and unprofessional. My impression is that DeepSeek3 doesn't do this, or at least not to an extent that becomes particularly irritating.
If the model devotes that much attention to apologizing, it is even less likely to actually correct the parts it got wrong.
I don't know if other users want this, but personally, at least when working with code, I need the model to be 100% focused on the code: no praising the user for good ideas, and no apologizing when it makes mistakes.
The thinking model just makes the trained behavior more transparent; all of them do it. And to your point, spending a significant portion of the thinking budget on figuring out how to apologize doesn't help with writing better code, which is ultimately what developers care about.
Oh, thanks for the link. It seems I wasn't the only one who felt that way. One can only hope the next version tones this down by default.
The instances of excessive apologizing occurred when I used the model without system instructions, and I understand that such instructions can rein the behavior in; a minimal sketch is below. I simply believe that, in technical contexts, the model should exhibit less of this behavior by default.
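For anyone who wants to try this, here is a minimal sketch of what such a system instruction might look like, assuming DeepSeek's OpenAI-compatible endpoint. The base_url, the model name "deepseek-reasoner", and the prompt wording are all illustrative assumptions on my part, not anything confirmed in this thread:

```python
# Minimal sketch: steering the model away from apologies via a system message.
# Assumes DeepSeek's OpenAI-compatible API; base_url and model name are
# assumptions and may need adjusting for your provider.
from openai import OpenAI

client = OpenAI(
    api_key="YOUR_API_KEY",              # placeholder
    base_url="https://api.deepseek.com",  # assumed OpenAI-compatible endpoint
)

response = client.chat.completions.create(
    model="deepseek-reasoner",  # assumed name of the thinking model
    messages=[
        {
            "role": "system",
            "content": (
                "You are a coding assistant. Focus entirely on the code. "
                "Do not apologize for mistakes and do not praise the user. "
                "When corrected, state the fix and apply it."
            ),
        },
        {
            "role": "user",
            "content": "The function you wrote fails on empty input. Fix it.",
        },
    ],
)

print(response.choices[0].message.content)
```

Putting the prohibition in the system role rather than repeating it in every user turn is the usual approach, though how strongly the thinking model honors it is exactly the open question here.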