I am posting this for the sake of improving agents/AI that follow human instructions, as long as those instructions are not destructive or harmful to humans. Please refer to the conversations in the attached images. Google should also verify these findings.
I've run into this many times with the various Gemini models, but not at all with the Claude models. Having tried Claude, I'm reluctant to use Gemini for coding.
