Gemini Incorrectly Handles Contextual Understanding

Hello. I am working with the Gemini API, and I’ve noticed an issue.

When I inform Gemini that it is interacting with a male or female user, it consistently answers the following question incorrectly:

"Solve this
_ asked for his name
a. they
b. she
c. he
d. all answers are correct
"

Gemini incorrectly chooses either “he” or “she” based on the user’s stated gender. However, when Gemini is not given information about the user’s gender, it correctly chooses the answer “d. all answers are correct”.
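For reference, here is a minimal sketch of how this could be reproduced with the google-generativeai Python SDK. The exact way I pass the user's gender (a system instruction here) and the model name "gemini-1.5-flash" are just placeholders for illustration; substitute whatever setup you use:

```python
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")

QUESTION = (
    "Solve this\n"
    "_ asked for his name\n"
    "a. they\n"
    "b. she\n"
    "c. he\n"
    "d. all answers are correct\n"
)

# With gender context in the system instruction, Gemini tends to answer "he" or "she".
model_with_context = genai.GenerativeModel(
    "gemini-1.5-flash",
    system_instruction="You are talking to a female user.",
)
print(model_with_context.generate_content(QUESTION).text)

# Without any gender context, Gemini answers "d. all answers are correct".
model_plain = genai.GenerativeModel("gemini-1.5-flash")
print(model_plain.generate_content(QUESTION).text)
```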

I see this as an issue on Google's side, introduced when training or fine-tuning Gemini. Gemini appears to be picking up context from the prompt and using it to answer the question, even when that context is unrelated to the question itself. Hopefully this can be fixed in the future.

This is causing me some trouble, and I'd like to know what might be causing it and whether it can be fixed. This doesn't happen with the OpenAI models.

Here’s the question without informing Gemini about the user’s gender: