I am a developer who has been using Gemini models to assist with my coding workflow. Recently, I have encountered a persistent and disruptive issue with Gemini 3 when working with VB.NET. Despite explicit instructions to stay in VB.NET, Gemini 3 repeatedly converts the code to C#. This unsolicited language switching, coupled with a tendency to ignore direct commands and make unwanted “corrections,” has rendered the model unreliable for my development needs. Consequently, I have reverted to using Gemini 2.5, which exhibits these issues far less frequently.
The Core Problem: Unsolicited VB.NET-to-C# Conversion
The primary issue is Gemini 3’s propensity to change the programming language of a given code snippet from VB.NET to C#. This occurs even when the prompt explicitly states to continue in, or provide a solution in, VB.NET. This behavior disrupts the development process and requires manual intervention to restore the code to its original language.
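To make the pattern concrete, here is a hypothetical illustration (the snippet and names are my own, invented for this post, not copied from an actual session). A prompt containing a trivial VB.NET function, together with an explicit instruction such as “answer in VB.NET,” will often come back rewritten in C#:

```vbnet
' Hypothetical VB.NET snippet submitted with the instruction "answer in VB.NET"
Public Function FormatGreeting(name As String) As String
    Return "Hello, " & name & "!"
End Function

' Instead of continuing in VB.NET, Gemini 3 frequently returns an
' unrequested C# translation along the lines of:
'   public string FormatGreeting(string name) => $"Hello, {name}!";
```

Even on inputs this small, the model discards the stated language constraint, so the problem is not limited to complex or ambiguous prompts.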
This issue appears to be a specific manifestation of a broader pattern of Gemini 3 not adhering to user instructions, a sentiment echoed by other developers in various forums. While official documentation for Gemini 3 is still emerging, community discussions have highlighted instances of the model making unsolicited and sometimes detrimental changes to code.
Disregarding Instructions and Unwanted “Corrections”
Beyond the language conversion, Gemini 3 often disregards other specific instructions within the prompt. For instance, it may “correct” code that is intentionally written in a certain way for a specific purpose or add new code that is not requested. This “helpful” behavior, while perhaps well-intentioned, undermines the developer’s control and can introduce unforeseen errors.
This aligns with user feedback on platforms like Reddit, where developers have noted that Gemini 3 can be “too generative” and make significant, unrequested alterations to their projects. Some have even described the model as “gaslighting” their repositories by making widespread, undocumented changes.
Impact on Developer Workflow and Trust
The current behavior of Gemini 3 has several negative impacts on the developer workflow:
- Loss of Productivity: Time is wasted correcting the model’s unsolicited changes and language conversions, and the rework burns tokens as well.
- Erosion of Trust: The model’s unpredictability makes it an unreliable tool for professional development.
- Frustration and Disruption: The need to constantly monitor and correct the AI’s output disrupts the creative and problem-solving flow of coding.
Further testing suggests Google may be prioritizing “censorship over functionality.” While this is a subjective interpretation, it highlights a crucial point for the developer community: a coding assistant’s primary function should be to serve the user’s intent, not to override it with its own “better” ideas or safety guidelines. When the model’s corrective actions become intrusive and override explicit instructions, it ceases to be a helpful tool.
Comparison with Gemini 2.5
It is important to note that these issues are far less prevalent in Gemini 2.5. The previous version of the model adheres to language-specific instructions and does not exhibit the same level of unsolicited code alteration. This suggests that changes in Gemini 3’s architecture or training may have inadvertently introduced this problematic behavior.
Request to the Google AI Team
I urge the Google AI team to investigate this issue. A reliable and predictable AI coding assistant is an invaluable tool, but its utility is severely diminished when it fails to follow fundamental instructions. The ability to specify and remain within a particular programming language is a critical requirement for developers.
I recommend the following actions:
- Acknowledge and Investigate: Publicly acknowledge the issue and launch an investigation into why Gemini 3 is not adhering to language-specific and other direct instructions.
- Provide a Solution or Workaround: In the short term, provide guidance on how to mitigate this issue.
- Prioritize User Intent: In future updates, prioritize the model’s ability to accurately follow user instructions, especially concerning the programming language and the scope of requested changes.
I am hopeful that by bringing this to the attention of the Google AI Developer Community, we can work towards a solution that makes Gemini 3 a more reliable and trustworthy partner in the development process.