Gemini 3.0 is TOO EXTRA in AI Studio; it keeps doing things I didn't ask for

OK, at first I tried to work around this, but it's unbearable: I can't perform simple tasks without the model changing a lot of stuff on its own.

One of the craziest examples: I was working on my site's hero slider, and it decided to change my logo out of thin air. It just created a new SVG logo and swapped mine out. When I asked the reason for the change, it said it thought it was a good idea and would suit the layout. IT DID NOT, and nobody asked for it.

Calm down, Gemini. These extra things you do just force me to restore a backup and start again, losing time in the process.

P.S. It was a fresh session with proper instructions, so this didn't happen because of context overload. If anything, Gemini seems to do more of these extra things the bigger its context window is; once the context fills up a bit, it starts to act normally. I'm so tired of this.

P.P.S. The most annoying thing is that sometimes it doesn't tell me about the updates it decided to make, and I find subtle things have changed in my app/site. It breaks things and changes layout and logic without ever notifying me. After finishing the work, I then have to go through everything hunting for those unasked changes.

Hello!

I've had issues with this in the past as well, but something I've learned is to get your method down first; it shows fewer of these issues. It's an extra step, sure, and nothing is perfect, but do apply structured guidelines to your prompts. It helps, promise:

1. **Explicit Instructions:**
   * Be extremely specific with your requests. For example, instead of saying "modify the hero slider," say "adjust the padding of the hero slider's text container to 16px, and nothing else."
   * Clearly define the scope of each task. Use phrases like "Only change X; do not modify Y or Z."
2. **Negative Constraints:**
   * Use prompts like, "Do not make any changes to the logo."
   * State explicitly what should *not* be altered.
3. **Verification Step:**
   * Add "Pause and ask for verification before making any changes" to the end of your prompt.
   * Require confirmation before any modifications are implemented.
4. **Checkpoints:**
   * Request frequent check-ins from the model, like "After each modification, report the changes made."
5. **Rollback Strategy:**
   * Always maintain backups of your projects. If unexpected changes occur, revert to the last known good state.
   * Understand that 3.0 may introduce new bugs, as is expected of early releases. The community is here to help and learn.
6. **Review Everything:**
   * Remember that AI models are not perfect and can make errors; always carefully review the model's output and never rely solely on it. Double-check everything.
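To make the checklist above concrete, here is a minimal sketch of how you might bake scope, negative constraints, and a verification step into a reusable prompt template. The helper name and field layout are my own invention for illustration, not any official API:

```python
def build_scoped_prompt(task: str, do_not_touch: list[str]) -> str:
    """Assemble a prompt with an explicit scope, negative constraints,
    and a mandatory verification step, per the checklist above."""
    constraints = "\n".join(f"- Do not modify {item}." for item in do_not_touch)
    return (
        f"Task (change ONLY this, nothing else): {task}\n"
        f"Negative constraints:\n{constraints}\n"
        "Pause and ask for verification before making any changes.\n"
        "After each modification, report exactly what was changed."
    )

# Example: the hero-slider scenario from this thread.
prompt = build_scoped_prompt(
    task="adjust the padding of the hero slider's text container to 16px",
    do_not_touch=["the logo", "the color scheme", "any other component"],
)
print(prompt)
```

The point is consistency: if every request goes through the same template, you stop forgetting to restate the constraints each time.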

~ CMD_Proton, hope this helps!


@CMD_Proton Your reply is of course nice and decent, but I'm afraid something else is wrong here. I experience the exact same thing. I vibe coded without any problem for the last couple of months; of course things weren't always perfect on the first try, but developing my site into a good prototype worked really well.

Now, however, I experience the exact same thing as @aroshidze: when I ask to change the colors of some labels, whole parts of my site suddenly disappear without asking, other parts get heavily redesigned, etc. It's not a matter of good prompting; AI Studio suddenly seems to not listen anymore, doing things I didn't ask for and simply not doing the things I did ask for.

Before, resetting the conversation on a regular basis fixed this problem and I could continue, but now that doesn't work anymore. It seems like some strange hallucination is going on. Are more people experiencing this?


Yes! THIS!!! I was just about to post about this: Gemini 3.0 (and 2.5, back when I used it) will sometimes randomly change/remove/refactor code when not specifically requested, forcing a constant review of every feature to make sure things haven't suddenly broken or changed.

I even wrote in the instructions for it to NOT change features of the app unless specifically told to do so, yet it does it anyway. Why, Google??? Otherwise, I love using Gemini 3 Pro!! It's amazing.

P.S. Those tips CMD_Proton gave are a very good guideline.


Hi @aroshidze, @St_Michael_the_Archa, @DeNachtwacht, thank you for your feedback.

Could you please provide an example prompt along with a relevant output screenshot that highlights this issue? This will help us investigate effectively.

Please refer to our official documentation for comprehensive details on Prompt design strategies: https://ai.google.dev/gemini-api/docs/prompting-strategies

In my experience it doesn't seem to be specific words in the prompt; rather, if the prompt itself is really long, Gemini 3 goes haywire and messes things up. Having the restore-to-previous-version button is a lifesaver.

I've learned to instruct it in bite-size prompts now when a project already has a lot of code, as opposed to writing one really long prompt when just getting started, when nothing can break because the website is still being built.

You may want to read this:


Thanks @CMD_Proton and @paulvancotthem for the great suggestions in the thread!

I've tried everything suggested: giving strict instructions, explicitly asking for "no code to be written", and focusing on one tiny change at a time. But honestly, sometimes the model just hallucinates and starts changing things anyway. I've literally put the instruction not to change the code at the beginning, the middle, and the end of the prompt, and it still ignores me.

The most annoying part is how it forgets its own progress. I’ll ask it to change one thing, it does it perfectly, and then when I move on to a new topic, it redoes the first task all over again. It’s like it forgets the job is already finished. Even when I tell it explicitly that a task is complete and to stop doing it, it keeps going.

Using 3.0 sometimes feels like working with an overly excited Labrador. It’s eager to help, but it’s so hyper-focused on “doing something” that it creates more work than it saves. I’ve lost hours this week just cleaning up after the mess.


My advice would be to establish a working protocol between you (User) and the AI (Assistant) within your custom System Instructions, not within your prompts.

With that in mind, I've rewritten my post, linked below; perhaps it can inspire you to design your own protocol.
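As a starting point, here is one hypothetical shape such a System Instructions protocol could take; the wording and rule names below are entirely my own invention, so adapt them to your project:

```
ROLE: You are a code assistant operating under a strict change protocol.

PROTOCOL (applies to every request):
1. SCOPE: Modify only the files and elements the User explicitly names.
2. NO SURPRISES: Never redesign, refactor, restyle, or "improve" anything unasked.
3. CONFIRM FIRST: Before writing code, state the planned change in one
   sentence and wait for the User's approval.
4. REPORT: After each change, list every file and section you touched.
5. DONE MEANS DONE: Once the User marks a task complete, never revisit it.
```

Because system instructions persist across the whole conversation, rules placed here tend to hold up better than constraints repeated inside individual prompts.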


I've been experiencing nuance problems with Gemini 3.0 Pro even in AI Studio. The model's personality and nuance handling seem one-dimensional, and it will often ignore instructions. It's not related to the recent update; it's been a problem since Gemini 3.0 Pro Experimental was released. I'm hoping the release version is actually better than 2.5 Pro before I switch to it. I experienced the same problems with Gemini 2.5 Pro at launch too, and I hope they fix these major problems before the full version of 3.0 Pro is out, to make it worth the title of Pro.