I’ve been building many different prototypes recently using AI Studio. I love the experience it provides, but as a project gets more and more complex, the chat tends to return the code directly in the conversation instead of modifying the files, and this happens quite frequently.
I would say it happens once every 10-15 prompts. I tell it in the prompt not to do this, but it still returns code in chat quite frequently. The conversation thread then gets too long and we run out of space in the context window, so I basically have to reset things and start a new conversation, which means I lose the context I was iterating on.
Thanks for reaching out to us. To help us understand this issue better, could you please share an example of the full prompt you usually give when asking it to modify files?
That’s what I do, but it happens so often that it gets stressful and I cannot let it run unattended. I have to wait for it to think and then check whether it actually edits the files, returns the code in chat, or says it made the change but did nothing… In the same thread this can happen a few times in a row, even if I say “DO NOT RETURN THE CODE IN CHAT BUT MODIFY FILES”.
I am encountering this exact issue. I have been attempting to implement the generated code into my app for several days, but nothing I do forces the update to take effect. This is incredibly frustrating, as I have spent months building this application. I upgraded to the latest Gemini model expecting it to be the final tool I needed to launch my dream app; instead, it has completely halted my progress. I feel trapped in a loop of receiving code snippets that I cannot successfully integrate. I have wasted days of development time—please advise on a solution.
To save time on troubleshooting, I can confirm that I have already exhausted the standard implementation steps. I have cleared my cache, tested across multiple browsers, verified the Google Cloud Console backend, and ensured that all necessary APIs are enabled and live. The issue persists despite these measures.
I had the same issue for a while. The fix: delete your chat history with the AI. In the top right of the chat panel there is a reset button; if you hover over it, it says ‘Reset the conversation’. That button deletes the chat history, which is what causes the AI to send you the code instead of implementing it in the app. The more chat history there is, the more it gets confused by previous prompts, and that is what makes it do this.
I’ve tried everything, but nothing worked. Now it rarely changes the code; most of the time it just dumps the code in chat. To avoid stress, I just copy the code from the chat and paste it into the files manually.
Don’t waste your time with prompts trying to force the AI to implement the code in the files; this is not an issue with the AI’s behavior. In my case, I believe the AI is sending the command to change the files, but because of some issue in the Google AI Studio platform, that command ends up being dumped into the chat along with the code.
This is the command that always gets dumped into the chat along with the code. The <change> tag followed by the file name shows that the AI is sending a command to change the code:
<change>
<file>App.tsx</file>
<description>Implement conditional cascade delete: Bi-directional deletion for individual settlements, but rollback/unlink only for batch settlements.</description>
<content><![CDATA[
... (file contents omitted in the chat dump) ...
]]></content>
</change>
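Until Google fixes it, if you are pasting manually anyway, a small script can do the paste for you. This is just a rough sketch I put together, assuming the dump always keeps the <change>/<file>/<content><![CDATA[...]]> structure above; the file name apply-change.ts and the ts-node usage are only my example, not anything official from AI Studio:

import { readFileSync, writeFileSync } from "node:fs";

// apply-change.ts: reads a text file containing one <change> block pasted from chat
// and writes the CDATA payload to the file named in <file>.
const dump = readFileSync(process.argv[2], "utf8");

// Grab the target file name and the CDATA content with simple regexes.
const fileMatch = dump.match(/<file>([\s\S]*?)<\/file>/);
const contentMatch = dump.match(/<content><!\[CDATA\[([\s\S]*?)\]\]>\s*<\/content>/);

if (!fileMatch || !contentMatch) {
  console.error("Could not find <file> or <content><![CDATA[...]]> in the pasted dump.");
  process.exit(1);
}

const targetFile = fileMatch[1].trim();      // e.g. "App.tsx"
writeFileSync(targetFile, contentMatch[1]);  // overwrite the file with the dumped contents
console.log(`Wrote ${contentMatch[1].length} characters to ${targetFile}`);

Save the chat output to a text file and run something like npx ts-node apply-change.ts dump.txt. It only handles one file per dump and you still have to review the result, but it saves the copy/paste inside the code editor that kills Chrome.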
Because this is really Russian roulette with every prompt…
I cannot ask for something and switch to another task while it works; I have to check each time that it’s not returning code in the chat. It breaks conversations so quickly, a real mess, undoing things all the time.
Also, when I look at the diff or go into the code, Chrome starts to sweat and the CPU goes over 100% every time. I have to close the tab and open a new one. This is painful, as I need to check the code because it frequently replaces parts of it with comments.
Yes, but then you lose the context, and if the next prompt generates shitty code, you’re forced to copy/paste code manually (in the code interface that breaks Chrome and maxes out the CPU), as you cannot restore to a previous version in that case. My trick is to start with a basic prompt, like changing some text, so that I have a restore point if things go nuts later in the conversation.