I’m running into a serious issue with Google AI Studio (Build mode) that seems to have started very recently.
When I take a large existing source file and ask for purely cosmetic or structural changes (such as improving readability, formatting, or reorganizing code without altering behavior), the output no longer matches the original functionality.
What I’m seeing:
Original file: roughly 1600 lines
Task: reformat / clean up / refactor without changing logic
Result:
The generated file is much smaller (around 400–500 lines)
There is no message indicating that content was skipped
However, many parts of the original implementation are simply gone
The tool still claims the operation succeeded
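One quick way to confirm that functionality was silently dropped, rather than merely compressed, is to diff the declaration names between the original and regenerated files. A minimal sketch, assuming TS/TSX sources read into strings; the regex is a rough heuristic, not a real parser:

```python
import re

# Rough heuristic for common JS/TS declaration forms:
# `function foo`, `const foo =`, `class Foo` (optionally exported/async).
DECL_RE = re.compile(
    r"^\s*(?:export\s+)?(?:async\s+)?"
    r"(?:function\s+(\w+)|const\s+(\w+)\s*=|class\s+(\w+))",
    re.MULTILINE,
)

def declared_names(source: str) -> set[str]:
    """Collect declaration names found in a JS/TS source string."""
    return {name for m in DECL_RE.finditer(source) for name in m.groups() if name}

def missing_after_refactor(original: str, generated: str) -> set[str]:
    """Names declared in the original file but absent from the regenerated one."""
    return declared_names(original) - declared_names(generated)
```

An empty result doesn't prove behavioral equivalence, but a non-empty one is hard evidence that entire features were removed.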
This does not look like a normal output-length limit.
Instead, it seems Build mode is:
Automatically deciding which sections of code can be dropped
Collapsing or removing entire features
Returning code that is no longer behaviorally equivalent to the input
This makes it risky to use for:
Large React / TSX / Vue components
Reformatting or reorganizing existing project files
Minor edits on established codebases
Along with other recent problems (the internal reload option disappearing, frequent “data was moved” errors, and preview mismatches), this feels like a regression in how Build mode handles state or code generation.
I’m curious:
Are others seeing Build mode strip functionality during refactors?
Did something change recently in how Build mode handles large files?
Is this already a known problem?
Right now, I wouldn’t trust Build mode for non-functional refactors on big files, since it no longer reliably preserves behavior.
1600 lines of code in a single file is approaching ‘danger territory’. Especially if there are no comments explaining what the code does.
Whenever I have a file that approaches 800-1000+ lines, I know it’s time to decompose the file. No single file should be 1600+ lines.
From my experience, most of these models can handle 1k lines, but at 2k+, it starts breaking down. You’re simply asking too much of the system.
So, my advice: refactor your code into smaller files, each with a clear purpose and good comments. The agent will have a much easier time parsing and understanding it, especially if that 1600+ LOC file is performing a bunch of different specialized tasks.
Unfortunately this is not an error, but rather a fundamental part of how AI Studio operates that is not well publicised.
Despite how it appears when you first use it, AI Studio does not edit code in place. Every time you request a change in the chat (explicitly, or when it interprets an implicit instruction), any affected files are recreated from scratch. While a file remains relatively simple, the model can hold the existing code in its context, so the regenerated file comes out identical except for the requested changes. It therefore looks like the file was simply edited, even though it was actually output from scratch.
At some point the file becomes too big to hold in context, and the app instead recreates it from its understanding of what the file is meant to do. At that point, its system instruction to make code shorter/simpler/more efficient overrides any other instruction.
You can fight this process somewhat by explicitly ordering it to make only specific changes and change no code unnecessarily, but this will only work until the file hits a certain size.
If this is only happening on refactors for large files, for now you could try:
Regular smaller refactors so the files never get too large (e.g. regularly spotting when it’s time to abstract code and asking the agent to do it preemptively).
If you need to refactor a very large file, write notes on all the features in the file and instruct the agent to keep them.
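One low-effort way to produce such a feature list (a sketch; the regex only catches straightforward `export function|const|class` declarations) is to extract the exported names from the file and paste them into the prompt as a "must keep" checklist:

```python
import re

# Matches simple exported declarations:
# `export [default] [async] function|const|class Name`.
EXPORT_RE = re.compile(
    r"^\s*export\s+(?:default\s+)?(?:async\s+)?(?:function|const|class)\s+(\w+)",
    re.MULTILINE,
)

def feature_checklist(source: str) -> str:
    """Turn a file's exported names into a checklist to paste into the prompt."""
    return "\n".join(f"- keep `{name}`" for name in EXPORT_RE.findall(source))
```

The same list doubles as an acceptance check after the refactor: every name on it should still appear in the regenerated output.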
I’ll start a conversation with the engineering team about the best approach for solving this from our side.