I don’t understand why Gemini rewrites every script when I tell it to change x, y, or z, instead of just copying and pasting the code that isn’t going to change.
For example, say I have a website with a script that has 100 lines of code, and I prompt it to change a button on the homepage to a different color, where that button is on line 10 of the script. It will change that little bit of code to fulfill my request, but the stupid thing is it then continues rewriting the remaining 90 lines, even though the rest of the script isn’t changed in any way. Why doesn’t it just copy and paste the 90 unchanged lines from the script?
It seems like every LLM operates this way, which is an inefficient way of writing code and wastes time and resources. Please, someone make it make sense.
LLMs don’t edit code. They regenerate text.
So when you ask “change this button color”, the model is not modifying line 10; it is predicting a completely new 100-line file whose most likely correct version contains your requested change.
Everything else flows from that.
It rewrites the whole program from the beginning, making sure the button is red this time. To the AI, your 100 lines of code are just one long piece of text, like a story, and the only thing it knows how to do is retell the story again with the small change inside it.
I see. But can’t LLMs be programmed to do what I’m describing? Tell it to just copy and paste the previous, working, unchanged code once it has made the requested changes.
Yes, but not by just prompting the LLM itself. A language model doesn’t actually “have” your file or edit it in place; every reply it produces is a freshly generated block of text based on probabilities, so even if you ask it to keep everything the same, it’s still re-creating the whole file and merely approximating the unchanged parts.
To truly copy-paste only the unchanged lines and modify specific ones, the system needs extra software around the model that stores the real file, compares the old and new versions, and applies only the differences (a diff/patch).
So yes, it’s possible in practice, but it requires a tool built around the LLM, not a different prompt or a small tweak to the model itself.
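To make the idea concrete, here’s a toy sketch of what that wrapper tool does, using Python’s standard `difflib`. The three-line “file” and its variable names are made up for illustration; the point is that when the stored original is compared against the model’s regenerated version, only the genuinely changed line shows up in the patch.

```python
import difflib

# Hypothetical example: the "stored" original file, and the model's
# freshly regenerated version, which differs only on the first line.
old = [
    'button.color = "blue"\n',
    "render(button)\n",
    "log('done')\n",
]
new = [
    'button.color = "red"\n',
    "render(button)\n",
    "log('done')\n",
]

# The wrapper tool diffs the two versions...
diff = list(difflib.unified_diff(old, new, fromfile="before", tofile="after"))

# ...and keeps only the +/- hunk lines (skipping the +++/--- headers).
# Unchanged lines appear as context, not as rewrites to apply.
changed = [
    line for line in diff
    if line.startswith(("+", "-")) and not line.startswith(("+++", "---"))
]
print(changed)  # only the removed "blue" line and the added "red" line
```

A real coding tool then applies just those hunks to the file on disk, so the 90 untouched lines are never rewritten, even though the model itself still generated the whole file as text.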
Gotcha. Would be so nice if they found something that worked like what you’re describing. I imagine it would save a lot of time and resources when using LLMs.