Yes, but not by just prompting the LLM itself. A language model doesn't actually "have" your file or edit it in place; every reply it produces is a freshly generated block of text sampled from probabilities, so even if you ask it to keep everything the same, it is still re-creating the whole file and merely approximating the unchanged parts.
To truly copy-paste the unchanged lines and modify only the specific ones, the system needs extra software around the model: something that stores the real file, compares the old and new versions, and applies only the differences (a diff/patch).
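A minimal sketch of that idea, using Python's standard-library difflib (the file contents and the "model output" here are invented for illustration): the tool keeps the real original, lines marked equal are copied from it verbatim, and only the changed spans come from the model's version.

```python
import difflib

# The real file, stored by the tool (not by the model).
old = ["def add(a, b):", "    return a + b", "", "def mul(a, b):", "    return a * b"]
# Hypothetical model output: only the last line differs.
new = ["def add(a, b):", "    return a + b", "", "def mul(a, b):", "    return a * b  # changed"]

sm = difflib.SequenceMatcher(a=old, b=new)
result = []
for tag, i1, i2, j1, j2 in sm.get_opcodes():
    if tag == "equal":
        result.extend(old[i1:i2])  # copy the stored original lines byte-for-byte
    else:
        result.extend(new[j1:j2])  # splice in only the changed lines

assert result == new  # same end state, but unchanged lines are guaranteed exact copies
```

The guarantee comes from the tool, not the model: everything tagged "equal" is taken from the stored file, so the unchanged parts can't drift even if the model's reproduction of them were imperfect.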
So yes, it's possible in practice, but it requires tooling built around the LLM, not just a different prompt or a small tweak to the model itself.