Hi everyone,
I’m a bioinformatics scientist and a relatively new programmer. I’m still learning the ropes of server management and terminal environments, so I apologize if this is a basic question or if I’m missing something obvious!
I’ve been really enjoying the Antigravity workflow, but I’ve noticed a curious technical discrepancy regarding memory management that I’m hoping to understand better.
The Situation: I’m running a multi-threaded file download (8 threads) on a remote server with nearly 2TB of RAM. I usually use nohup to keep the process running in the background.
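For reference, my launch pattern looks roughly like this. `my_downloader` and the log path are placeholders, not my actual command:

```shell
# Hypothetical launch pattern; "my_downloader" stands in for the real tool.
# Redirecting stdout/stderr keeps the job's output out of the terminal session.
nohup my_downloader --threads 8 > download.log 2>&1 &
echo $! > download.pid   # save the PID so I can monitor the job later
disown                   # detach the job from the current shell
```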
The Observation: I noticed a major difference in memory consumption depending on how I connect to the server:

- Via standard SSH: memory usage stays stable at around 40 GB.
- Via the Antigravity Terminal: with the exact same command, RAM usage (RSS) starts at 40 GB but quickly climbs past 160 GB and keeps increasing.
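In case it helps anyone reproduce the measurement, this is roughly how I sample RSS over time (the process name is again a placeholder):

```shell
# Sample the resident set size (RSS, in KiB) of the newest matching process
# once a minute until it exits. "my_downloader" is a placeholder name.
pid=$(pgrep -n my_downloader)
while kill -0 "$pid" 2>/dev/null; do
    date +%T                 # timestamp for the sample
    ps -o rss= -p "$pid"     # RSS in KiB for this PID
    sleep 60
done
```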
A few details:

- I'm not using any AI agents or active prompts during this process, just the terminal connection.
- I'm worried that the terminal emulator, or an environment-specific setting, might be buffering I/O or terminal logs in a way that leads to this 4x memory spike.
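One experiment I'm considering to test the buffering theory: rerun the job in the Antigravity Terminal with all of its output silenced and see whether RSS still climbs (command name is a placeholder):

```shell
# If the terminal emulator is retaining the job's output, routing everything
# to /dev/null should flatten the memory curve; if RSS still climbs past
# 40 GB, the cause is elsewhere. "my_downloader" is a placeholder.
nohup my_downloader --threads 8 > /dev/null 2>&1 &
```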
Since I deal with large-scale multi-omics data, this memory overhead is quite significant for my work. Has anyone else seen this kind of behavior, or is there a specific setting in Antigravity I should adjust for high-I/O background tasks?
Thanks in advance for your patience and any insights!