I have two identical containers. One runs TensorFlow 2.x from the default pip repo and the other runs a self-compiled TensorFlow 2.x. Both run the same script, which deletes all of the variables that caused the memory leak (`del variable`) and then calls `gc.collect()`. The self-compiled TensorFlow container no longer leaks and runs as expected, with the memory-consuming variables cleared on each loop iteration. The container with TensorFlow from the default pip repo still shows a major memory leak: some object types' memory grows slowly on each loop iteration, much like the self-compiled build behaved before I implemented the variable dereferencing/clearing. I'm stumped. Why is TensorFlow from the repo not garbage collecting???
EDIT: garbage collection doesn't fail entirely. I do see some objects collected during each loop iteration, but some keep accumulating, unlike on the other container.
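To pin down which allocations survive collection on each iteration, a snapshot diff with the standard-library `tracemalloc` module can help (a generic diagnostic sketch, not specific to TensorFlow; `step` is a hypothetical stand-in for one iteration of the real loop):

```python
import gc
import tracemalloc

def diff_iterations(step, n_iters=3, top=5):
    # Snapshot allocations before and after several iterations and
    # report the biggest growth. Entries with a positive size_diff are
    # allocations that survived both `del` and gc.collect().
    tracemalloc.start()
    step()
    gc.collect()
    before = tracemalloc.take_snapshot()
    for _ in range(n_iters):
        step()
        gc.collect()
    after = tracemalloc.take_snapshot()
    tracemalloc.stop()
    return after.compare_to(before, "lineno")[:top]

# Demo: a deliberately leaky step that keeps appending to a list.
leaked = []
for stat in diff_iterations(lambda: leaked.append([0] * 10_000)):
    print(stat)
```

Comparing the top entries between the two containers should show whether the growth comes from Python-level objects or from allocations tracemalloc cannot see (native memory allocated inside the TensorFlow runtime would not appear here).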
Welcome to the TensorFlow Forum!
Please provide some more details, such as the OS, TensorFlow and Python versions you used, along with minimal reproducible code so we can replicate and understand the issue. Thank you.