The same script works for PyTorch, but PyTorch provides special PyPI wheels that support the NVIDIA A100 card. I guess something similar exists for TensorFlow, but I don't know where.
You don’t need -gpu. pip install tensorflow covers both CPU and GPU (see Install TensorFlow 2). But I would use the Docker image. TensorFlow versions 2.5 to 2.9 (current) use cuDNN v8.1 and CUDA v11.2, but that can change in the future. Being able to change TF versions without having to fiddle with CUDA is great. It also makes the difference between remote and local development smaller, so there’s less overhead when you switch between the two.
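Something like this is roughly what I mean (a sketch, not a full recipe: the image tag is just an example, and `--gpus all` assumes you have the NVIDIA Container Toolkit set up on the host):

```bash
# Pull a GPU-enabled TensorFlow image and check that the GPU is visible inside it.
# Pick whatever TF version tag you actually need.
docker run --gpus all -it --rm tensorflow/tensorflow:2.9.1-gpu \
  python -c "import tensorflow as tf; print(tf.config.list_physical_devices('GPU'))"
```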
But what are the Python, cuDNN, and CUDA versions in your environment?
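For example, you could report them with something like this (assuming TensorFlow imports at all; `tf.sysconfig.get_build_info()` lists the CUDA/cuDNN versions TF was built against):

```bash
python --version
nvidia-smi   # driver version and visible GPUs
python -c "import tensorflow as tf; print(tf.__version__); print(tf.sysconfig.get_build_info())"
```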
Thanks Mog for the hint about using docker containers. Makes sense.
In case anyone doesn’t want to use containers, it is also possible to get the NVIDIA libraries via Miniconda. NVIDIA provides Python packages that wrap their drivers, but only on Conda (and not PyPI).
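Something along these lines should work (a sketch assuming the cuDNN 8.1 / CUDA 11.2 combination mentioned above; the environment name, channel, and version pins may need adjusting for your setup):

```bash
# Create an environment, install the CUDA runtime and cuDNN from conda-forge,
# then install TensorFlow itself from PyPI into that environment.
conda create -n tf python=3.9
conda activate tf
conda install -c conda-forge cudatoolkit=11.2 cudnn=8.1
pip install "tensorflow==2.9.*"
```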