How to copy data from gpu to cpu when saving checkpoints

I am interested in TensorFlow's workflow for saving models or weights. I read the source code, but I can't find the logic that copies data from GPU to CPU. If the model runs on a GPU, the data and metadata should be copied from GPU to CPU when checkpoints are saved. PyTorch, for example, ultimately calls cudaMemcpy or a similar copy function, but I don't see any such copy function in TensorFlow. I am confused.

@Freya_Rao Welcome to the forum!

In TensorFlow, you can use tf.identity() inside a tf.device() scope to copy tensors between CPU and GPU memory. TensorFlow has no direct user-facing equivalent of cudaMemcpy() (a CUDA runtime function, which is what PyTorch ends up calling under the hood), but tf.identity() achieves similar functionality: when the op is placed on a different device than its input, the runtime inserts the necessary copy.

Also, note that TensorFlow automatically manages data transfers between CPU and GPU memory for operations in the computation graph, and this includes writing checkpoints. However, if you need explicit control over data movement, you can pin tf.identity() to a device with tf.device() to force an explicit copy between CPU and GPU memory.
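To connect this back to the original question about checkpoints, here is a minimal sketch (the checkpoint path and variable name are illustrative): tf.train.Checkpoint serializes variable values to disk regardless of which device they live on, performing any device-to-host transfer internally.

```python
import tensorflow as tf

# Use the GPU if one is visible, otherwise fall back to the CPU.
device = '/GPU:0' if tf.config.list_physical_devices('GPU') else '/CPU:0'

with tf.device(device):
    v = tf.Variable([1.0, 2.0, 3.0], name='weights')

# Saving copies the variable's value to host memory and writes it to disk;
# no explicit GPU-to-CPU copy is needed in user code.
ckpt = tf.train.Checkpoint(weights=v)
path = ckpt.save('/tmp/demo_ckpt')

# Restoring also works no matter where the variable is placed.
v.assign([0.0, 0.0, 0.0])
ckpt.restore(path)
print(v.numpy())  # [1. 2. 3.]
```

The point is that the copy is an implementation detail of the runtime, not something the checkpointing code spells out at the Python level.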

Here are the steps to copy data from CPU to GPU and vice versa:

  1. Create a tensor on the CPU:

    tensor_cpu = tf.constant([1, 2, 3])
    
  2. Copy the tensor to the GPU by running tf.identity() under a tf.device() scope (the older Tensor.gpu() method is deprecated):

    with tf.device('/GPU:0'):
        tensor_gpu = tf.identity(tensor_cpu)
    
  3. Perform operations on the tensor on the GPU.

  4. Copy the tensor back to the CPU the same way:

    with tf.device('/CPU:0'):
        tensor_cpu_copy = tf.identity(tensor_gpu)
    

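The steps above can be combined into one runnable sketch. The device strings assume a single-GPU machine; the snippet falls back to the CPU when no GPU is visible, so it runs either way.

```python
import tensorflow as tf

# Step 1: a tensor created in eager mode starts out in host memory.
tensor_cpu = tf.constant([1, 2, 3])

# Step 2: copy it to the GPU if one is available.
target = '/GPU:0' if tf.config.list_physical_devices('GPU') else '/CPU:0'
with tf.device(target):
    tensor_dev = tf.identity(tensor_cpu)  # materializes a copy on `target`

# Step 4: copy it back to host memory.
with tf.device('/CPU:0'):
    tensor_back = tf.identity(tensor_dev)

print(tensor_back.device)   # ends with 'CPU:0'
print(tensor_back.numpy())  # [1 2 3]
```

Checking the `.device` attribute is a quick way to confirm where a tensor actually lives.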
Let us know if this solves your query.

Thanks for your reply! I have a deeper question: what is the bottom layer of tf.identity()? I can't find its low-level implementation in the source code. I'm confused because moving data between GPU and CPU should require a library call, such as something from the CUDA libraries.