Hi,
Is there an efficient way to update only the unpruned weights of a pruned TF model during model retraining?
One way to do this is to keep a binary mask and multiply the gradient by the mask, but that adds extra computation on every training step.
I'm looking for an efficient way to do this without those additional calculations.
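To clarify, the mask-based approach I mean looks roughly like this (a NumPy sketch with toy values, just to illustrate the extra multiply):

```python
import numpy as np

# Toy example: weights pruned with a fixed 0/1 mask (illustrative values).
weights = np.array([0.5, 0.0, -0.3, 0.0, 0.8])
mask    = np.array([1.0, 0.0, 1.0, 0.0, 1.0])   # 0 = pruned position
grad    = np.array([0.1, 0.2, -0.1, 0.3, 0.05])
lr = 0.01

# The extra full-size multiply on every step: zero out gradients of
# pruned weights so the update leaves them at exactly 0.
weights -= lr * (grad * mask)
```

The `grad * mask` product runs over the whole weight tensor each step, even though most entries of a heavily pruned model contribute nothing.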
Highly appreciate any suggestion on this.
Hi @Sampath_Rajapaksha,
Sorry for the delayed response. You can update the unpruned weights of a model efficiently by representing the gradient as a sparse tensor, e.g. `gradient = tf.sparse.from_dense(gradient)`. TensorFlow also supports optimized operations on sparse tensors, which reduces the computational overhead. Please find the reference here for optimized structural pruning.
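As a rough illustration of the idea (shown in NumPy so the sketch is self-contained; in TensorFlow you would build the sparse representation with `tf.sparse.from_dense` and apply it with a scatter-style update such as `tf.tensor_scatter_nd_sub`):

```python
import numpy as np

# Precompute the indices of unpruned weights once, after pruning.
weights = np.array([0.5, 0.0, -0.3, 0.0, 0.8])
mask    = np.array([1, 0, 1, 0, 1], dtype=bool)
active  = np.flatnonzero(mask)        # indices of unpruned weights

grad = np.array([0.1, 0.2, -0.1, 0.3, 0.05])
lr = 0.01

# Scatter-update only the active entries each step; pruned entries
# are never read or written, so no full-size mask multiply is needed.
weights[active] -= lr * grad[active]
```

Since the pruning mask is fixed during retraining, the index extraction is a one-time cost, and each step then touches only the nonzero entries.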
Thank You