Hi all,
I am attempting to implement Conv-TasNet for a course project. My train_step code is at:
I’ve read the docs on the possible causes of tape.gradient returning all Nones. I’ve made sure to remove any NumPy code from my custom loss function, and I’m not sure what else could be causing the issue.
I don’t think the loss function itself is the cause: I ran a debugging session with a built-in loss function instead, and got the same result (all Nones).
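For reference, here’s a minimal sketch of the failure mode I’ve been trying to rule out — a loss that drops into NumPy (which severs the tape) versus a pure-TF loss. The model and data here are toy stand-ins, not my actual Conv-TasNet code:

```python
import tensorflow as tf

# Hypothetical stand-in model and data, purely for illustration.
model = tf.keras.Sequential([tf.keras.layers.Dense(1)])
x = tf.random.normal((4, 3))
y = tf.random.normal((4, 1))

def numpy_loss(y_true, y_pred):
    # Leaving the TF graph via .numpy() severs the tape's record,
    # so every gradient comes back as None.
    return tf.constant(((y_true.numpy() - y_pred.numpy()) ** 2).mean())

def tf_loss(y_true, y_pred):
    # Staying in tf.* ops keeps the computation differentiable.
    return tf.reduce_mean(tf.square(y_true - y_pred))

with tf.GradientTape() as tape:
    bad_loss = numpy_loss(y, model(x, training=True))
bad_grads = tape.gradient(bad_loss, model.trainable_variables)
# bad_grads: all None, because the tape lost track at .numpy()

with tf.GradientTape() as tape:
    good_loss = tf_loss(y, model(x, training=True))
good_grads = tape.gradient(good_loss, model.trainable_variables)
# good_grads: real tensors
```

Since my custom loss stays in tf ops and a built-in loss shows the same symptom, I believe the break must be happening somewhere else.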
The trainable_variables are tracked by the tape, so the gradient should be computed with respect to a list of Variables as intended — at least according to my understanding of TensorFlow and Keras, which only spans about six weeks at the time of this posting.
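This is the kind of sanity check I ran to confirm the tape is watching the right variables (again with a toy model standing in for mine):

```python
import tensorflow as tf

# Toy stand-ins for the real model, data, and loss.
model = tf.keras.Sequential([tf.keras.layers.Dense(2), tf.keras.layers.Dense(1)])
x = tf.random.normal((4, 3))
y = tf.random.normal((4, 1))
loss_fn = tf.keras.losses.MeanSquaredError()

with tf.GradientTape() as tape:
    loss = loss_fn(y, model(x, training=True))

# Every trainable variable should appear in the tape's watch list;
# anything missing was not touched by the forward pass the tape recorded.
watched = {id(v) for v in tape.watched_variables()}
missing = [v.name for v in model.trainable_variables if id(v) not in watched]
print(missing)  # an empty list means the tape saw everything
```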
I used the MelGAN example as a guide for implementing a similar model. The main difference I can see between their code and mine is that I am not using (what appears to be) functional API concepts in my ConvBlock. Could that be the issue?
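For context, my ConvBlock uses the subclassing style rather than the functional API — roughly like this sketch (the layer types and sizes are made up, not my actual code). My understanding is that subclassing is fine for gradients as long as the sublayers are created in __init__ (or build) rather than inside call, since layers instantiated in call get fresh, untracked variables on every forward pass:

```python
import tensorflow as tf

class ConvBlock(tf.keras.layers.Layer):
    """Illustrative subclassed block; sizes are assumptions, not Conv-TasNet's."""

    def __init__(self, filters, kernel_size, **kwargs):
        super().__init__(**kwargs)
        # Sublayers created here are tracked as this layer's variables.
        # Creating them inside call() instead would break gradient flow.
        self.conv = tf.keras.layers.Conv1D(filters, kernel_size, padding="same")
        self.norm = tf.keras.layers.LayerNormalization()

    def call(self, inputs):
        return tf.nn.relu(self.norm(self.conv(inputs)))

block = ConvBlock(8, 3)
out = block(tf.random.normal((2, 16, 4)))
# conv kernel + bias, norm gamma + beta -> 4 tracked variables
print(len(block.trainable_variables))
```

If the subclassing pattern itself is not the problem, I’d appreciate any pointers on what else differs between this style and the functional approach in the MelGAN example.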
Regarding system/env:
Mac M1 Max
Python 3.9.7
TensorFlow/Keras 2.7
Conda environment is managed with miniforge.