When fitting a model, I’m getting a strange error message
TypeError: Expected variant passed to parameter 'encoded_ragged_grad' of op 'RaggedTensorToVariantGradient', got <tensorflow.python.framework.indexed_slices.IndexedSlices object at 0x7fc7ba7131f0> of type 'IndexedSlices' instead. Error: Value passed to parameter 'data' has DataType variant not in list of allowed values: float32, float64, int32, uint8, int16, int8, complex64, int64, qint8, quint8, qint32, bfloat16, qint16, quint16, uint16, complex128, float16, uint32, uint64
I’ve searched the documentation, but the only references I can find to encoded_ragged_grad or RaggedTensorToVariantGradient are in the Java documentation. I am using a custom Hyena layer in my model, but I can see no reason why it should confuse the gradient calculation, and I have confirmed that the dtype of every layer in the model is float32. Can anyone suggest what is wrong?
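(For reference, checking layer dtypes can be done along these lines — a minimal sketch using a placeholder toy model rather than the actual Hyena model from the question:)

import tensorflow as tf

# Not the original poster's code; just one way to confirm that every layer
# in a Keras model reports float32. The toy model here is a stand-in.
model = tf.keras.Sequential([tf.keras.layers.Embedding(100, 8),
                             tf.keras.layers.Dense(8)])
for layer in model.layers:
    print(layer.name, layer.dtype, layer.compute_dtype)  # expect float32 / float32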
Hi @Peter_Bleackley,
Below are my thoughts, which may give you some idea of how to overcome this problem:
The reason you are getting this error is that the fit() method of your model is returning a Tensor object, not a variant object. This is probably because you are using a custom Hyena layer in your model.
To fix this error, you need to change the return type of the fit() method of your model to variant. You can do this by changing the return type of the call() method of the Hyena layer to variant.
Below is just pseudocode:
class HyenaLayer(tf.keras.layers.Layer):
    def call(self, inputs):
        y = tf.ragged.constant([[1.0, 2.0], [3.0]])  # layer output as a RaggedTensor
        return tf.raw_ops.RaggedTensorToVariant(  # encode it as a variant tensor
            rt_nested_splits=list(y.nested_row_splits),
            rt_dense_values=y.flat_values, batched_input=True)
For more details, you can refer to the TensorFlow Data types and Ragged tensor docs.
I hope this helps!
Thanks
Thanks. That gets me a little further. Presumably, I need to use tensorflow.raw_ops.RaggedTensorToVariant to convert my result to a variant. Given that, as would be expected for an NLP model, the ragged tensor y represents a batch of variable-length sequences of vectors, so its size is (None, None, width), what arguments should I supply?
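(For illustration, a minimal sketch of what such a call might look like, assuming y has ragged_rank 1 — only the sequence axis is ragged — and a fixed vector width; the toy data and the batched_input=True choice are assumptions made for the example, not taken from the thread:)

import tensorflow as tf

# Rough sketch: a ragged batch of shape (None, None, width), where only the
# sequence axis is ragged (ragged_rank 1). The data below is made up.
width = 4
y = tf.ragged.constant(
    [[[1.0] * width, [2.0] * width], [[3.0] * width]], ragged_rank=1)

encoded = tf.raw_ops.RaggedTensorToVariant(
    rt_nested_splits=list(y.nested_row_splits),  # one row-splits tensor per ragged dimension
    rt_dense_values=y.flat_values,               # dense values, shape (total_timesteps, width)
    batched_input=True)                          # one variant per sequence in the batch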