I'm training a SegNet model, and my dataset contains images of different resolutions. For this reason, I need to use ragged tensors when batching for training. To do this, I wrote the simplest possible layer:
class RaggedToDenseTensor(tf.keras.layers.Layer):
    def __init__(self, **kwargs):
        super(RaggedToDenseTensor, self).__init__(**kwargs)

    def call(self, inputs):
        if isinstance(inputs, tf.RaggedTensor):
            inputs = inputs.to_tensor()
        return inputs
This layer receives the tensor produced by Input:
x = layers.Input(shape=(None, None, 3)) # 3-channel RGB image
x = RaggedToDenseTensor()(x)
…
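For reference, here is a minimal self-contained sketch of what I am trying to do (the `ragged=True` flag on `Input` is my assumption about how Keras is supposed to receive a ragged batch; `RaggedToDenseTensor` is the fixed version of the layer above):

```python
import tensorflow as tf

# Fixed version of the layer (note: the base class is tf.keras.layers.Layer).
class RaggedToDenseTensor(tf.keras.layers.Layer):
    def call(self, inputs):
        if isinstance(inputs, tf.RaggedTensor):
            return inputs.to_tensor()  # zero-pads to the batch maximum
        return inputs

# Assumption: marking the Input as ragged so Keras feeds a tf.RaggedTensor
# through the graph instead of a dense placeholder.
inputs = tf.keras.Input(shape=(None, None, 3), ragged=True)
outputs = RaggedToDenseTensor()(inputs)
model = tf.keras.Model(inputs, outputs)

# Two "images" of different resolutions stacked into one ragged batch.
batch = tf.ragged.stack([tf.zeros([2, 3, 3]), tf.zeros([4, 5, 3])])
out = model(batch)  # dense, padded to the largest image: shape (2, 4, 5, 3)
```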
There are no errors in eager mode. But when eager execution is disabled, this error occurs:
…
target = tf.convert_to_tensor(target)
TypeError: Failed to convert elements of tf.RaggedTensor(values=tf.RaggedTensor(values=Tensor("data_4:0", shape=(None,), dtype=float32), row_splits=Tensor("data_6:0", shape=(None,), dtype=int64)), row_splits=Tensor("data_5:0", shape=(None,), dtype=int64)) to Tensor. Consider casting elements to a supported type.
Maybe I don't need ragged tensors at all: I train with mini-batches, and when a ragged mini-batch is converted to a dense tensor, all the training examples get zero-padded to the same shape anyway. Should I do the padding once, before calling Model.fit, and save time that way?
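To illustrate the pad-up-front idea, this is a sketch (the toy image list is a hypothetical stand-in for my dataset) of padding every image to the global maximum size once, so Model.fit can then receive an ordinary dense tensor:

```python
import tensorflow as tf

# Hypothetical toy "dataset": images of different resolutions.
images = [tf.zeros([2, 3, 3]), tf.zeros([4, 5, 3])]

# Pad every image to the global maximum height/width once, before training,
# instead of re-padding each mini-batch on every epoch.
max_h = max(int(img.shape[0]) for img in images)
max_w = max(int(img.shape[1]) for img in images)
padded = tf.stack([
    tf.pad(img, [[0, max_h - img.shape[0]], [0, max_w - img.shape[1]], [0, 0]])
    for img in images
])
print(padded.shape)  # (2, 4, 5, 3)
```

The obvious downside is memory: every example is padded to the dataset-wide maximum, not just the maximum of its own mini-batch.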
Or, in cases like this, do I need to write my own training loop to be able to train in batch mode?
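If a custom loop is the answer, I imagine something like the following minimal sketch (the model, optimizer, and shapes here are placeholder assumptions, not my real SegNet): each ragged mini-batch is densified just before the forward pass, so the model only ever sees dense tensors.

```python
import tensorflow as tf

# Placeholder model/optimizer/loss; the real network would be SegNet.
model = tf.keras.Sequential([tf.keras.layers.Conv2D(8, 3, padding="same")])
optimizer = tf.keras.optimizers.Adam()
loss_fn = tf.keras.losses.MeanSquaredError()

def train_step(ragged_images, ragged_labels):
    x = ragged_images.to_tensor()   # zero-pad this batch to its own maximum
    y = ragged_labels.to_tensor()
    with tf.GradientTape() as tape:
        pred = model(x, training=True)
        loss = loss_fn(y, pred)
    grads = tape.gradient(loss, model.trainable_variables)
    optimizer.apply_gradients(zip(grads, model.trainable_variables))
    return loss

# One step on a toy ragged batch of two differently sized images.
images = tf.ragged.stack([tf.zeros([4, 4, 3]), tf.zeros([6, 6, 3])])
labels = tf.ragged.stack([tf.zeros([4, 4, 8]), tf.zeros([6, 6, 8])])
loss = train_step(images, labels)
```

For segmentation targets, the zero-padded label pixels would also need to be masked out of the loss; the sketch ignores that for brevity.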