Hello,
I have a model that I can train with a normal model.fit() call. Everything works well: training and validation accuracy rise as expected.
The problem is that when I use generators, the model is unable to train; the accuracy just oscillates between a few fixed low values.
I do not make any changes to the data or the model. I just add the generators like this:
import numpy as np
from tensorflow.keras.utils import Sequence

class DataGenerator(Sequence):
    def __init__(self, x, y, batch_size):
        self.x = x
        self.y = y
        self.batch_size = batch_size
        self.num_samples = x.shape[0]
        # Drop the last partial batch.
        self.num_batches = int(np.floor(self.num_samples / self.batch_size))

    def __len__(self):
        return self.num_batches

    def __getitem__(self, index):
        # Return one contiguous batch; the order never changes between epochs.
        start_idx = index * self.batch_size
        end_idx = (index + 1) * self.batch_size
        batch_x = self.x[start_idx:end_idx]
        batch_y = self.y[start_idx:end_idx]
        return batch_x, batch_y
train_generator = DataGenerator(train_x, train_y, 32)
val_generator = DataGenerator(test_x, test_y, 32)
mymodel.fit(train_generator,
            epochs=num_epoch,
            steps_per_epoch=len(train_generator),
            validation_data=val_generator,
            validation_steps=len(val_generator),
            callbacks=[model_checkpoint_callback, custom_print_samples])
Whereas this works:
mymodel.fit(train_x, train_y,
            epochs=999,
            batch_size=32,
            verbose=2,
            validation_data=(test_x, test_y),
            callbacks=[model_checkpoint_callback, custom_print_samples])
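To check whether the generator returns the right data at all, its batches can be compared against direct slices of the arrays (a minimal check, assuming train_x and train_y are NumPy arrays):

import numpy as np

# The first generator batch should be identical to a direct slice.
batch_x, batch_y = train_generator[0]
assert np.array_equal(batch_x, train_x[:32])
assert np.array_equal(batch_y, train_y[:32])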
I know there was a similar bug with the old fit_generator function. Could that old bug also affect model.fit when used with generators?
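One functional difference I can see between the two calls: with NumPy arrays, fit() shuffles the samples before every epoch by default, while my Sequence always returns each batch with the same samples in the same order (and, because of the floor, the last partial batch is never seen). If that turns out to be the cause rather than a bug, a shuffling variant would look something like this (just a sketch; ShufflingDataGenerator and its shuffling logic are my own, not from my current script):

import numpy as np
from tensorflow.keras.utils import Sequence

class ShufflingDataGenerator(Sequence):
    # Same as DataGenerator above, but reshuffles the sample order every epoch.
    def __init__(self, x, y, batch_size):
        self.x = x
        self.y = y
        self.batch_size = batch_size
        self.indices = np.arange(x.shape[0])
        np.random.shuffle(self.indices)  # start from a random order

    def __len__(self):
        # Number of full batches per epoch (partial batch still dropped).
        return int(np.floor(self.indices.size / self.batch_size))

    def __getitem__(self, index):
        idx = self.indices[index * self.batch_size:(index + 1) * self.batch_size]
        return self.x[idx], self.y[idx]

    def on_epoch_end(self):
        # Keras calls this after every epoch; reshuffle so batches differ.
        np.random.shuffle(self.indices)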
TensorFlow version: 2.12.1