I’m building a simple binary image-classification model. The code to compile and train the model is very straightforward:
Compile code:
model.compile(
    loss="binary_crossentropy",
    optimizer=optimizers.Adam(
        learning_rate=learning_rate,
        epsilon=0.1,
    ),
    metrics=["binary_accuracy"],
)
Training code:
history = model.fit(
    train_ds,
    validation_data=val_ds,
    epochs=epochs,
    verbose=1,
    callbacks=[callback],
)
It all runs fine, but the validation accuracy is lower than I hoped for. Since these are images, one thing I want to do is actually view the images that fail validation (i.e. the validation images the model misclassifies) and see whether that gives me any insight.
My questions are therefore:
a) Is this a reasonable thing to do? (And if not, why not?)
b) If it is reasonable, are there any standard ways to view this kind of ‘fails validation’ data? And
c) If there are no standard ways, any hints on how to go about it myself? (I’ve sketched my rough idea below.)
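To make it concrete, here is a minimal sketch of the kind of thing I have in mind. I’m assuming val_ds yields (images, labels) batches, that the model ends in a single sigmoid unit so predictions are probabilities in [0, 1], and that pixel values are in the 0–255 range; none of that is guaranteed by the code above, it’s just my setup:

import matplotlib.pyplot as plt

misclassified_images = []
misclassified_info = []   # (true label, predicted probability)

# Walk the validation set batch by batch and collect the failures.
for images, labels in val_ds:
    probs = model.predict(images, verbose=0).reshape(-1)  # sigmoid outputs
    preds = (probs >= 0.5).astype(int)                    # threshold at 0.5
    labels = labels.numpy().reshape(-1).astype(int)
    for img, label, pred, prob in zip(images, labels, preds, probs):
        if pred != label:
            misclassified_images.append(img.numpy())
            misclassified_info.append((label, prob))

# Show the first few failures in a grid.
n = min(9, len(misclassified_images))
for i in range(n):
    plt.subplot(3, 3, i + 1)
    plt.imshow(misclassified_images[i].astype("uint8"))  # assumes 0-255 pixels
    true_label, prob = misclassified_info[i]
    plt.title(f"true={true_label}, p={prob:.2f}")
    plt.axis("off")
plt.tight_layout()
plt.show()

Is something along these lines how people normally do it, or is there a more standard approach?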
I’m a bit perplexed right now, because viewing the images that fail validation seems so obvious to me, yet I can’t find discussions of it online. Either it’s not done for some reason, or I’m using the wrong search terms to find those discussions.
Thanks a lot! Really hope someone replies to this.