Viewing 'failed validation' images

I’m building a simple binary image classification model. The code to compile and train the model is very straightforward:

Compile code:

    model.compile(
        loss="binary_crossentropy",
        optimizer=optimizers.Adam(
            learning_rate=learning_rate,
            epsilon=0.1,
        ),
        metrics=["binary_accuracy"],
    )

Training code:

    history = model.fit(
        train_ds,
        validation_data=val_ds,
        epochs=epochs,
        verbose=1,
        callbacks=[callback],
    )

It all runs fine, but the validation accuracy is lower than I hoped for. Since these are images, one of the things I want to do is actually view the images that fail validation and see whether that gives me any insights.

My questions are therefore:
a) Is this a reasonable thing to do? (And if not, why not?)
b) If it is reasonable, are there any standard ways to view this kind of ‘fails validation’ data? And
c) If there are no standard ways to view this data, any hints on how to go about it myself?

I’m a bit perplexed right now, because viewing images that fail validation seems so obvious to me, yet I can’t seem to find any discussion of it online. So either it’s not done for some reason, or I’m using the wrong search terms to find the discussions :slight_smile:
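
In case it helps clarify what I mean, here is the rough kind of thing I have in mind. It’s just a sketch, and it assumes `val_ds` yields `(image, label)` batches with pixel values in the 0–255 range, and that the model outputs a single sigmoid probability per image:

    import numpy as np
    import matplotlib.pyplot as plt

    misclassified_images = []
    misclassified_labels = []

    # Walk the validation set batch by batch, keeping the images the model gets wrong
    for images, labels in val_ds:
        probs = model.predict(images, verbose=0)           # sigmoid outputs in [0, 1]
        preds = (probs.squeeze() > 0.5).astype("int32")    # threshold at 0.5
        wrong = preds != labels.numpy().astype("int32")
        misclassified_images.append(images.numpy()[wrong])
        misclassified_labels.append(labels.numpy()[wrong])

    misclassified_images = np.concatenate(misclassified_images)
    misclassified_labels = np.concatenate(misclassified_labels)

    # Show the first few failures together with their true labels
    plt.figure(figsize=(10, 10))
    for i in range(min(9, len(misclassified_images))):
        plt.subplot(3, 3, i + 1)
        plt.imshow(misclassified_images[i].astype("uint8"))
        plt.title(f"true label: {misclassified_labels[i]}")
        plt.axis("off")
    plt.show()

Is something along these lines a sensible approach, or is there a better/standard way?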

Thanks a lot! Really hope someone replies to this.

Hi @GohOnLeeds, as far as I know, removing misclassified images from the validation dataset may result in removing valuable information that could help you understand the model’s weaknesses. Since your validation accuracy is lower than your training accuracy, this looks like a case of overfitting. There are a few techniques to overcome this problem (a short sketch of both follows the list below):

  • Data augmentation: if the training set contains only a small number of images, data augmentation can increase the number and variety of training samples.
  • Dropout layer: when you apply dropout to a layer, it randomly drops a fraction of that layer’s output units during training, which helps prevent overfitting.
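
For example, a minimal sketch of both techniques in Keras (just an illustration; the layer sizes and the 180×180×3 input shape are placeholders you would adapt to your own data):

    from tensorflow import keras
    from tensorflow.keras import layers

    # Augmentation layers are only active during training, not at inference time
    data_augmentation = keras.Sequential([
        layers.RandomFlip("horizontal"),
        layers.RandomRotation(0.1),
        layers.RandomZoom(0.1),
    ])

    model = keras.Sequential([
        keras.Input(shape=(180, 180, 3)),   # adjust to your image size
        data_augmentation,
        layers.Rescaling(1.0 / 255),
        layers.Conv2D(32, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dropout(0.5),                # randomly drops 50% of units during training
        layers.Dense(1, activation="sigmoid"),
    ])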

Thank You!