Training speed of CNN model is too slow even when using Google Colab

I am using a pretrained VGG16 model to classify 10,000 images across 8 classes, but a single training epoch takes about 40 minutes even with a GPU. What might be the problem?
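
One quick sanity check before digging into the code: confirm that TensorFlow actually sees the Colab GPU, since a CPU-only runtime alone would explain 40-minute epochs. A minimal sketch:

```python
import tensorflow as tf

# An empty list means no GPU is attached to the runtime (check
# Runtime -> Change runtime type in Colab) and training runs on the CPU.
print(tf.config.list_physical_devices('GPU'))
```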

Hi @Sai_Parayan

Could you please share a minimal reproducible code sample so we can replicate and understand the issue? Thank you.

[quote="Renu_Patel, post:3, topic:22397, full:true"]
Hi @Sai_Parayan

Could you please share a minimal reproducible code sample so we can replicate and understand the issue? Thank you.
[/quote]

```python
import tensorflow as tf
import os
from google.colab import drive

drive.mount('/content/drive')

# Load and preprocess the data
from tensorflow.keras.preprocessing.image import ImageDataGenerator

# Define the image size (adjust to match the model's input size)
image_size = (224, 224)

# Use an ImageDataGenerator to load and preprocess the data
datagen = ImageDataGenerator(
    rescale=1.0 / 255,    # normalize pixel values to [0, 1]
    validation_split=0.2  # adjust the validation split as needed
)

# Load and preprocess the training data
train_data = datagen.flow_from_directory(
    '/content/drive/MyDrive/Colab Notebooks/speckle',
    target_size=image_size,
    batch_size=64,             # adjust the batch size as needed
    class_mode='categorical',  # categorical for one-hot encoded labels
    subset='training'
)

# Load and preprocess the validation data
val_data = datagen.flow_from_directory(
    '/content/drive/MyDrive/Colab Notebooks/speckle',
    target_size=image_size,
    batch_size=64,
    class_mode='categorical',
    subset='validation'
)
```
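
A likely culprit for the slow epochs is that every batch is read from the mounted Google Drive folder, which is far slower than the Colab VM's local disk. A sketch of copying the dataset to local storage once per session (the local destination path is an assumption):

```python
import shutil

# Assumption: the dataset fits on the Colab VM's local disk.
# Reading each image from a mounted Drive folder on every batch is a
# common Colab bottleneck; copying once per session avoids it.
local_dir = '/content/speckle'  # hypothetical local destination
shutil.copytree('/content/drive/MyDrive/Colab Notebooks/speckle', local_dir)

# Then pass local_dir to flow_from_directory instead of the Drive path.
```
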
```python
from tensorflow.keras.applications import VGG16
from tensorflow.keras import layers, Model
import matplotlib.pyplot as plt

# Load the pre-trained VGG16 model without the top classification layers
base_model = VGG16(weights='imagenet', include_top=False, input_shape=(224, 224, 3))

# Freeze the weights of the pre-trained layers
for layer in base_model.layers:
    layer.trainable = False

# Add custom classification layers
x = base_model.output
x = layers.Flatten()(x)
# x = layers.BatchNormalization()(x)
# x = layers.Dense(64, activation='relu')(x)
outputs = layers.Dense(8, activation='softmax')(x)
model = Model(inputs=base_model.input, outputs=outputs)
model.summary()
model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])
```
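
Since every VGG16 layer is frozen, the convolutional features never change, yet `model.fit` recomputes them on each of the 75 epochs. A sketch of computing them once and training only the small head on the cached arrays (the `feat_gen` and `head` names are illustrative; this assumes the base stays frozen for the whole run):

```python
# Precompute the frozen VGG16 features once; this removes the expensive
# convolutional forward pass from every subsequent epoch.
feat_gen = datagen.flow_from_directory(
    '/content/drive/MyDrive/Colab Notebooks/speckle',
    target_size=image_size,
    batch_size=64,
    class_mode='categorical',
    subset='training',
    shuffle=False  # keep file order so features line up with labels
)
features = base_model.predict(feat_gen)                      # (n, 7, 7, 512)
labels = tf.keras.utils.to_categorical(feat_gen.classes, 8)  # one-hot labels

# Train only the lightweight classification head on the cached features.
head = tf.keras.Sequential([
    layers.Flatten(input_shape=features.shape[1:]),
    layers.Dense(8, activation='softmax')
])
head.compile(optimizer='adam', loss='categorical_crossentropy',
             metrics=['accuracy'])
head.fit(features, labels, epochs=75, batch_size=64, validation_split=0.2)
```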

```python
# Re-create the generators with explicit shuffle settings
train_data = datagen.flow_from_directory(
    '/content/drive/MyDrive/Colab Notebooks/speckle',
    target_size=image_size,
    batch_size=64,
    class_mode='categorical',
    subset='training',
    shuffle=True
)
val_data = datagen.flow_from_directory(
    '/content/drive/MyDrive/Colab Notebooks/speckle',
    target_size=image_size,
    batch_size=64,
    class_mode='categorical',
    subset='validation',
    shuffle=False
)

from tensorflow.keras.callbacks import ModelCheckpoint

checkpoint_path = '/content/drive/MyDrive/weights.{epoch:02d}-{val_loss:.2f}.h5'
# save_freq='epoch' is required here: val_accuracy only appears in the logs
# at the end of an epoch, so a batch-count save_freq cannot monitor it.
checkpoint_callback = ModelCheckpoint(
    checkpoint_path,
    save_weights_only=True,
    save_freq='epoch',
    monitor='val_accuracy',
    mode='max',
    save_best_only=True
)

epochs = 75  # adjust the number of epochs
history = model.fit(train_data, validation_data=val_data, epochs=epochs,
                    callbacks=[checkpoint_callback])
```
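
If the input pipeline itself is the bottleneck, a `tf.data`-based loader with prefetching usually beats the legacy `ImageDataGenerator` (which recent TF releases deprecate). A sketch under the same directory layout and hyperparameters as above; the `seed` value is arbitrary:

```python
# tf.data pipeline: file reads and preprocessing overlap with GPU compute.
train_ds = tf.keras.utils.image_dataset_from_directory(
    '/content/drive/MyDrive/Colab Notebooks/speckle',
    validation_split=0.2,
    subset='training',
    seed=123,
    image_size=(224, 224),
    batch_size=64,
    label_mode='categorical'
)
val_ds = tf.keras.utils.image_dataset_from_directory(
    '/content/drive/MyDrive/Colab Notebooks/speckle',
    validation_split=0.2,
    subset='validation',
    seed=123,
    image_size=(224, 224),
    batch_size=64,
    label_mode='categorical'
)

normalize = lambda x, y: (x / 255.0, y)  # same rescale as the generators
train_ds = train_ds.map(normalize).prefetch(tf.data.AUTOTUNE)
val_ds = val_ds.map(normalize).prefetch(tf.data.AUTOTUNE)

history = model.fit(train_ds, validation_data=val_ds, epochs=epochs,
                    callbacks=[checkpoint_callback])
```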