Hi,
I would like to fine-tune VGG16 by adding a sigmoid activation to its last fully connected layer, then feed the per-frame outputs into an LSTM via TimeDistributed.
Could you please check the code below, modified from the above, and let me know?
# Imports (assuming tf.keras)
import tensorflow as tf
from tensorflow.keras.applications import VGG16
from tensorflow.keras.layers import TimeDistributed, LSTM, Dense, Dropout
from tensorflow.keras.models import Sequential

# Load the VGG16 model
vgg_model = VGG16(weights='imagenet', include_top=True, input_shape=(224, 224, 3))
#Freeze the weights of the VGG16 model
for layer in vgg_model.layers:
    layer.trainable = False
# Swap the final softmax for a sigmoid activation. Assign the activation
# *function*, not the string 'sigmoid' -- Dense applies self.activation at
# call time, and a bare string is not callable
vgg_model.layers[-1].activation = tf.keras.activations.sigmoid
# Wrap VGG16 in a TimeDistributed layer so it runs on every frame;
# as the first layer it needs an input shape of (timesteps, 224, 224, 3)
time_distributed = TimeDistributed(vgg_model, input_shape=(None, 224, 224, 3))
# Add an LSTM layer to the TimeDistributed layer
lstm = LSTM(256, activation='relu')
# Add a Dense layer after the LSTM
dense1 = Dense(64, activation='relu')
# Add a dropout layer for regularization (build it here; the model
# does not exist yet, so model.add() cannot be called at this point)
dropout = Dropout(0.5)
# Output layer: 5 linear units (a regression-style head)
dense2 = Dense(5, activation='linear')
# Assemble the model
model = Sequential()
model.add(time_distributed)
model.add(lstm)
model.add(dense1)
model.add(dropout)
model.add(dense2)
# A linear output pairs with a regression loss; categorical_crossentropy
# expects softmax probabilities (and accuracy is not meaningful here, so MAE
# is used as the metric instead)
model.compile(loss='mse', optimizer='adam', metrics=['mae'])
# Train the model
model.fit(x_train, y_train, epochs=10)
# Evaluate the model
loss, accuracy = model.evaluate(x_test, y_test)
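As a sanity check before training, the pipeline above can be exercised end to end on dummy data to confirm the expected input shape, (samples, timesteps, 224, 224, 3). This is a minimal sketch: the clip/frame counts are made up for illustration, and `weights=None` is used so no ImageNet download is needed just to verify shapes.

```python
import numpy as np
from tensorflow.keras.applications import VGG16
from tensorflow.keras.layers import TimeDistributed, LSTM, Dense, Dropout
from tensorflow.keras.models import Sequential

# Hypothetical dimensions, not from the original post: 4 clips of 3 frames each
num_clips, num_frames = 4, 3

# weights=None keeps the check lightweight (random weights, same architecture)
base = VGG16(weights=None, include_top=True, input_shape=(224, 224, 3))

model = Sequential([
    # Run VGG16 on every frame; each frame yields a 1000-dim feature vector
    TimeDistributed(base, input_shape=(num_frames, 224, 224, 3)),
    LSTM(256, activation='relu'),   # aggregate the frame sequence
    Dense(64, activation='relu'),
    Dropout(0.5),
    Dense(5, activation='linear'),  # regression-style head
])

# Dummy batch with the shape the model expects
x = np.random.rand(num_clips, num_frames, 224, 224, 3).astype('float32')
print(model.predict(x, verbose=0).shape)  # (4, 5)
```

If this prints `(4, 5)`, the TimeDistributed wrapping and LSTM input shapes line up, and the real `x_train` should be prepared with the same five-dimensional layout.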