from tensorflow.keras.layers import Input, Dense, Dropout, Embedding, LSTM, Concatenate
from tensorflow.keras.models import Model

def define_model(vocab_size, max_length, curr_shape):
    # Image-feature branch: pre-extracted CNN features -> dense projection
    inputs1 = Input(shape=curr_shape)
    fe1 = Dropout(0.5)(inputs1)
    fe2 = Dense(256, activation='relu')(fe1)

    # Text branch: token sequence -> embedding -> LSTM
    inputs2 = Input(shape=(max_length,))
    se1 = Embedding(vocab_size, 256, mask_zero=True)(inputs2)
    se2 = Dropout(0.5)(se1)
    se3 = LSTM(256)(se2)

    # Decoder: merge the two branches and predict the next word
    decoder1 = Concatenate()([fe2, se3])
    decoder2 = Dense(256, activation='relu')(decoder1)
    outputs = Dense(vocab_size, activation='softmax')(decoder2)

    model = Model(inputs=[inputs1, inputs2], outputs=outputs)
    model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
    model.summary()
    return model
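For reference, a minimal sketch of how I call the function; the values are the ones that show up in the summary below (feature vectors of length 1120, captions of up to 49 tokens, a 24358-word vocabulary):

vocab_size = 24358    # tokenizer vocabulary size
max_length = 49       # longest caption, in tokens
model = define_model(vocab_size, max_length, curr_shape=(1120,))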
The model summary is as follows:
Layer (type)                    Output Shape         Param #     Connected to
==================================================================================================
input_2 (InputLayer)            [(None, 49)]         0
__________________________________________________________________________________________________
input_1 (InputLayer)            [(None, 1120)]       0
__________________________________________________________________________________________________
embedding (Embedding)           (None, 49, 256)      6235648     input_2[0][0]
__________________________________________________________________________________________________
dropout (Dropout)               (None, 1120)         0           input_1[0][0]
__________________________________________________________________________________________________
dropout_1 (Dropout)             (None, 49, 256)      0           embedding[0][0]
__________________________________________________________________________________________________
dense (Dense)                   (None, 256)          286976      dropout[0][0]
__________________________________________________________________________________________________
lstm (LSTM)                     (None, 256)          525312      dropout_1[0][0]
__________________________________________________________________________________________________
concatenate (Concatenate)       (None, 512)          0           dense[0][0]
                                                                 lstm[0][0]
__________________________________________________________________________________________________
dense_1 (Dense)                 (None, 256)          131328      concatenate[0][0]
__________________________________________________________________________________________________
dense_2 (Dense)                 (None, 24358)        6260006     dense_1[0][0]
==================================================================================================
Total params: 13,439,270
Trainable params: 13,439,270
Non-trainable params: 0
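As a sanity check, the Param # column matches the standard layer formulas; a quick verification (my own arithmetic, not part of the training code):

vocab_size, embed_dim, units = 24358, 256, 256

assert vocab_size * embed_dim == 6235648                    # embedding
assert 1120 * 256 + 256 == 286976                           # dense (image branch)
assert 4 * (units * (embed_dim + units) + units) == 525312  # lstm (4 gates)
assert 512 * 256 + 256 == 131328                            # dense_1 (after concat)
assert 256 * vocab_size + vocab_size == 6260006             # dense_2 (softmax)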
I set

history = model.fit(train_generator, epochs=1, steps_per_epoch=train_steps, verbose=1,
                    callbacks=[checkpoint], validation_data=val_generator,
                    validation_steps=val_steps)

and model.predict() returns the same sentence every time. How can I choose the number of epochs and the learning rate to make the model better? My dataset is COCO, with 82,700 images for training and 40,500 for testing. The goal of the model is image captioning.
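For context, this is a minimal sketch of the kind of setup I mean, assuming the same generators and step counts as above; the learning rate, patience values, and epoch cap are illustrative, not tuned:

import tensorflow as tf

# Assumption: pass an explicit learning rate instead of the 'adam' string
# (the string default uses learning_rate=0.001).
model.compile(loss='categorical_crossentropy',
              optimizer=tf.keras.optimizers.Adam(learning_rate=1e-3),
              metrics=['accuracy'])

# Illustrative callbacks: stop once val_loss stalls and halve the learning
# rate on plateaus, so the epoch count can be set high as an upper bound.
callbacks = [
    tf.keras.callbacks.EarlyStopping(monitor='val_loss', patience=3,
                                     restore_best_weights=True),
    tf.keras.callbacks.ReduceLROnPlateau(monitor='val_loss', factor=0.5,
                                         patience=1),
]

history = model.fit(train_generator,
                    epochs=20,                # upper bound; EarlyStopping decides
                    steps_per_epoch=train_steps,
                    validation_data=val_generator,
                    validation_steps=val_steps,
                    callbacks=callbacks,
                    verbose=1)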