How do I implement a dual-task classification experiment using Keras and TensorFlow?

I am taking the following code (written for an earlier version of TensorFlow) from the book [Transfer Learning for NLP]:

input1_shape = (len(train_x[0]),)
input2_shape = (len(train_x2[0]),)
sent2vec_vectors1 = Input(shape=input1_shape)
sent2vec_vectors2 = Input(shape=input2_shape)
combined = concatenate([sent2vec_vectors1, sent2vec_vectors2])
dense1 = Dense(512, activation='relu')(combined)
dense1 = Dropout(0.3)(dense1)
output1 = Dense(1, activation='sigmoid', name='classification1')(dense1)
output2 = Dense(1, activation='sigmoid', name='classification2')(dense1)
model = Model(inputs=[sent2vec_vectors1, sent2vec_vectors2], outputs=[output1, output2])

model.compile(loss={'classification1': 'binary_crossentropy',
                    'classification2': 'binary_crossentropy'},
              optimizer='adam', metrics=['accuracy'])
history = model.fit([train_x, train_x2], [train_y, train_y2],
                    validation_data=([test_x, test_x2], [test_y, test_y2]),
                    batch_size=32, nb_epoch=10, shuffle=True)
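
The imports are not shown in the book excerpt; presumably something along these lines (the exact module paths are my assumption):

from keras.layers import Input, Dense, Dropout, concatenate
from keras.models import Model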


The code above worked with older versions of TensorFlow (< 2.0).

I have been trying to get it to run on TensorFlow 2.16.1 and have made the following changes:

import tensorflow as tf
from tensorflow.keras.layers import Input, Dense, Dropout, Layer
from tensorflow.keras.models import Model

input1_shape = (len(train_x[0]),)
input2_shape = (len(train_x2[0]),)
sent2vec_vectors1 = Input(shape=input1_shape, name="vector1")
sent2vec_vectors2 = Input(shape=input2_shape, name="vector2")

class ConcatenateLayer(Layer):
    def call(self, inputs, axis=0):
        return tf.concat(inputs, axis=axis)

combined = ConcatenateLayer()([sent2vec_vectors1, sent2vec_vectors2], axis=0)
dense1 = Dense(512, activation='relu')(combined)
dense1 = Dropout(0.3)(dense1)
output1 = Dense(1, activation='sigmoid', name='classification1')(dense1)
output2 = Dense(1, activation='sigmoid', name='classification2')(dense1)

model = Model(inputs=[sent2vec_vectors1, sent2vec_vectors2], outputs=[output1, output2])
model.compile(loss={'classification1': 'binary_crossentropy',
                    'classification2': 'binary_crossentropy'},
              optimizer='adam', metrics=['accuracy', 'accuracy'])

history = model.fit([train_x, train_x2], [train_y, train_y2],
                    validation_data=([test_x, test_x2], [test_y, test_y2]),
                    epochs=10, shuffle=True)


I keep getting incompatible shape errors like:

Incompatible shapes: [64,1] vs. [32,1]
[[{{node gradient_tape/compile_loss/binary_crossentropy_1/logistic_loss/mul/BroadcastGradientArgs}}]] [Op:__inference_one_step_on_iterator_8871]


I have tried different batch sizes in the fit function and also tried reshaping the data. I tried to follow the error down into the TensorFlow code, but all to no avail.
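
To make the shapes concrete, the custom layer can be checked in isolation by calling it on dummy arrays of the training batch size (a standalone sketch, separate from the model code above):

import numpy as np
import tensorflow as tf
from tensorflow.keras.layers import Layer

class ConcatenateLayer(Layer):
    def call(self, inputs, axis=0):
        return tf.concat(inputs, axis=axis)

# Two dummy batches shaped like one training step's inputs: (batch_size=32, 600 features).
a = np.zeros((32, 600), dtype="float32")
b = np.zeros((32, 600), dtype="float32")

print(ConcatenateLayer()([a, b], axis=0).shape)   # (64, 600): stacked along the batch dimension
print(ConcatenateLayer()([a, b], axis=-1).shape)  # (32, 1200): features joined per sample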

The shapes of the data are as follows:

* train_x: (1400, 600)
* train_x2: (1400, 600)
* train_y: (1400,)
* train_y2: (1400,)
* test_x: (600, 600)
* test_x2: (600, 600)
* test_y: (600,)
* test_y2: (600,)

And

Input shapes:

* sent2vec_vectors1: (None, 600)
* sent2vec_vectors2: (None, 600)

Output shapes:

* output1: (None, 1)
* output2: (None, 1)

I feel I am missing something. I am struggling to find good reference examples of a similar approach in 2.16.1. Any suggestions?

I have tried reshaping the data and tried a range of batch sizes, but I am very new to TensorFlow.

Hi @Sean_O_Suilleabhain, I tried training the multi-output classification model using random data with the shapes you mentioned and did not face any error. Please refer to this gist for a working code example. Thank you.
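
For reference, a minimal sketch along those lines (random data with the shapes from the post and the built-in Concatenate layer on the feature axis; this is an approximation, not the exact contents of the gist):

import numpy as np
from tensorflow.keras.layers import Input, Dense, Dropout, Concatenate
from tensorflow.keras.models import Model

# Random data matching the shapes from the post.
train_x, train_x2 = np.random.rand(1400, 600), np.random.rand(1400, 600)
train_y, train_y2 = np.random.randint(0, 2, 1400), np.random.randint(0, 2, 1400)
test_x, test_x2 = np.random.rand(600, 600), np.random.rand(600, 600)
test_y, test_y2 = np.random.randint(0, 2, 600), np.random.randint(0, 2, 600)

sent2vec_vectors1 = Input(shape=(600,), name="vector1")
sent2vec_vectors2 = Input(shape=(600,), name="vector2")

# Concatenate along the last (feature) axis, keeping the batch dimension intact.
combined = Concatenate(axis=-1)([sent2vec_vectors1, sent2vec_vectors2])
dense1 = Dense(512, activation="relu")(combined)
dense1 = Dropout(0.3)(dense1)
output1 = Dense(1, activation="sigmoid", name="classification1")(dense1)
output2 = Dense(1, activation="sigmoid", name="classification2")(dense1)

model = Model(inputs=[sent2vec_vectors1, sent2vec_vectors2], outputs=[output1, output2])
model.compile(loss={"classification1": "binary_crossentropy",
                    "classification2": "binary_crossentropy"},
              optimizer="adam", metrics=["accuracy"])

history = model.fit([train_x, train_x2], [train_y, train_y2],
                    validation_data=([test_x, test_x2], [test_y, test_y2]),
                    batch_size=32, epochs=10, shuffle=True)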

Thank you for your reply. It's been a while since I submitted the question, so I reran the code and, yes, it now runs without errors.