I use “accuracy” as a metric in the model.compile method for a multi-class multi-label classification problem. It yields poor accuracy numbers in the model.fit and model.evaluate methods. However, when I use “y_hat = model.predict(X_val)” and compare the results to Y_val, the accuracy is close to 99%. Can someone please advise where I went wrong?

X_train has the shape (11050, 403) and Y_train has the shape (11050, 5). When I use the evaluate method, it yields 0.336 accuracy. Then I tried the following:
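Presumably the comparison looked something like this (a sketch with made-up values, since the original snippet is not quoted; per the discussion below, it rounds the sigmoid outputs and scores them with scikit-learn’s accuracy_score):

```python
import numpy as np
from sklearn.metrics import accuracy_score

# Hypothetical stand-ins for Y_val and model.predict(X_val),
# with 5 labels per sample as in the question.
Y_val = np.array([[1, 0, 1, 0, 0],
                  [0, 1, 0, 0, 1]])
y_hat = np.array([[0.9, 0.1, 0.8, 0.2, 0.1],
                  [0.1, 0.7, 0.2, 0.3, 0.9]])

# Round sigmoid outputs to 0/1; accuracy_score on 2-D label arrays
# computes subset accuracy (exact match of the whole label set).
acc = accuracy_score(Y_val, np.round(y_hat))
```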

In multi-class multi-label classification problems, the “accuracy” metric as defined in Keras is not appropriate because it expects that only one class is the correct prediction for each sample, which is the scenario for single-label classification problems. Since you have a multi-label problem, where each sample can belong to multiple classes simultaneously, you need a different way to measure accuracy.
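For example, per-label “binary_accuracy” is usually a more meaningful default for a multi-label setup. A minimal sketch (assuming a sigmoid output layer and binary cross-entropy loss, with the 403-feature / 5-label shapes from the question):

```python
import tensorflow as tf

# Minimal multi-label model sketch: 403 features in, 5 labels out.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(403,)),
    tf.keras.layers.Dense(5, activation='sigmoid'),  # sigmoid, not softmax
])

# 'binary_accuracy' scores each label independently, which matches
# a multi-label problem better than plain 'accuracy'.
model.compile(optimizer='adam',
              loss='binary_crossentropy',
              metrics=['binary_accuracy'])
```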

The accuracy_score from scikit-learn that you’re using after calling model.predict() and rounding the results is likely giving you a different measure of accuracy which is more suited for multi-label classification. This function computes the subset accuracy: the set of labels predicted for a sample must exactly match the corresponding set of labels in Y_val. However, this metric can be too strict because it requires an all-or-nothing perfect match of all labels for each sample.

Instead of using ‘accuracy’, you might want to use other metrics that are better suited for multi-label classification, such as:

Hamming Loss: This measures the fraction of the wrong labels to the total number of labels. It is a more relaxed metric than subset accuracy because it doesn’t require all labels for a sample to be correct.
F1 Score: This is the harmonic mean of precision and recall, and it can be calculated for each label and then averaged across all labels, which is known as macro averaging.
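Both of these are available in scikit-learn and can be applied to the rounded predictions. A small sketch with toy labels (5 per sample, as in the question):

```python
import numpy as np
from sklearn.metrics import hamming_loss, f1_score

# Toy multi-label ground truth and rounded predictions.
Y_true = np.array([[1, 0, 1, 0, 1],
                   [0, 1, 0, 1, 1]])
Y_pred = np.array([[1, 0, 1, 0, 0],   # one label missed
                   [0, 1, 1, 1, 1]])  # one label over-predicted

# Fraction of wrong label slots (2 wrong out of 10 here).
hl = hamming_loss(Y_true, Y_pred)

# F1 per label, then averaged across labels (macro averaging).
f1 = f1_score(Y_true, Y_pred, average='macro')
```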

@Ajay_Krishna What is the right way to use f1_score as the metric? I replaced “metrics=[‘mae’]” in the model.compile statement with “metrics=[‘f1_score’]”, but it generated errors in the model.fit.

Other metrics I have used are the Jaccard coefficient and ROC AUC.

Jaccard Similarity: Measures the similarity between the predicted label set and the true label set for each instance. A higher Jaccard similarity indicates the model is accurately predicting the presence or absence of each label. I remember defining it for my use case, though it’s been a while.
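One way to get the per-instance version described above is scikit-learn’s jaccard_score with average='samples' (a sketch with toy labels):

```python
import numpy as np
from sklearn.metrics import jaccard_score

Y_true = np.array([[1, 0, 1],
                   [0, 1, 1]])
Y_pred = np.array([[1, 0, 1],   # exact match -> Jaccard 1.0
                   [0, 1, 0]])  # {1} vs {1, 2} -> Jaccard 0.5

# average='samples' computes the Jaccard index per instance,
# then averages over instances.
j = jaccard_score(Y_true, Y_pred, average='samples')
```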

Keep in mind that a confusion matrix is defined for binary classification, so for a multi-label problem you need to compute one matrix per label (each label versus the rest).
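scikit-learn has a helper for exactly this, multilabel_confusion_matrix, which returns one 2x2 matrix per label (a sketch with toy labels):

```python
import numpy as np
from sklearn.metrics import multilabel_confusion_matrix

Y_true = np.array([[1, 0, 1],
                   [0, 1, 1]])
Y_pred = np.array([[1, 0, 0],
                   [0, 1, 1]])

# One 2x2 matrix per label, laid out as [[tn, fp], [fn, tp]].
cms = multilabel_confusion_matrix(Y_true, Y_pred)
```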

Yes, tf.keras.metrics.F1Score first appeared in the 2.13.0 release.

You may implement @Ajay_Krishna’s formula in your own code, given that it’s not available in your version of TensorFlow. Here is one implementation I found in a Stack Overflow post:

```python
import tensorflow as tf
from tensorflow.keras.metrics import Precision, Recall

class F1_Score(tf.keras.metrics.Metric):

    def __init__(self, name='f1_score', **kwargs):
        super().__init__(name=name, **kwargs)
        self.f1 = self.add_weight(name='f1', initializer='zeros')
        self.precision_fn = Precision(thresholds=0.5)
        self.recall_fn = Recall(thresholds=0.5)

    def update_state(self, y_true, y_pred, sample_weight=None):
        p = self.precision_fn(y_true, y_pred)
        r = self.recall_fn(y_true, y_pred)
        # since f1 is a variable, we use assign
        self.f1.assign(2 * ((p * r) / (p + r + 1e-6)))

    def result(self):
        return self.f1

    def reset_states(self):
        # we also need to reset the state of the precision and recall objects
        self.precision_fn.reset_states()
        self.recall_fn.reset_states()
        self.f1.assign(0)
```

That’s a good question. In my opinion it always comes down to a yes or no per class: the class you are currently evaluating is treated as 1 and all other classes as 0, and this is repeated until every class in the images has been covered. So all of these metrics can be used, but some are more effective, and easier to use, than others.