I’m working on my first machine learning project in Python, using TensorFlow to syllabify words from the Moby Hyphenator II dataset.
I’m treating this as a multi-label classification problem in which words and their syllables are encoded in the following format:
```
T e n - s o r - f l o w
0 0 1     0 0 1     0 0 0 0
```
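For concreteness, the labelling can be reproduced with a small helper (a sketch; `encode` is my own name, not part of the dataset tooling). A position gets a 1 when that letter ends a syllable and another syllable follows:

```python
def encode(hyphenated):
    """Turn a hyphenated word into (word, labels).

    A label is 1 when a hyphen follows that letter in the
    hyphenated form, else 0.
    """
    syllables = hyphenated.split("-")
    labels = []
    for i, syllable in enumerate(syllables):
        labels.extend([0] * (len(syllable) - 1))
        # mark the last letter of every syllable except the final one
        labels.append(1 if i < len(syllables) - 1 else 0)
    return "".join(syllables), labels

print(encode("ten-sor-flow"))
# ('tensorflow', [0, 0, 1, 0, 0, 1, 0, 0, 0, 0])
```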
When reading through this guide as a starting point, I saw that the author used a custom loss function: they averaged weighted binary cross-entropy with root mean squared error in PyTorch as follows:
```python
def bce_rmse(pred, target, pos_weight=1.3, epsilon=1e-12):
    # Weighted binary cross-entropy
    loss_pos = target * torch.log(pred + epsilon)
    loss_neg = (1 - target) * torch.log(1 - pred + epsilon)
    bce = torch.mean(torch.neg(pos_weight * loss_pos + loss_neg))
    # Root mean squared error
    mse = (torch.sum(pred, dim=0) - torch.sum(target, dim=0)) ** 2
    rmse = torch.mean(torch.sqrt(mse + epsilon))
    return (bce + rmse) / 2
```
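When porting a loss between frameworks, it helps to have a framework-free reference to compare values against. Here is a plain-Python transcription of the same arithmetic for a single 1-D example (a sketch; `bce_rmse_ref` is a name I made up, and it treats the `dim=0` sums as plain sums over a flat list):

```python
import math

def bce_rmse_ref(pred, target, pos_weight=1.3, epsilon=1e-12):
    # Weighted binary cross-entropy, averaged over positions
    terms = [-(pos_weight * t * math.log(p + epsilon)
               + (1 - t) * math.log(1 - p + epsilon))
             for p, t in zip(pred, target)]
    bce = sum(terms) / len(terms)
    # "RMSE" between total predicted and total true label mass
    mse = (sum(pred) - sum(target)) ** 2
    rmse = math.sqrt(mse + epsilon)
    return (bce + rmse) / 2
```

Near-perfect predictions should drive this toward zero, which gives a quick sanity check for any port.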
I have tried to implement this in TensorFlow in the following way:
```python
def weighted_bce_mse(y_true, y_prediction):
    # Binary cross-entropy with weighting
    epsilon = 1e-12
    positive_weight = 4.108897148948174
    loss_positive = y_true * tf.math.log(y_prediction + epsilon)
    loss_negative = (1 - y_true) * tf.math.log(1 - y_prediction + epsilon)
    bce_loss = np.mean(tf.math.negative(positive_weight * loss_positive + loss_negative))
    # Mean squared error
    mse = tf.keras.losses.MeanSquaredError()
    mse_loss = mse(y_true, y_prediction)
    averaged_bce_mse = (bce_loss + mse_loss) / 2
    return averaged_bce_mse
```
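As an aside on the magic number: `positive_weight = 4.108897…` looks like a class-imbalance ratio (zeros per one across the label matrix). That is my guess, not something stated in the guide; such a weight is commonly derived like this:

```python
def imbalance_weight(label_rows):
    # hypothetical helper: ratio of negative (0) to positive (1) labels
    positives = sum(row.count(1) for row in label_rows)
    negatives = sum(row.count(0) for row in label_rows)
    return negatives / positives

# e.g. for the single word "ten-sor-flow": 8 zeros, 2 ones
print(imbalance_weight([[0, 0, 1, 0, 0, 1, 0, 0, 0, 0]]))  # 4.0
```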
When I train, I receive the error `ValueError: 'outputs' must be defined before the loop.`, and I’m not sure why, since I define this function before building and compiling my model.
I’m using the Keras Functional API, and my compilation and fit stages are:
```python
model.compile(optimizer="adam", loss=weighted_bce_mse,
              metrics=["accuracy"], steps_per_execution=64)
history = model.fit(padded_inputs, padded_outputs,
                    validation_data=(validation_inputs, validation_outputs),
                    epochs=10, verbose=2)
```