Hi community members,
I have a question that probably sounds stupid, but I am stuck on it.
In the classic _get_serve_tf_examples_fn function, there is something I cannot understand.
def _get_serve_tf_examples_fn(model, tf_transform_output):
  # We must save the tft_layer to the model to ensure its assets are kept and tracked.
  model.tft_layer = tf_transform_output.transform_features_layer()

  @tf.function(input_signature=[
      tf.TensorSpec(shape=[None], dtype=tf.string, name='examples')
  ])
  def serve_tf_examples_fn(serialized_tf_examples):
    # Expected input is a string in serialized tf.Example format.
    feature_spec = tf_transform_output.raw_feature_spec()
    # Because the input schema includes unnecessary fields like 'species' and
    # 'island', we filter feature_spec to include the required keys only.
    required_feature_spec = {
        k: v for k, v in feature_spec.items() if k in _FEATURE_KEYS
    }
    parsed_features = tf.io.parse_example(serialized_tf_examples,
                                          required_feature_spec)
    # Preprocess the parsed input with the transform operation defined in
    # preprocessing_fn().
    transformed_features, _ = _apply_preprocessing(parsed_features,
                                                   model.tft_layer)
    # Run inference with the ML model.
    return model(transformed_features)

  return serve_tf_examples_fn
1- The transform layer is attached to the model. OK, I understand.
2- serve_tf_examples_fn() is called on the raw data, applies the transform layer to the features, and then calls model() on the transformed features.
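To make the distinction between the two steps concrete, here is a toy sketch in plain Python (no TensorFlow; all class and attribute names are hypothetical stand-ins). The point it illustrates: assigning a layer as an attribute of the model only registers it as part of the object, analogous to how TensorFlow tracks it so its assets get saved with the SavedModel; it does not insert the layer into the model's call path, so inference still has to apply it explicitly.

```python
class TransformLayer:
    """Hypothetical stand-in for tft_layer: scales raw features."""
    def __call__(self, features):
        return [x / 10.0 for x in features]


class Model:
    """Hypothetical stand-in for the Keras model: sums its inputs.

    It expects features that have ALREADY been transformed.
    """
    def __call__(self, features):
        return sum(features)


model = Model()
# Attaching the layer as an attribute "tracks" it on the model object,
# but does NOT change what calling model(...) does.
model.tft_layer = TransformLayer()

raw = [10.0, 20.0, 30.0]

# Calling the model directly skips the transform entirely:
print(model(raw))                    # 60.0 -- raw features, no preprocessing

# The serving function must apply the transform itself, then infer:
print(model(model.tft_layer(raw)))  # 6.0 -- transformed, then inferred
```

So the attribute assignment and the explicit call serve two different purposes: the first keeps the layer (and its assets) alive in the saved object graph, while the second actually runs the preprocessing at serving time.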
But why should I produce the processed features by calling the transform layer myself, when I have already attached this layer to the model in step 1?
I am certainly missing something here.
Any help is appreciated.
Thanks
Jerome