Deployment of TensorFlow Serving

Hi,

Is there any way to deploy just the preprocessing part without a trained model?

I ask because I am trying to minimize the number of services I use to put a model into production.

I currently have a service that gathers data using FastAPI, but I want to deploy a “model” using TensorFlow Serving that only gathers data, and once it has enough data, I would use the end-to-end TFX solution.

I don’t know if TensorFlow Serving has a way to keep the data it used for inference or preprocessing.

Thank you,

Hi @kevin,

We can deploy the preprocessing step by including it within the model's inference graph (TensorFlow Serving loads a SavedModel, so the preprocessing has to live inside that SavedModel), but we cannot deploy the preprocessing step alone as a standalone component.
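For reference, here is a minimal sketch of what that looks like. The layer choice, feature shape, and paths below are placeholders, not a definitive recipe:

```python
import tensorflow as tf

# A minimal sketch, assuming a single numeric feature.
# Preprocessing layer whose statistics are learned from data.
normalizer = tf.keras.layers.Normalization(axis=-1)
normalizer.adapt(tf.constant([[1.0], [2.0], [3.0]]))  # stand-in data

# Bake the preprocessing into the model graph itself.
inputs = tf.keras.Input(shape=(1,), name="raw_input")
x = normalizer(inputs)                  # preprocessing step
outputs = tf.keras.layers.Dense(1)(x)   # model head (untrained here)
model = tf.keras.Model(inputs, outputs)

# Export as a SavedModel under a numeric version directory,
# which is the layout TensorFlow Serving expects.
tf.saved_model.save(model, "/tmp/serving_model/1")
```

That directory can then be served with the standard Docker image, with something like `docker run -p 8501:8501 --mount type=bind,source=/tmp/serving_model,target=/models/serving_model -e MODEL_NAME=serving_model -t tensorflow/serving` (the model name and paths here are placeholders).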

Thank you.