So I have some categorical features that I run through StringLookup layers to transform the data before training on a GPU. The model will be served on CPU, so I can add those layers to the model after training. That works, and the new model (with StringLookup integrated) runs fine. But if I try to convert it to TFLite and run it, I get all manner of errors, mostly about a convolutional layer, even though the model converted and ran fine before I added the StringLookup. Searching online, some people say TFLite (and ONNX) simply don't work with StringLookup, while others say "this was fixed in this issue" and point to some GitHub issue that doesn't actually seem to solve it.
I just want to confirm that this actually isn't supported as of TF 2.11.
Because I would really, really appreciate it if it worked.
Edit: PS, I assume this also means I can't add tokenizers to my models if I want things to run faster in slimmer Docker images, with less code overhead and fewer metadata objects to manage.
It’s okay. I built a separate model containing only this kind of preprocessing and map it over the dataset. I figure this part probably wouldn’t be much faster if converted anyway. I would have liked to reach a point where my deployed images don’t need the full TensorFlow libraries, but not today (read: the DevOps guys would have liked that).
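Concretely, the workaround looks something like this (toy vocabulary and hypothetical names again): a preprocessing-only model mapped over a tf.data pipeline, so the trained model itself stays free of string ops:

```python
import tensorflow as tf

vocab = ["red", "green", "blue"]

# Preprocessing-only "model": just the lookup layer(s), no trainable weights.
preprocess = tf.keras.Sequential([
    tf.keras.layers.StringLookup(vocabulary=vocab),
])

# Map it over the dataset; "purple" is out-of-vocabulary and maps to index 0.
ds = tf.data.Dataset.from_tensor_slices([["red"], ["blue"], ["purple"]])
ds = ds.batch(2).map(preprocess)

for batch in ds:
    print(batch.numpy().tolist())
```

The downstream model then only ever sees integer indices, which is what keeps the TFLite conversion of the main model clean.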
But… does this mean that if I use the Keras lookup layers for data pre/post-processing, I can’t build a serving image without a full 4-5 GB TensorFlow install in it?