Hello everybody
I implemented a service in Java (Grails) that runs on a Docker machine. The service uses tensorflow 2.10.0 and tensorflow-text 2.10.0, which are installed on the Docker machine's Linux operating system.
Because I need Application.NLP.TEXT_EMBEDDING for text similarity search, I have to load the library _sentencepiece_tokenizer.so from tensorflow-text in my Java service.
But when I call the service, it fails with the following error message:
undefined symbol: _ZNK10tensorflow8OpKernel11TraceStringB5cxx11ERKNS_15OpKernelContextEb
When I use older versions of tensorflow and tensorflow-text, for example 2.2.0, I get the same error.
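To rule out a simple mix-up on the Java side, I also print which native TensorFlow build DJL actually resolves at runtime. This is only a minimal diagnostic sketch using DJL's Engine API; I would expect it to report the version provided by the tensorflow-native-cpu dependency rather than the 2.10.0 installation inside the Docker image:

import ai.djl.engine.Engine

// Which native TensorFlow library did DJL load for the "TensorFlow" engine?
Engine engine = Engine.getEngine("TensorFlow")
log.info("DJL TensorFlow engine version: " + engine.version)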
Can you tell me where the problem in my implementation is?
Java Code:
-----Dependencies-----
compile 'org.tensorflow:tensorflow:1.15.0'
//compile 'org.tensorflow:libtensorflow_jni:1.15.0'
compile 'org.tensorflow:tensorflow-core-platform:0.4.2'
compile "ai.djl:api:0.19.0"
runtime "ai.djl.tensorflow:tensorflow-engine:0.19.0"
runtime "ai.djl.tensorflow:tensorflow-model-zoo:0.19.0"
//runtime "ai.djl.tensorflow:tensorflow-native-auto:2.4.1"
runtime "ai.djl.tensorflow:tensorflow-native-cpu:2.7.0"
-----Service methods-----
import ai.djl.Application
import ai.djl.inference.Predictor
import ai.djl.ndarray.NDArray
import ai.djl.ndarray.NDArrays
import ai.djl.ndarray.NDList
import ai.djl.ndarray.NDManager
import ai.djl.repository.zoo.Criteria
import ai.djl.repository.zoo.ZooModel
import ai.djl.training.util.ProgressBar
import ai.djl.translate.NoBatchifyTranslator
import ai.djl.translate.TranslatorContext
import org.tensorflow.TensorFlow

public static double[][] predict(String[] inputs) {
    // only EN: https://storage.googleapis.com/tfhub-modules/google/universal-sentence-encoder/4.tar.gz | file size ~ 1 GB
    // multilanguage: https://storage.googleapis.com/tfhub-modules/google/universal-sentence-encoder-multilingual/3.tar.gz
    String modelUrl = "https://storage.googleapis.com/tfhub-modules/google/universal-sentence-encoder-multilingual/3.tar.gz"

    Criteria<String[], double[][]> criteria =
            Criteria.builder()
                    .optApplication(Application.NLP.TEXT_EMBEDDING)
                    .setTypes(String[].class, double[][].class)
                    .optModelUrls(modelUrl)
                    .optTranslator(new MyTranslator())
                    .optEngine("TensorFlow")
                    .optProgress(new ProgressBar())
                    .build()

    // The SentencePiece op is not bundled with universal-sentence-encoder-multilingual, so the
    // tensorflow-text op library has to be loaded explicitly; the .so is only available for Linux.
    TensorFlow.loadLibrary("/usr/local/lib/python3.7/dist-packages/tensorflow_text/python/ops/_sentencepiece_tokenizer.so")

    try {
        ZooModel<String[], double[][]> model = criteria.loadModel()
        Predictor<String[], double[][]> predictor = model.newPredictor()
        return predictor.predict(inputs)
    } catch (final Exception ex) {
        log.error(ex)
        return null
    }
}
private static final class MyTranslator implements NoBatchifyTranslator<String[], double[][]> {

    @Override
    NDList processInput(TranslatorContext ctx, String[] raw) {
        NDManager factory = ctx.NDManager
        // one string tensor per sentence, stacked into a single batch tensor
        NDList inputs = new NDList(raw.collect { factory.create(it) })
        return new NDList(NDArrays.stack(inputs))
    }

    @Override
    double[][] processOutput(TranslatorContext ctx, NDList list) {
        // the model returns one embedding vector per input sentence
        NDArray embeddings = list.singletonOrThrow()
        long numOutputs = embeddings.shape.get(0)
        List<float[]> result = []
        for (i in 0..<numOutputs) {
            result << embeddings.get(i).toFloatArray()
        }
        return result as double[][]
    }
}
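For completeness, this is roughly how the embeddings are used for similarity search. The snippet is only an illustration: the class name SentenceEncoderService and the cosine helper are mine and are not part of the actual service code above.

// Hypothetical caller: embed two sentences and compare them with cosine similarity.
// "SentenceEncoderService" stands in for the Grails service class that contains predict().
double cosine(double[] a, double[] b) {
    double dot = 0
    double normA = 0
    double normB = 0
    for (int i = 0; i < a.length; i++) {
        dot += a[i] * b[i]
        normA += a[i] * a[i]
        normB += b[i] * b[i]
    }
    return dot / (Math.sqrt(normA) * Math.sqrt(normB))
}

double[][] vectors = SentenceEncoderService.predict(["How old are you?", "What is your age?"] as String[])
println("similarity = " + cosine(vectors[0], vectors[1]))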