I finally have a working solution for a project where I run inference with a retrained TF2 object detection model on frames pulled from an OpenCV VideoCapture object. I'm happy to share all the routes that were not productive, but what did work was turning an OpenCV Mat into a tensor with the code below.
The working approach converts the Mat to a BufferedImage, saves that to disk as a JPEG, then reads the JPEG back from disk into a tensor via the DecodeJpeg op. It works, but the disk round trip is horrible.
How can I turn a BufferedImage into a Tensor (TUint8) directly, without having to save and re-read a JPEG file?
SavedModelBundle model = SavedModelBundle.load(modelPath, "serve");
BufferedImage drawImg = BIfromMat(aMat);
// write the frame to disk as a JPEG so DecodeJpeg can read it back
ImageIO.write(drawImg, "jpg", new File(svImgPath1));
try (Graph g = new Graph(); Session s = new Session(g)) {
    Ops tf = Ops.create(g);
    Constant<TString> fileName = tf.constant(svImgPath1);
    ReadFile readFile = tf.io.readFile(fileName);
    Session.Runner runner = s.runner();
    DecodeJpeg.Options options = DecodeJpeg.channels(3L);
    DecodeJpeg decodeImage = tf.image.decodeJpeg(readFile.contents(), options);
    // fetch the decoded image once to learn its shape
    Shape imageShape = runner.fetch(decodeImage).run().get(0).shape();
    // reshape the tensor to 4D (add a batch dimension) for input to the model
    Reshape<TUint8> reshape = tf.reshape(decodeImage,
            tf.array(1,
                    imageShape.asArray()[0],
                    imageShape.asArray()[1],
                    imageShape.asArray()[2]));
    try (TUint8 reshapeTensor = (TUint8) s.runner().fetch(reshape).run().get(0)) {
        Map<String, Tensor> feedDict = new HashMap<>();
        // "input_tensor" is the input name in the SavedModel's SignatureDef
        feedDict.put("input_tensor", reshapeTensor);
        // run the detection signature; the result map holds the detected objects
        Map<String, Tensor> outputTensorMap = model.function("serving_default").call(feedDict);
        // ... process outputTensorMap ...
    }
}
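For reference, here is a sketch of the direct path I'm after. I can pack the pixels myself in plain Java (this part I have verified); what I'm unsure about is the right tensorflow-java call to wrap the resulting bytes, so the method name `packRgb` and the layout assumption below are mine, not from any working code:

```java
import java.awt.image.BufferedImage;

public class RgbPacker {
    // Pack a BufferedImage's pixels into a flat byte[] in
    // [height, width, 3] RGB order -- the layout I believe a
    // TUint8 image tensor expects (before the batch dimension).
    public static byte[] packRgb(BufferedImage img) {
        int h = img.getHeight(), w = img.getWidth();
        byte[] rgb = new byte[h * w * 3];
        int i = 0;
        for (int y = 0; y < h; y++) {
            for (int x = 0; x < w; x++) {
                int px = img.getRGB(x, y);             // packed ARGB int
                rgb[i++] = (byte) ((px >> 16) & 0xFF); // R
                rgb[i++] = (byte) ((px >> 8) & 0xFF);  // G
                rgb[i++] = (byte) (px & 0xFF);         // B
            }
        }
        return rgb;
    }
}
```

If this layout is right, I would hope something like `TUint8.tensorOf(Shape.of(1, h, w, 3), DataBuffers.of(packRgb(img)))` (from the `org.tensorflow.ndarray` package) could replace the whole write/read/decode round trip, but I have not confirmed that API usage.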