Converting ESRGAN to TFLite fails on Android

1. System information

  • OS Platform and Distribution: macOS 12.2.1; Apple M1; MacBook Pro
  • TensorFlow installation: pip3 install tensorflow
  • TensorFlow library: 2.13.0

2. Code

Provide code to help us reproduce your issues using one of the following options:

Option A: Reference colab notebooks

  1. Reference TensorFlow Model Colab: Demonstrate how to build your TF model.
  2. Reference TensorFlow Lite Model Colab: Demonstrate how to convert your TF model to a TF Lite model (with quantization, if used) and run TFLite Inference (if possible).
Conversion to TFLite: https://github.com/tensorflow/examples/blob/master/lite/examples/super_resolution/ml/super_resolution.ipynb

Option B: Paste your code here or provide a link to a custom end-to-end colab

Test demo: https://github.com/tensorflow/examples/tree/master/lite/examples/super_resolution

3. Failure after conversion

If the conversion is successful, but the generated model is wrong, then state what is wrong:

  • The converted model fails to load or run correctly (see the logs in section 5)

4. (optional) RNN conversion support

The model is ESRGAN.

5. (optional) Any other info / logs

Case 1: with optimization enabled (tf.lite.Optimize.DEFAULT), loading the model fails with: "Didn't find op for builtin opcode 'DEQUANTIZE' version '5'".
Case 2: with optimization disabled and the input shape set to 50x50, inference fails with: "Something went wrong when copying input buffer to input tensor".
Case 3: with optimization disabled and the input shape set to 640x360, the app crashes with signal 11 (SIGSEGV): "stack pointer is in a non-existent map; likely due to stack overflow". The crash occurs in SuperResolution.cpp, DoSuperResolution(), at the call to TfLiteInterpreterAllocateTensors.
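
As a starting point for narrowing down cases 2 and 3, below is a minimal diagnostic sketch, assuming the TFLite C API is used from the Android native code; the helper name CopyInputChecked, the float input buffer, and the use of input index 0 are illustrative assumptions, not the actual project code. It prints the on-device runtime version (the "DEQUANTIZE version 5" error in case 1 usually means the runtime is older than the converter that produced the model) and checks that the source buffer matches the input tensor's byte size before copying, since a size mismatch is one common cause of the "copying input buffer" error.

#include <cstdio>
#include <vector>

#include "tensorflow/lite/c/c_api.h"

// Hypothetical helper: check the runtime version and the buffer/tensor
// byte sizes before copying input data into the interpreter.
bool CopyInputChecked(TfLiteInterpreter* interpreter,
                      const std::vector<float>& input_pixels) {
  // The on-device runtime must be at least as new as the ops the converter
  // emitted (e.g. DEQUANTIZE version 5 requires a recent TFLite runtime).
  std::printf("TFLite runtime version: %s\n", TfLiteVersion());

  TfLiteTensor* input = TfLiteInterpreterGetInputTensor(interpreter, 0);
  const size_t needed = TfLiteTensorByteSize(input);
  const size_t provided = input_pixels.size() * sizeof(float);
  if (needed != provided) {
    // A mismatch here is one common cause of
    // "Something went wrong when copying input buffer to input tensor".
    std::printf("Size mismatch: tensor expects %zu bytes, buffer has %zu\n",
                needed, provided);
    return false;
  }
  return TfLiteTensorCopyFromBuffer(input, input_pixels.data(), provided) ==
         kTfLiteOk;
}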

@frank_xu,

Welcome to the TensorFlow Forum,

Please refer to this example for converting the super resolution model to TFLite.

In the above example, if you are looking to get a different output image size, you should modify the input shape accordingly during conversion.

Thank you!

@chunduriv Thank you for your reply.
I would like to know how to resize the input tensor on Android using C++.
Is it something like the following, and what is 'input_dims_size'?

int index = 0;
int dimension[2] = {50, 60};
int input_size = 1;
TfLiteInterpreterResizeInputTensor(interpreter_, index, dimension, input_size);

@frank_xu,

The following example shows how to resize the input shape before running inference. It assumes that the input shape is defined as [1/None, 10] and needs to be resized to [3, 10].

// Resize input tensor 0 to the new shape before allocating tensors.
interpreter->ResizeInputTensor(/*tensor_index=*/0, std::vector<int>{3, 10});
// Re-allocate tensor buffers for the new shape.
interpreter->AllocateTensors();
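
Since the question above uses the TFLite C API rather than the C++ tflite::Interpreter class, the equivalent C API call might look like the sketch below. It assumes a single 4-D NHWC image input and that interpreter_ is an already-created TfLiteInterpreter*; the {1, 50, 60, 3} shape is only illustrative. Here, input_dims_size is the number of entries in the dims array (4 for NHWC), not a batch or element count.

#include "tensorflow/lite/c/c_api.h"

// Resize input 0 to {batch, height, width, channels} = {1, 50, 60, 3}.
// input_dims_size is the length of the dims array, i.e. 4 for NHWC.
const int dims[4] = {1, 50, 60, 3};
TfLiteInterpreterResizeInputTensor(interpreter_, /*input_index=*/0, dims,
                                   /*input_dims_size=*/4);
// Re-allocate tensor buffers for the new shape before running inference.
TfLiteInterpreterAllocateTensors(interpreter_);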

For more details, please refer to TensorFlow Lite inference | TensorFlow Lite.

Thank you!

Hello, @chunduriv
How can I train SRGAN or ESRGAN to upscale x2?
Since I only have an x4 model now, should I retrain it to support x2?