Android and Python TF Lite inference results differ for the same image

I get different results when running my model with the TF Lite interpreter in Python and on Android. I apply the same normalization in both cases and have checked that the input images are identical.

The Python code:

import numpy as np
import PIL
from PIL import Image
import tensorflow as tf

def read_image(file_path):
    # ImageNet mean/std, scaled to the 0-255 pixel range
    mean = 255 * np.array([0.485, 0.456, 0.406])
    std = 255 * np.array([0.229, 0.224, 0.225])
    img = Image.open(file_path).convert('RGB')
    img = img.resize((224, 224), resample=PIL.Image.BILINEAR)
    img = np.array(img)
    img = (img - mean[None, None, :]) / std[None, None, :]
    img = np.float32(img)
    return img

img = read_image(path)

interpreter = tf.lite.Interpreter(model_path="./model.tflite")
interpreter.allocate_tensors()
input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

# Add the batch dimension the model input expects
interpreter.set_tensor(input_details[0]['index'], np.expand_dims(img, axis=0))
interpreter.invoke()
predictions = interpreter.get_tensor(output_details[0]['index'])
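
To verify the two pipelines really feed the same tensor into the model, one option is to dump the preprocessed buffer on both sides and compare them numerically. A minimal sketch of the Python side, reusing the imports above and assuming the Android float buffer was written out as NHWC, little-endian float32 (android_input.bin is a placeholder name):

np.save('python_input.npy', img)  # keep the Python tensor for reference

# Raw float32 dump of image.buffer written out by the Android app
android_input = np.fromfile('android_input.bin', dtype='<f4').reshape(224, 224, 3)
print('max abs input diff:', np.abs(android_input - img).max())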

Here are the relevant snippets of the Android code.

For preprocessing:

private fun preprocessImage(image1: Bitmap) = with(TensorImage(DataType.FLOAT32)) {
    load(image1)
    val imageProcessor: ImageProcessor = ImageProcessor.Builder()
        .add(
            ResizeOp(
                224,
                224,
                ResizeOp.ResizeMethod.BILINEAR
            )
        )
        .add(
            NormalizeOp(
                floatArrayOf(
                    0.485f * 255f,
                    0.456f * 255f,
                    0.406f * 255f,
                ),
                floatArrayOf(
                    0.229f * 255f,
                    0.224f * 255f,
                    0.225f * 255f,
                ),
            )
        )
        .build()
    imageProcessor.process(this)
}
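
As far as I can tell, NormalizeOp computes (pixel - mean) / stddev per channel, which is mathematically the same as the numpy normalization above. The one remaining difference in this step is precision: the Android path works in float32, while the Python code normalizes in float64 and casts afterwards. A quick sketch showing that precision alone cannot explain a 0.1 gap:

import numpy as np

rgb = np.random.randint(0, 256, (224, 224, 3))
mean64 = 255 * np.array([0.485, 0.456, 0.406])   # float64, as in read_image
std64 = 255 * np.array([0.229, 0.224, 0.225])
a = np.float32((rgb - mean64) / std64)
b = (rgb.astype(np.float32) - mean64.astype(np.float32)) / std64.astype(np.float32)
# Expect something on the order of 1e-7 to 1e-6, nowhere near 0.1
print('max abs diff from precision alone:', np.abs(a - b).max())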

For inference:

val options = Interpreter.Options()
options.setNumThreads(1)
interpreter = Interpreter(model, options)
val image = preprocessImage(bitmap)
val probabilityBuffer = TensorBuffer.createFixedSize(intArrayOf(1, 1), DataType.FLOAT32)
interpreter.run(image.buffer, probabilityBuffer.buffer)
val score = probabilityBuffer.floatArray[0]
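
To rule out a shape or dtype mismatch between the two runners (the output buffer above is fixed to [1, 1] float32), the Python interpreter can print what the converted model actually declares; a quick check, reusing the interpreter from the Python snippet:

for d in interpreter.get_input_details() + interpreter.get_output_details():
    # Expecting float32 input [1, 224, 224, 3] and float32 output [1, 1]
    print(d['name'], d['shape'], d['dtype'], d['quantization'])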

The two results differ by quite a lot, around 0.1 in the output score, even though the input images are exactly the same.
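
One suspect worth isolating is the resize itself: PIL's bilinear filter and the bilinear ResizeOp in the TFLite support library are not guaranteed to produce the same pixels, especially when downscaling. As a rough proxy, comparing PIL against tf.image.resize (both nominally bilinear, reusing the imports from the Python snippet) already shows how much the interpolation choice can move the input:

img_pil = Image.open(path).convert('RGB')
arr = np.asarray(img_pil, dtype=np.float32)
resized_pil = np.asarray(img_pil.resize((224, 224), resample=PIL.Image.BILINEAR), dtype=np.float32)
resized_tf = tf.image.resize(arr, (224, 224), method='bilinear').numpy()
print('max abs pixel diff between bilinear resizes:', np.abs(resized_pil - resized_tf).max())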

Model: model.tflite - Google Drive