Tflite modelmaker object detection: normalizing bounding box values

I am new to TF lite object detection and have gone through the tutorial:
https://github.com/tensorflow/tensorflow/blob/master/tensorflow/lite/g3doc/tutorials/model_maker_object_detection.ipynb

and I think I understand it reasonably well. However, I notice that the bounding box is somehow normalised.

So, when I have my own dataset, each row looks like this (only one class):

```
TRAIN,/path/to/data//images/0.jpg,ONE_CLASS_CAT,270.5,115.10000000000036,787.3000000000011,483.3000000000011,
```
How does one normalise these coordinate values for TFLite input?

Thank you!

@James_W Welcome to the TensorFlow Forum!

Here’s how to normalize bounding box coordinates for TFLite object detection input:

  1. Normalization:
  • Normalization scales coordinates to values between 0 and 1, making them independent of the image dimensions. Bounding boxes are typically represented as [xmin, ymin, xmax, ymax], where:
    • xmin: top-left corner’s x-coordinate
    • ymin: top-left corner’s y-coordinate
    • xmax: bottom-right corner’s x-coordinate
    • ymax: bottom-right corner’s y-coordinate
  2. Formula:
  • Divide each absolute coordinate by the corresponding image dimension:
    normalized_x = x_coordinate / image_width
    normalized_y = y_coordinate / image_height
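
As a minimal sketch, the two steps above applied to the row from your question might look like this in Python. Note that the image dimensions (1280x720) here are an assumption for illustration; you would read the actual width and height from each of your images:

```python
# Normalize absolute pixel bounding-box coordinates to [0, 1],
# as expected for TFLite Model Maker object detection input.

def normalize_bbox(xmin, ymin, xmax, ymax, image_width, image_height):
    """Scale [xmin, ymin, xmax, ymax] pixel coords by the image size."""
    return (
        xmin / image_width,
        ymin / image_height,
        xmax / image_width,
        ymax / image_height,
    )

# The coordinates from the CSV row in the question, on a hypothetical
# 1280x720 image (substitute your real image dimensions).
box = normalize_bbox(270.5, 115.1, 787.3, 483.3, 1280, 720)
print(box)  # each value now lies in [0, 1]
```

You can get the real dimensions of each image with a library such as Pillow (`Image.open(path).size`), then rewrite the CSV with the normalized values before feeding it to Model Maker.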

Let us know if this helps!