MediaPipe: massive accuracy loss with quantization-aware training

When I run through MediaPipe's object detection training example (making no changes) from the Object detection model customization guide on Google AI Edge, I run into an issue with the quantization-aware training step. After the initial training, the evaluation shows a good AP:

Validation coco metrics: {'AP': 0.8832744, 'AP50': 1.0, 'AP75': 1.0, 'APs': -1.0, 'APm': -1.0, 'APl': 0.8832744, 'ARmax1': 0.9013889, 'ARmax10': 0.9013889, 'ARmax100': 0.9013889, 'ARs': -1.0, 'ARm': -1.0, 'ARl': 0.9013889}
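
For reference, this is roughly the training and evaluation code from the guide that produces those numbers; I'm quoting it from memory, so the dataset paths are placeholders and details may differ slightly from the current notebook:

```python
from mediapipe_model_maker import object_detector

# Load the COCO-format dataset used by the guide (paths are placeholders).
train_data = object_detector.Dataset.from_coco_folder(
    "android_figurine/train", cache_dir="/tmp/od_data/train")
validation_data = object_detector.Dataset.from_coco_folder(
    "android_figurine/validation", cache_dir="/tmp/od_data/validation")

# Train the MobileNet-MultiHW-AVG model as in the guide.
spec = object_detector.SupportedModels.MOBILENET_MULTI_AVG
hparams = object_detector.HParams(export_dir="exported_model")
options = object_detector.ObjectDetectorOptions(
    supported_model=spec, hparams=hparams)
model = object_detector.ObjectDetector.create(
    train_data=train_data,
    validation_data=validation_data,
    options=options)

# Float-model evaluation: this is the step that prints the good AP above.
loss, coco_metrics = model.evaluate(validation_data, batch_size=4)
print(f"Validation coco metrics: {coco_metrics}")
```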

The next section performs quantization-aware training (QAT), which, according to the notes, only fine-tunes the model. However, the accuracy (AP) falls drastically, to unacceptable levels:

QAT validation coco metrics: {'AP': 0.0055725495, 'AP50': 0.024442662, 'AP75': 0.00024944355, 'APs': -1.0, 'APm': -1.0, 'APl': 0.0055725495, 'ARmax1': 0.01875, 'ARmax10': 0.08263889, 'ARmax100': 0.17291667, 'ARs': -1.0, 'ARm': -1.0, 'ARl': 0.17291667}
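
The QAT step in question looks roughly like this; the QATHParams values below are the ones I recall the notebook using, so treat them as approximate:

```python
# Quantization-aware training as in the guide's QAT section.
qat_hparams = object_detector.QATHParams(
    learning_rate=0.3,
    batch_size=4,
    epochs=10,
    decay_steps=6,
    decay_rate=0.96)
model.quantization_aware_training(
    train_data, validation_data, qat_hparams=qat_hparams)

# Re-evaluate after QAT: this is where the AP collapses.
qat_loss, qat_coco_metrics = model.evaluate(validation_data, batch_size=4)
print(f"QAT validation coco metrics: {qat_coco_metrics}")
```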

Any suggestions on how to use QAT while maintaining a good AP? I expect some loss of accuracy from quantization, but a drop from 88% to 0.6% seems far too large.
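
My understanding from the guide is that the intended workflow for tuning is to restore the float checkpoint and re-run QAT with different hyperparameters, along these lines (the values here are illustrative guesses, not a known-good configuration):

```python
# Reset the model back to the float checkpoint before retrying QAT,
# as the guide recommends when tuning QAT hyperparameters.
model.restore_float_ckpt()

# Retry QAT with a smaller learning rate and more epochs.
# These values are illustrative, not a known fix.
new_qat_hparams = object_detector.QATHParams(
    learning_rate=0.1,
    batch_size=4,
    epochs=15,
    decay_steps=6,
    decay_rate=0.96)
model.quantization_aware_training(
    train_data, validation_data, qat_hparams=new_qat_hparams)
qat_loss, qat_coco_metrics = model.evaluate(validation_data, batch_size=4)
```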

I'm facing the same issue: QAT accuracy also dropped for the official Colab object detector sample (with the Android figurine images). Any solution to this?

Hello, thanks for reporting.

Can anyone share a reproducible script and the exact model you used?
This kind of problem can be painful to debug, but I can take a look when I have time.

cc @battery

Hi @tucan.dev,

It’s the object_detector.SupportedModels.MOBILENET_MULTI_AVG model. If you run through the notebook linked in the tutorial, you should be able to replicate the issue.
