Thank you for bringing this issue to our attention. To my understanding, in TensorFlow 2.x onwards you should use tf.float16 instead of tf.compat.v1.lite.constants.FLOAT16 to specify the half-precision floating-point data type when quantizing a model to 16-bit floating point during TensorFlow Lite conversion.
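A minimal sketch of what that looks like with the TF2 API, assuming a small Keras model stands in for your own (the converter settings follow the post-training float16 quantization docs):

```python
import tensorflow as tf

# Tiny stand-in model; substitute your own Keras model or SavedModel.
model = tf.keras.Sequential([tf.keras.layers.Dense(4, input_shape=(8,))])

converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
# Use tf.float16 here; tf.compat.v1.lite.constants.FLOAT16 is the legacy TF1 name.
converter.target_spec.supported_types = [tf.float16]

# Produces a .tflite flatbuffer with weights stored as float16.
tflite_model = converter.convert()
```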
Please refer to the updated official documentation on post-training float16 quantization. The TensorFlow Lite blog post you are referring to appears to be outdated; we will discuss this internally with the TensorFlow Lite team and, if possible, will try to update that blog as soon as possible.