`requires_output_quantize` in tfmot is not working as expected

Hi,

I am working with tfmot to quantize some specific layers in my model; at first, I did not annotate those specific layers. I do not understand why, in `tensorflow_model_optimization/python/core/quantization/keras/quantize.py` (at commit `e38d886935c9e2004f72522bf11573d43f46b383` of tensorflow/model-optimization), the code checks `not isinstance` rather than `isinstance`. I would think that if we used `isinstance`, then the layer would be of type `QuantizeAnnotate` and we would push it to `requires_output_quantize`, as the name suggests.

Coming to `_quantize` now: why do we need to verify that the layer is not in `requires_output_quantize` (same file, same commit), given that this variable will hold layers that do not need to be quantized? I am confused, and this sounds contradictory to me.
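To make my reading of the first check concrete, here is a toy sketch (illustrative only, not the tfmot source; the class names and the single-inbound-layer assumption are mine): an annotated layer whose input comes from an un-annotated layer causes that un-annotated producer to be added to `requires_output_quantize`, so only its output gets quantized.

```python
class QuantizeAnnotate:
    """Stand-in for tfmot's QuantizeAnnotate wrapper (toy version)."""
    def __init__(self, name, inbound=None):
        self.name = name
        self.inbound = inbound  # single upstream layer, for simplicity

class PlainLayer:
    """Stand-in for an ordinary, un-annotated Keras layer."""
    def __init__(self, name, inbound=None):
        self.name = name
        self.inbound = inbound

conv1 = PlainLayer("conv1")                       # not annotated
conv2 = QuantizeAnnotate("conv2", inbound=conv1)  # annotated, fed by conv1

requires_output_quantize = set()
for layer in [conv1, conv2]:
    if not isinstance(layer, QuantizeAnnotate):
        continue  # only annotated consumers trigger the check
    if layer.inbound is not None and not isinstance(layer.inbound, QuantizeAnnotate):
        # An un-annotated producer feeding an annotated consumer:
        # only its *output* needs to be quantized.
        requires_output_quantize.add(layer.inbound.name)

print(requires_output_quantize)  # {'conv1'}
```

Under this reading, the `not isinstance` is applied to the *inbound* layer, which is why the set ends up holding un-annotated layer names.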

NB: I have re-implemented the _quantize() function like so:

```python
def _quantize(layer):  # pylint: disable=missing-docstring
    if (
        (layer.name not in layer_quantize_map)
        or isinstance(layer, quantize_wrapper.QuantizeWrapper)
        or issubclass(type(layer), QuantizeLayer)
    ):
        # Also passes through custom QuantizeWrapper subclasses.
        print(f"Layer is {layer.__class__}")
        return layer

    if layer.name in requires_output_quantize:
        if not quantize_registry.supports(layer):
            return layer
        full_quantize_config = quantize_registry.get_quantize_config(layer)
        if not full_quantize_config:
            return layer
        quantize_config = qat_conf.OutputOnlyConfig(full_quantize_config)
    else:
        quantize_config = layer_quantize_map[layer.name].get("quantize_config")
        if not quantize_config and quantize_registry.supports(layer):
            quantize_config = quantize_registry.get_quantize_config(layer)

    if not quantize_config:
        error_msg = (
            "Layer {}:{} is not supported. You can quantize this "
            "layer by passing a `tfmot.quantization.keras.QuantizeConfig` "
            "instance to the `quantize_annotate_layer` "
            "API."
        )
        raise RuntimeError(error_msg.format(layer.name, layer.__class__))

    quantize_config = copy.deepcopy(quantize_config)
    return quantize_wrapper.QuantizeWrapperV2(layer, quantize_config)
```

I removed the `layer.name not in requires_output_quantize` check from the first if statement, and it does work for me. But I still do not understand how this could work in general.
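To show what I mean, here is a toy dispatch (illustrative only; the layer names and the exact upstream condition are my assumptions) comparing which branch of `_quantize` a layer takes when the first if statement uses the original condition, i.e. `not in layer_quantize_map and not in requires_output_quantize`:

```python
# Toy stand-ins for the two maps built before _quantize runs.
layer_quantize_map = {"conv2": {"quantize_config": "full_config"}}
requires_output_quantize = {"conv1"}

def branch_original(name):
    """Which branch a layer takes under the original (upstream) condition."""
    if name not in layer_quantize_map and name not in requires_output_quantize:
        return "passthrough"          # quantized nowhere
    if name in requires_output_quantize:
        return "output-only quantize" # wrapped with OutputOnlyConfig
    return "full quantize"            # wrapped with its full QuantizeConfig

for name in ["conv1", "conv2", "dense1"]:
    print(name, "->", branch_original(name))
# conv1 -> output-only quantize
# conv2 -> full quantize
# dense1 -> passthrough
```

In this sketch, keeping the `requires_output_quantize` check in the early return is exactly what lets `conv1` reach the output-only branch instead of being returned untouched.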

Thanks,