TensorFlow Lite symmetric quantization

I noticed that when applying symmetric quantization (e.g. for kernels in Conv2D), there seems to be a small discrepancy in how the scale is computed. Here is an example:

Kernel quantization for 8 channels (per-channel range, min ≤ scale * q ≤ max):
-0.9445736408233643 ≤ 0.010435814969241619 * q ≤ 1.3253484964370728
-1.6590485572814941 ≤ 0.014620891772210598 * q ≤ 1.8568532466888428
-0.5196806788444519 ≤ 0.007325740065425634 * q ≤ 0.930368959903717
-0.8861835598945618 ≤ 0.019360164180397987 * q ≤ 2.4587409496307373
-1.0639452934265137 ≤ 0.014767976477742195 * q ≤ 1.8755329847335815
-7.1115498542785645 ≤ 0.05599645525217056 * q ≤ 3.182971239089966
-2.5442817211151123 ≤ 0.020033713430166245 * q ≤ 1.6315118074417114
-2.2797913551330566 ≤ 0.017951112240552902 * q ≤ 1.0806652307510376

Let’s consider 2 cases:

  1. The maximum is larger in absolute value than the minimum, e.g. the first row. In this case, the scale is calculated as 1.3253484964370728 / 127 = 0.010435814969241619.
  2. The minimum is larger in absolute value than the maximum, e.g. the last row. In this case, the scale is still calculated as 2.2797913551330566 / 127 = 0.017951112240552902.

In case 2, it looks like we are missing one step (-128). Should we not divide by 128 instead, giving 2.2797913551330566 / 128 = 0.01781087? It is a small change, but it seems we are losing one quantization step in the calculation.
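For reference, here is a minimal Python sketch (my own reconstruction, not the actual TFLite source) that reproduces the scales above under the assumption that the scale is computed as max(|min|, |max|) / 127:

```python
# Per-channel (min, max) pairs taken from the first and last rows above.
channel_ranges = [
    (-0.9445736408233643, 1.3253484964370728),  # case 1: |max| > |min|
    (-2.2797913551330566, 1.0806652307510376),  # case 2: |min| > |max|
]

for r_min, r_max in channel_ranges:
    # Assumed rule: scale = max(|min|, |max|) / 127
    scale = max(abs(r_min), abs(r_max)) / 127.0
    print(f"min={r_min:.6f}  max={r_max:.6f}  scale={scale:.18f}")
```

Both printed scales match the first and last rows, which suggests the divisor is 127 in both cases.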

Hi @Gopal_Raghavan ,

In symmetric quantization, it’s common to use the range -127 to 127 instead of -128 to 127. This ensures symmetry around zero and avoids having an extra negative value with no positive counterpart. Using -128 to 127 would introduce an asymmetry, since there is no +128 to balance -128, which could bias the quantized representations.
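To illustrate that convention, here is a minimal sketch (the helper `symmetric_quantize` is hypothetical, not a TFLite API): the scale is derived with 127, and quantized values are clipped so -128 never occurs.

```python
import numpy as np

def symmetric_quantize(weights, num_bits=8):
    """Hypothetical helper: symmetric quantization over [-127, 127].

    -128 is deliberately left unused so every representable negative
    value has a positive counterpart.
    """
    qmax = 2 ** (num_bits - 1) - 1  # 127 for int8
    scale = float(np.max(np.abs(weights))) / qmax
    q = np.clip(np.round(weights / scale), -qmax, qmax).astype(np.int8)
    return q, scale

w = np.array([-2.2797913551330566, 1.0806652307510376])
q, scale = symmetric_quantize(w)
print(q)          # [-127   60]: the largest-magnitude value maps to -127, not -128
print(scale)      # 0.017951112... (matches the last row above)
print(q * scale)  # dequantized approximation of w
```

This also matches the TFLite quantization spec, which restricts int8 weights to the range [-127, 127] with a zero point of 0.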

Thank you.