The 8-bit quantization requirement of the Google Coral is an issue for me. It makes sense for images and text characters (a keyboard character is 8 bits), but in my opinion an intelligent structure is, in the minimal case, a tuple: two 4-bit values like (6,8) or (0,5), or better an 8-bit tuple like (127,65) or (17,120). That is how I learned about artificial intelligence: a tuple is a minimal link from one piece of information to another (a neural network).
Maybe some problems do not need this "minimal link", I don't know exactly.
(Maybe layer-to-layer acceleration on images works without a link between the individual pixel bytes (8 bit)?)
My question is how to build such a tuple in Python or C:
#include <stdio.h>

int a = 14;                                 /* low nibble: fits in 4 bits (0-15) */
int b = 17;                                 /* 17 does NOT fit in 4 bits, so it must be masked */
int result = ((b & 0xF) << 4) | (a & 0xF); /* pack two nibbles into one byte */
printf("%d\n", result);                    /* prints 30: (17 & 15) = 1, (1 << 4) | 14 = 30 */
The accuracy of 32 bits is very important for my application; I cannot lose it, I would lose too much. Artificial intelligence makes little sense on this high-precision data after quantization, because the errors I care about are very small.
I need to bitwise split the 32-bit value into 4 × 8-bit ints, and then bitwise merge them again. But as far as I know this is not possible without tuples like (1,2)(2,3)(3,4) to create the structure. Without that structure the four 8-bit ints are independent and useless for my application; they must stay connected. How can I do this with the Coral API?
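The host-side bit manipulation itself is easy; here is a minimal Python sketch (the names `split32` and `merge32` are my own, not part of any Coral API):

```python
def split32(x):
    """Split a 32-bit unsigned int into a tuple of four bytes, MSB first."""
    return ((x >> 24) & 0xFF, (x >> 16) & 0xFF, (x >> 8) & 0xFF, x & 0xFF)

def merge32(b3, b2, b1, b0):
    """Merge four bytes (MSB first) back into one 32-bit unsigned int."""
    return (b3 << 24) | (b2 << 16) | (b1 << 8) | b0

value = 0xDEADBEEF
parts = split32(value)           # (0xDE, 0xAD, 0xBE, 0xEF)
assert merge32(*parts) == value  # lossless round trip
```

But note this only solves the packing on the host: it does not make the four int8 values travel through a quantized model as one linked 32-bit quantity, and that linking is exactly the part I cannot find in the API.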
There is a method
pycoral.utils.dataset.read_label_file(file_path)
taking entries like 0:cat, 1:dog, 2:snake, 3:imageclass.
…maybe it is possible to classify the int8 values as 0:first, 1:second, 2:third, 3:fourth?
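As far as I can tell, read_label_file only returns a plain dict mapping an output index to a name; the labels name a classifier's outputs but create no structural link between the int8 values themselves. A sketch that parses the same "index label" format by hand (the label names here are made up), so it runs without pycoral installed:

```python
# A label file maps classifier output indices to names, one entry per line.
label_text = """0 first_byte
1 second_byte
2 third_byte
3 fourth_byte"""

labels = {}
for line in label_text.splitlines():
    idx, name = line.split(maxsplit=1)
    labels[int(idx)] = name.strip()

# With pycoral installed, loading a real file would be:
#   from pycoral.utils.dataset import read_label_file
#   labels = read_label_file("labels.txt")

print(labels[0])  # prints "first_byte"
```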
The Edge API is very disappointing:
it sits at too high a "level". Google is proud of its quantization, but in practice there is no framework/method to actually exploit that quantization for optimization. Does anyone else feel the same way?
The final question is: did Google fail with its 8-bit quantization, making it a nonsense feature?
Did Nvidia simply do it better?
My feeling is that quantization sounds good, but it has to be fully implemented, and it isn't.
If this limitation is real, Google could just as well call it a "cut-down API for absolute beginners (don't use it for professional tasks, you will fail!)".