Hi,
I’m running TF Lite with EfficientDet-Lite0 models on an Intel-based Windows computer. I’m seeing inference times of around 8 seconds, whereas I get inference times of under 100 ms on mobile platforms.
Is this expected?
Thanks
Yeah, the default kernels are not optimized for Intel CPUs.
You can try to build TF Lite with XNNPACK.
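In case it helps anyone hitting this later, here is a minimal, untested sketch of applying the XNNPACK delegate through the TF Lite C++ API. The model path and thread count are placeholders, and your TF Lite build needs XNNPACK support (when building from source with bazel, I believe the relevant flag is `--define tflite_with_xnnpack=true`, but check the build docs for your version).

```cpp
// Minimal sketch: running a TF Lite model with the XNNPACK delegate (C++ API).
// Assumes TF Lite was built with XNNPACK support; the model path and thread
// count below are placeholders.
#include <memory>

#include "tensorflow/lite/delegates/xnnpack/xnnpack_delegate.h"
#include "tensorflow/lite/interpreter.h"
#include "tensorflow/lite/kernels/register.h"
#include "tensorflow/lite/model.h"

int main() {
  // Load the EfficientDet-Lite0 flatbuffer (path is a placeholder).
  auto model =
      tflite::FlatBufferModel::BuildFromFile("efficientdet_lite0.tflite");
  if (!model) return 1;

  tflite::ops::builtin::BuiltinOpResolver resolver;
  std::unique_ptr<tflite::Interpreter> interpreter;
  tflite::InterpreterBuilder(*model, resolver)(&interpreter);
  if (!interpreter) return 1;

  // Create the XNNPACK delegate and let it take over supported ops.
  TfLiteXNNPackDelegateOptions options = TfLiteXNNPackDelegateOptionsDefault();
  options.num_threads = 4;  // placeholder; tune for your CPU
  TfLiteDelegate* xnnpack = TfLiteXNNPackDelegateCreate(&options);
  interpreter->ModifyGraphWithDelegate(xnnpack);

  interpreter->AllocateTensors();
  // ... fill input tensors here ...
  interpreter->Invoke();
  // ... read output tensors here ...

  // The delegate must outlive the interpreter, so destroy the interpreter first.
  interpreter.reset();
  TfLiteXNNPackDelegateDelete(xnnpack);
  return 0;
}
```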
Thanks @sx_f @Bhack, I fixed it by switching on Ruy. Inference time went from 8 seconds to 30 ms. Surprised it’s not on by default.
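(For anyone finding this later: Ruy is switched on at build time. If I remember correctly, the bazel flag is `--define tflite_with_ruy=true`, but verify against the TF Lite build docs for your version.)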
Wait, isn’t Ruy also an optimization aimed at ARM CPUs?