Hi @rolyan_trauts,
Sorry for the delayed acknowledgement, and thank you for bringing this up. I was able to reproduce the issue you described, and you are right: the flex delegate flag is silently ignored on newer Python versions (I tried Python 3.11) when using the suggested standard make command to build the LiteRT runtime.
Findings during debugging
I hit a series of issues while trying to get this to work:
Firstly, Python 3.11 is missing from the Docker image.
The Makefile uses an Ubuntu 20.04 Docker image. When you request Python 3.11, the build simply breaks, because 20.04 does not ship it in its standard repositories (this needs a fix). The log:
E: Unable to locate package python3.11
E: Couldn't find any package by glob 'python3.11'
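One possible workaround here (just a sketch, not something I have verified end to end) is to install Python 3.11 into the Ubuntu 20.04 image from the deadsnakes PPA before the build step, along these lines:

# Assumption: pulling Python 3.11 from the deadsnakes PPA is acceptable for the
# build image; Ubuntu 20.04's default repositories do not carry python3.11.
apt-get update && apt-get install -y software-properties-common
add-apt-repository -y ppa:deadsnakes/ppa
apt-get update && apt-get install -y python3.11 python3.11-dev python3.11-distutils
python3.11 --version   # sanity check that the interpreter is actually installed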
Secondly, the logs keep saying “Nothing to Build”. The build script keeps looking for the library under the bazel-bin path, but the new build system places it deep inside bazel-out, where the script cannot find it.
The log:
Target //tensorflow/lite/python/interpreter_wrapper:_pywrap_tensorflow_interpreter_wrapper up-to-date (nothing to build)
cp: cannot stat 'bazel-bin/.../_pywrap_tensorflow_interpreter_wrapper.so': No such file or directory
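For anyone else debugging this, a quick way to see where the shared library actually lands (a diagnostic sketch run from the repo root inside the build container, assuming the default Bazel output layout where bazel-bin is a symlink into bazel-out) is:

# Show where the bazel-bin convenience symlink points.
readlink bazel-bin
# Search both trees for the wrapper library; -L follows the symlink.
find -L bazel-bin bazel-out -name '_pywrap_tensorflow_interpreter_wrapper.so' 2>/dev/null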
Lastly, the resulting wheel was far too small.
My builds kept producing a ~14 KB wheel, whereas we would normally expect something in the 40 MB to 100 MB range. Output of ls -la:
-rw-r--r-- 1 root root 14759 Dec 29 10:17 tflite_runtime-2.17.0-cp311-cp311-linux_x86_64.whl
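A 14 KB wheel almost certainly means the native extension never made it into the package. A wheel is just a zip archive, so one quick way to confirm this (assuming unzip is available in the container) is:

# A correct build should list _pywrap_tensorflow_interpreter_wrapper.so alongside
# interpreter.py; the 14 KB wheel presumably carries only the Python files and metadata.
unzip -l tflite_runtime-2.17.0-cp311-cp311-linux_x86_64.whl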
While working on a solution, I realised that even forcing the flags on the command line does not help. This is the command I used to try to bypass the Makefile with a manual docker run and explicitly forced paths:
docker run --rm -it \
--pid=host \
--volume $(pwd):/tensorflow_root \
--workdir /tensorflow_root \
-e PROJECT_NAME="tflite_runtime" \
-e PACKAGE_VERSION="2.17.0" \
-e CUSTOM_BAZEL_FLAGS="--define=tflite_pip_with_flex=true" \
tflite-runtime-builder-ubuntu-20.04 \
/bin/bash -c "
git config --global --add safe.directory /tensorflow_root && \
bazel build -c opt --config=monolithic --define=tflite_pip_with_flex=true //tensorflow/lite/python/interpreter_wrapper:_pywrap_tensorflow_interpreter_wrapper && \
STAGING_DIR=tensorflow/lite/tools/pip_package/gen/tflite_pip/python3/tflite_runtime && \
mkdir -p \$STAGING_DIR && \
find bazel-out -name '_pywrap_tensorflow_interpreter_wrapper.so' -exec cp {} \$STAGING_DIR/ \; && \
cp tensorflow/lite/python/interpreter.py \$STAGING_DIR/ && \
cd tensorflow/lite/tools/pip_package/gen/tflite_pip/python3 && \
python3 setup.py bdist_wheel"
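Even when the wrapper .so is found under bazel-out, it is worth checking whether Flex support was actually compiled into it. A rough heuristic that can help (only a sketch; the size expectation and the symbol grep are assumptions, not an official check) is:

# A Flex-enabled wrapper links in the TensorFlow kernels, so it should be tens of
# MB rather than a few MB; grepping the dynamic symbols for "flex" is only a heuristic.
SO=$(find bazel-out -name '_pywrap_tensorflow_interpreter_wrapper.so' | head -n 1)
ls -lh "$SO"
nm -D --defined-only "$SO" | grep -ci flex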
Even with this, the result surprisingly remained the same: the compiled C++ binary (_pywrap_tensorflow_interpreter_wrapper.so) is still not bundled correctly. The root cause I found is that the .so is not in the folder setup.py expects when it runs, so it simply gets left out of the wheel.
So, as of now, the newer Makefile and build_pip_package_with_bazel.sh scripts appear to be broken for Flex support: they cannot find the files produced by Bazel, and they do not pass the flags we need.
I also found a similar open issue on the TensorFlow GitHub repo where a few community members are struggling with Flex delegate builds as well: Issue #93828. We will continue to dig into this.
Thanks,
Abhay