Hello Community,
I have used tensorflow==2.15.0 and Python 3.10 to build a Docker image and deployed it on AWS Lambda.
When I invoke the Lambda, the first 3-4 invocations fail with the following errors, and then it works fine.
2024-03-07 15:53:17.126612: E external/local_xla/xla/stream_executor/cuda/cuda_dnn.cc:9261] Unable to register cuDNN factory: Attempting to register factory for plugin cuDNN when one has already been registered
2024-03-07 15:53:17.126667: E external/local_xla/xla/stream_executor/cuda/cuda_fft.cc:607] Unable to register cuFFT factory: Attempting to register factory for plugin cuFFT when one has already been registered
2024-03-07 15:53:17.127595: E external/local_xla/xla/stream_executor/cuda/cuda_blas.cc:1515] Unable to register cuBLAS factory: Attempting to register factory for plugin cuBLAS when one has already been registered
2024-03-07 15:53:17.971503: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Could not find TensorRT
I'm not sure what these errors mean. I spent two days trying to fix this, with no success so far. During my research I found that these errors are somehow related to GPU support, and my personal laptop does not have a GPU. I tested locally and everything works fine; it fails only on AWS Lambda with the errors above.
Following is my Dockerfile:
FROM public.ecr.aws/lambda/python:3.10
# Check our Python environment
RUN python --version
RUN pip --version
# Copy function code
COPY requirements/requirements-prod.txt ${LAMBDA_TASK_ROOT}/requirements.txt
COPY src/handler.py ${LAMBDA_TASK_ROOT}/handler.py
COPY src/ ${LAMBDA_TASK_ROOT}/src
RUN python3.10 -m pip install -r requirements.txt --target "${LAMBDA_TASK_ROOT}"
# Set the CMD to your handler
CMD ["handler.predictor"]
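For context, one thing I tried while debugging (an assumption on my part, not a confirmed fix) is to tell TensorFlow the environment is CPU-only and to raise its C++ log threshold, so the cuDNN/cuFFT/cuBLAS registration messages are suppressed. A sketch of the extra lines in the Dockerfile:

```dockerfile
FROM public.ecr.aws/lambda/python:3.10

# Hypothetical mitigation: hide any GPU devices from TensorFlow, since the
# Lambda environment is CPU-only anyway.
ENV CUDA_VISIBLE_DEVICES=-1

# Raise TensorFlow's native log threshold.
# 0 = all, 1 = filter INFO, 2 = filter INFO+WARNING, 3 = filter INFO+WARNING+ERROR.
# The factory-registration messages are logged at ERROR level, so 3 is needed
# to hide them completely.
ENV TF_CPP_MIN_LOG_LEVEL=3
```

This only silences the log output; whether the early invocation failures share the same root cause I can't say. I also saw suggestions to install the tensorflow-cpu package instead of tensorflow in a CPU-only image, but I haven't verified that it changes the behavior here.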
Any help is appreciated. Thanks.