This is a three-part question. I'm converting my computer vision application, which uses tfjs-node, to run under serverless-offline, and it's working. However, each time the function is invoked, it prints out:
Platform node has already been set. Overwriting the platform with [object Object].
cpu backend was already registered. Reusing existing backend factory.
The kernel '_FusedMatMul' for backend 'cpu' is already registered
The kernel 'Abs' for backend 'cpu' is already registered
The kernel 'Acos' for backend 'cpu' is already registered
The kernel 'Acosh' for backend 'cpu' is already registered
...
The kernel 'Unpack' for backend 'tensorflow' is already registered
The kernel 'UnsortedSegmentSum' for backend 'tensorflow' is already registered
The kernel 'ZerosLike' for backend 'tensorflow' is already registered
tensorflow backend was already registered. Reusing existing backend factory.
Which is fine by itself, but it raises a few questions.
Q1:
If tfjs-node detects that the platform, backend, and kernels are already registered, is there a way to check for that myself and avoid printing these warnings on every invocation?
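What I have in mind is something like the sketch below. It assumes the warnings appear because this module gets re-evaluated on each invocation under serverless-offline, so the tfjs-node instance is cached on the Node `global` object and only required once per process (the `__tf` property name is just a placeholder I made up):

```typescript
// Sketch, not a confirmed fix: cache the tfjs-node instance on `global` so
// backend/kernel registration runs only once per process, even if this file
// is re-evaluated on every invocation. `__tf` is an arbitrary property name.
const globalAny = global as any;

if (!globalAny.__tf) {
  globalAny.__tf = require('@tensorflow/tfjs-node');
}

const tf = globalAny.__tf as typeof import('@tensorflow/tfjs-node');

export { tf };
```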
Q2:
If tfjs-node is already loaded, is there a way to check whether the model has already been loaded in a subsequent function invocation, so I can avoid calling loadSavedModel again?
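Something along these lines is what I'm picturing: keep the model in a module-level variable outside the handler so a warm invocation reuses it instead of calling loadSavedModel again. The model path and handler shape below are placeholders:

```typescript
import * as tf from '@tensorflow/tfjs-node';

// Module-level cache: survives warm invocations within the same process.
let model: Awaited<ReturnType<typeof tf.node.loadSavedModel>> | undefined;

async function getModel() {
  if (!model) {
    // Only pay the loading cost on a cold start.
    model = await tf.node.loadSavedModel('/path/to/saved_model');
  }
  return model;
}

export const handler = async (event: unknown) => {
  const m = await getModel();
  // ... build the input tensor from `event` and call m.predict(...) ...
  return { statusCode: 200 };
};
```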
Q3:
If I need to load a different model with loadSavedModel, is there a way to flush the model currently in memory before calling loadSavedModel again?
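For example, is a dispose-then-reload pattern like this the right way to do it? I'm assuming the object returned by loadSavedModel exposes dispose() to release its native resources; the path is a placeholder:

```typescript
import * as tf from '@tensorflow/tfjs-node';

let current: Awaited<ReturnType<typeof tf.node.loadSavedModel>> | undefined;

// Sketch of what I mean by "flushing": dispose the model that is currently
// in memory, then load the new one. I'm assuming dispose() releases the
// native SavedModel resources; newPath would be e.g. '/path/to/other_model'.
async function swapModel(newPath: string) {
  if (current) {
    current.dispose();
    current = undefined;
  }
  current = await tf.node.loadSavedModel(newPath);
  return current;
}
```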
Thank you