I would like to know the device placements of operators from a model retrieved from tensorflow hub.
I think the code will explain it better. Here is test.py:
import tensorflow as tf
import tensorflow_hub as hub
tf.debugging.set_log_device_placement(True)
layer = hub.KerasLayer('https://tfhub.dev/tensorflow/retinanet/resnet152_v1_fpn_1024x1024/1')
model = tf.keras.Sequential([layer])
When I run the script, it does print some device placement, but nothing related to any relevant operations.
I can get a list of all unique operators on a unix system using:
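For reference, here is a minimal sketch of the kind of pipeline I mean (the log file name is my own choice; device placement log lines in TF 2.x look like "Executing op Conv2D in device /job:localhost/..."):

```shell
# Capture the placement log first, e.g.:
#   python test.py 2> placement.log
# Each relevant line looks like:
#   ... Executing op Conv2D in device /job:localhost/replica:0/task:0/device:GPU:0
# Extract the op name (third field of the match) and deduplicate:
grep -o 'Executing op [A-Za-z0-9_]*' placement.log | awk '{print $3}' | sort -u
```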
Hello @lgusm, thanks for taking the time to read my question.
So I am working on a tensorflow plugin for a device and I need real-world models to drive the development.
Here is what my workflow has been looking like so far:
My first focus was on supporting ResNet50 operators, since it is a widely available model, and I have an implementation of it that doesn't come from the hub.
I made a simple Python script that loads the model with the tf.debugging.set_log_device_placement(True) option activated; this gives me a list of all the kernels I need to implement to support ResNet50.
Now I would like to support other real-world tensorflow models, and tensorflow hub is a good place to find them.
However, when I load a model from tensorflow hub like the RetinaNet in the example above, I don't see any of the relevant kernels (I would expect to see Conv2D, for example).
I understand that a tensorflow hub model encapsulates the model as a KerasLayer, but at a lower level I assume that Conv2D and the other operators/kernels will still be executed. I would like to retrieve which kernels/operators a model from tensorflow hub is going to execute.
Let me know if my question is clearer now.
IDK how device placement works with SavedModel (hub just wraps SavedModel).
But try having a look at:
import tensorflow_hub as hub

# load the saved_model
mod = hub.load('https://tfhub.dev/tensorflow/retinanet/resnet152_v1_fpn_1024x1024/1')
# Get the graph out of the default signature and convert it to a graph_def proto.
mod.signatures['serving_default'].graph.as_graph_def()
Awesome, I can extract the list of operators from this. Thank you!
By the way, is there any documentation about "graph_def"?
I am currently able to extract the list of ops by converting the graph_def to a string and parsing the info out of it, but I am sure I could extract the relevant information from the proto directly.
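In case it helps anyone else, here is a small sketch of reading the op types from the proto instead of string-parsing (the helper name is mine; a GraphDef stores top-level nodes in `node` and the bodies of tf.functions in `library.function[*].node_def`):

```python
def list_ops(graph_def):
    """Collect the unique op types in a GraphDef, including ops that
    live inside tf.functions stored in the function library."""
    ops = {node.op for node in graph_def.node}
    for func in graph_def.library.function:
        ops.update(node.op for node in func.node_def)
    return sorted(ops)

# Usage (assumes tensorflow and tensorflow_hub are installed):
# import tensorflow_hub as hub
# mod = hub.load('https://tfhub.dev/tensorflow/retinanet/resnet152_v1_fpn_1024x1024/1')
# print(list_ops(mod.signatures['serving_default'].graph.as_graph_def()))
```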
Is there an explanation of why the device placement logging "doesn't work" for operators within hub models? (If so, I am deducing that hub models are loaded inside a tf.function.)