Hi,
I am new to the TensorFlow-MLIR domain and want to extract MLIR (no specific dialect) from a small model written in TF. I was able to convert the GraphDef into textual MLIR using the experimental APIs (tf.mlir.experimental.convert_graph_def | TensorFlow v2.16.1). Is there another way to do this without explicitly calling these APIs (maybe while running the Python script)? Thanks
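For reference, here is roughly what I am doing now (a minimal sketch; the toy model is just a stand-in for my actual one):
import tensorflow as tf
# Toy stand-in for the real model
@tf.function
def toy_model(x):
    return tf.nn.relu(tf.matmul(x, x))
graph_def = toy_model.get_concrete_function(
    tf.TensorSpec([2, 2], tf.float32)).graph.as_graph_def()
# Experimental API: GraphDef -> textual MLIR (TF dialect)
mlir_text = tf.mlir.experimental.convert_graph_def(graph_def)
print(mlir_text)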
@Rajan_Singh
When running your Python script, set the environment variable TF_DUMP_GRAPH_PREFIX to a directory where you want the intermediate graphs saved. TensorFlow will then dump the GraphDefs it produces during graph transformations to files under that location.
For example:
import os
import tensorflow as tf
# Point TF_DUMP_GRAPH_PREFIX at the dump directory before running the model
os.environ['TF_DUMP_GRAPH_PREFIX'] = '/path/to/dump'
# Build and run your TensorFlow model here
# Afterwards, the dumped GraphDef files appear under /path/to/dump
This way, you can avoid explicitly calling the experimental API and still get your hands on the intermediate graphs. (Note that this primarily gives you GraphDef dumps; .mlir files only show up in that directory when MLIR-based passes actually run.)
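In case it helps, here is a self-contained version of the setup (the tf.function is just a placeholder model; exactly which dump files appear depends on your TF build and on which graph passes run):
import os
os.environ['TF_DUMP_GRAPH_PREFIX'] = '/tmp/tf_dump'  # set before running the model
import tensorflow as tf
@tf.function
def toy(x):
    return x * x + 1.0
# Tracing and executing the function builds and optimizes a graph,
# which is when TensorFlow writes its dump files under /tmp/tf_dump
toy(tf.constant(2.0))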
Thanks @BadarJaffer , but I want to save the MLIR instead of the GraphDef. Is there any way for doing that?
@Rajan_Singh
You could try the tf.autodiff.experimental.to_mlir
function to convert your model to MLIR. Here’s a snippet to help you out:
import os
import tensorflow as tf
# Set the environment variable for MLIR dump path
os.environ['MLIR_DUMP_PATH'] = '/path/to/dump.mlir'
# Build and run your TensorFlow model here
# Convert the model to MLIR and save it
mlir_code = tf.autodiff.experimental.to_mlir(your_model)
with open('/path/to/dump.mlir', 'w') as f:
    f.write(mlir_code)
This should do the trick for saving the MLIR directly.
Gives “AttributeError: module 'tensorflow._api.v2.autodiff' has no attribute 'experimental'” for TF v2.15.0. They might have removed the experimental API.
@Rajan_Singh For TF v2.15.0 and beyond, you could try the following approach without the experimental API:
import os
import tensorflow as tf
from tensorflow.compiler.mlir.mlir_export import to_mlir
# Set the environment variable for the MLIR dump path
os.environ['MLIR_DUMP_PATH'] = '/path/to/dump.mlir'
# Build and run your TensorFlow model here
# Convert the model to MLIR and save it
mlir_code = to_mlir(your_model)
with open('/path/to/dump.mlir', 'w') as f:
    f.write(mlir_code)
Give this updated code a shot, and it should work.
They removed this API as well. Instead, they have the tensorflow.compiler.mlir.stablehlo, .lite, and .quantization modules. No clue what these actually do.
@Rajan_Singh
Let’s take a different route using the StableHLO module.
import os
import tensorflow as tf
from tensorflow.compiler.mlir.tensorflow import to_mlir
# Set the environment variable for MLIR dump path
os.environ['MLIR_DUMP_PATH'] = '/path/to/dump.mlir'
# Build and run your TensorFlow model here
# Convert the model to MLIR and save it
mlir_code = to_mlir(your_model)
with open('/path/to/dump.mlir', 'w') as f:
    f.write(mlir_code)
This should work for newer versions of TensorFlow, assuming the tensorflow.compiler.mlir.tensorflow module exposes an interface for exporting MLIR.
No such module/function exists in mlir.tensorflow. Anyway, thanks for the replies.
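For anyone landing here later: the closest working thing I found is still the experimental API, namely tf.mlir.experimental.convert_function, which takes a ConcreteFunction and returns textual MLIR (a minimal sketch; as an experimental API it may change between releases):
import tensorflow as tf
# Placeholder model for illustration
@tf.function
def toy_model(x):
    return x + 1.0
concrete = toy_model.get_concrete_function(tf.TensorSpec([2], tf.float32))
# Experimental API: ConcreteFunction -> textual MLIR (TF dialect)
mlir_text = tf.mlir.experimental.convert_function(concrete)
with open('/path/to/dump.mlir', 'w') as f:
    f.write(mlir_text)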