How to get LLVM IR from XLA tfcompile

Hi,

I am using XLA AOT to compile my model (using the tf_library macro). All works well and the .h and object files are generated. Now I would like to get the LLVM IR that was internally generated by tfcompile, but I cannot find a way to do so, nor can I find any tfcompile flag that would let me dump it.

Is there any way to get the LLVM IR when using XLA AOT and tfcompile?

Thanks

Have you tried setting the envs in:

OpenXLA Project?

Hi Bhack,

Thanks for the tip.

Yes, I tried that, but it did not work. It seems that may work when you are using JIT (not AOT) in your TensorFlow program (Python); at least that is how I understand that example.

In my case I am using AOT and the tf_library macro, so I use bazel build to invoke the macro, and in this setup I do not know how to pass XLA_FLAGS. tf_library has an input option called flags, but XLA_FLAGS is not one of those. I did set the XLA_FLAGS env variable, just in case, but it did not work (I don’t think bazel reads Linux env variables by default).

I am literally following this guide in order to get the graph compiled.

In short, after preparing the frozen graph, creating graph.config.pbtxt, and updating the BUILD file with the tf_library macro info, you call:

bazel build --show_progress_rate_limit=600 @org_tensorflow//:graph

That works: the header file and the cc_library are generated, but I cannot get the LLVM IR, and I do not know how to pass XLA_FLAGS in this case.
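
For reference, the BUILD entry looks roughly like this (the file and class names here are placeholders, not my real ones):

# Sketch of the tf_library entry behind the @org_tensorflow//:graph target.
tf_library(
    name = "graph",
    graph = "frozen_graph.pb",           # the frozen GraphDef
    config = "graph.config.pbtxt",       # feeds/fetches configuration
    cpp_class = "mynamespace::MyGraph",  # name of the generated C++ class
)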

Any ideas? Maybe there is another way to use AOT that would let me pass that flag.

Thanks

I don’t know if this is still available or has been substituted with something else:

Yes, I tried that too.

In the latest master branch the parameter is called --xla_dump_to

https://github.com/tensorflow/tensorflow/blob/master/tensorflow/compiler/xla/service/cpu/cpu_compiler.cc

But it did not work. However, I am not sure I am passing the parameter correctly. What I did was this:

bazel build --show_progress_rate_limit=100 --xla_dump_to='mypath' @org_tensorflow//:graph

But bazel rejects the option as unrecognized [ERROR].

Maybe it is just that this is not the correct way to pass that option (it certainly looks like that), but I do not know how else to pass the parameter. Do you have any idea?

Thanks again for your help!

Have you already tried also passing it via tfcompile_flags:

Yep, I also tried that, but it did not work either. I also took a look at the flags.cc file, and it does not look like --xla_dump_to is supported there. So cpu_compiler.cc seems to contemplate that flag, but then I am banging my head against the wall trying to provide it :exploding_head:

I’ve recompiled TF on a fresh master checkout:

# Build the standalone tfcompile binary:
bazel build tensorflow/compiler/aot:tfcompile

# Tell XLA what to dump and where, via the environment:
export XLA_FLAGS="--xla_hlo_profile --xla_dump_to=/tmp/foo --xla_dump_hlo_as_text"

# Run tfcompile directly on the test graph that ships with TF:
bazel-bin/tensorflow/compiler/aot/tfcompile --graph=./tensorflow/compiler/aot/test_graph_tfadd.pbtxt --config=tensorflow/compiler/aot/test_graph_tfadd.config.pbtxt --cpp_class="myns::test"

ls -1 /tmp/foo/

1627458376191776.module_0000.tfcompile.8.before_optimizations.txt
1627458376191776.module_0000.tfcompile.8.cpu_after_optimizations-buffer-assignment.txt
1627458376191776.module_0000.tfcompile.8.cpu_after_optimizations.txt
execution_options.txt
module_0000.tfcompile.8.buffer_assignment
module_0000.tfcompile.8.ir-no-opt-noconst.ll
module_0000.tfcompile.8.ir-no-opt.ll
module_0000.tfcompile.8.ir-with-opt-noconst.ll
module_0000.tfcompile.8.ir-with-opt.ll
module_0000.tfcompile.8.o

It seems to work: the .ll files above are the LLVM IR (ir-no-opt* is the IR before LLVM's optimization passes, ir-with-opt* after them). So with bazel, have you tried setting the envs with:

https://bazel.build/designs/2016/06/21/environment.html#proposed-solution

Hi Bhack,

Calling tfcompile through the BUILD macro still did not work with the LLVM flag, even after following the bazel doc you provided for passing the env variables. (I'm still quite new to bazel, so maybe I'm the one to blame here, not sure.)

Anyway, I followed your example calling tfcompile directly, and that DID work. I can see the lovely .ll files.

So thank you very much for your help and for the detailed explanation of how to get the LLVM dump.

Did you use something like bazel build --action_env=XLA_FLAGS="--xla_hlo_profile --xla_dump_to=/tmp/foo --xla_dump_hlo_as_text" ...?

Hi Bhack,

Yep, that is what I did.
I also tried exporting the variable with its value first and then passing just --action_env=XLA_FLAGS, but that did not work either.
Anyhow, I got it working, not through the BUILD macro but by calling tfcompile directly (as in your example).

@markdaoust Is this still supported using bazel and not documented?

I don’t know the details.

But have you seen the experimental_get_compiler_ir method on the GenericFunction class returned by tf.function? Is there maybe a way that it could help here?

I think this is runtime JIT and not for AOT, right?

If you know someone who is working on this area, it could be nice to know how users could use this with bazel build, and to add the command to that doc page.

Yes.

You can do anything in a bazel genrule but hopefully there’s a cleaner option somewhere.
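
For example, something like this untested sketch (file and target names are placeholders): it sets XLA_FLAGS inside the genrule command itself, so bazel's environment scrubbing doesn't get in the way, and it archives the dump directory because genrule outputs must be declared files and the dump file names contain timestamps:

genrule(
    name = "graph_llvm_ir",
    srcs = [
        "frozen_graph.pb",      # placeholder frozen graph
        "graph.config.pbtxt",   # placeholder feeds/fetches config
    ],
    outs = ["xla_dump.tar"],
    tools = ["//tensorflow/compiler/aot:tfcompile"],
    # Set the env var inside the action, run tfcompile, then tar up the dump.
    cmd = "export XLA_FLAGS='--xla_dump_to=$$(pwd)/xla_dump --xla_dump_hlo_as_text'" +
          " && $(location //tensorflow/compiler/aot:tfcompile)" +
          " --graph=$(location frozen_graph.pb)" +
          " --config=$(location graph.config.pbtxt)" +
          " --cpp_class='mynamespace::MyGraph'" +
          " && tar cf $@ xla_dump",
)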

If you know someone who is working on this area

I’ll try.

Hi,
I have tried to follow the code in this thread; however, it no longer works. I believe it's because I am unable to build tfcompile.

I have tried many different ways with no luck. I can, however, use XLA AOT through bazel. How do I go about adding the flags to dump the LLVM IR?

Could I also ask if there is a way to make the LLVM IR hardware-agnostic, or at least agnostic to the specific CPU while still using the CPU backend?
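
(For what it's worth, I see that tfcompile itself has --target_triple and --target_cpu flags. Would something along these lines, with placeholder paths and values, be the right way to target a generic CPU?)

# Placeholder invocation; paths, class name, triple, and CPU are examples only.
bazel-bin/tensorflow/compiler/aot/tfcompile \
  --graph=./frozen_graph.pb \
  --config=./graph.config.pbtxt \
  --cpp_class="mynamespace::MyGraph" \
  --target_triple="x86_64-unknown-linux-gnu" \
  --target_cpu="generic"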