Do pluggable devices also support FPGAs?
Hi Chris,
Sorry for the late reply! We talked through another channel, but I'll post here too for others' info:
If the FPGA code can connect to TensorFlow through the C API, it should work. Here are the overall steps:
- Create a PluggableDevice.
- Write your custom TensorFlow kernels/ops and register them with TensorFlow through the kernel and op registration C APIs. (We also recently extended these to support ResourceVariable ops.)
- Use the StreamExecutor C API for device execution and memory management.
- If you’d like to do graph optimization, your plug-in can register a custom graph optimization pass through the graph optimization C API.
- We are also looking into a TF Profiler C API for PluggableDevices.
Here’s a tutorial and example code under construction.
I should also add that PluggableDevice is focused on TensorFlow's current runtime stack. It may require some migration effort to work with the new runtime stack.
Best,
Penporn