uday
March 22, 2022, 6:09pm
1
I wanted to know what the policy/approach is to supporting lowering for certain useful TF raw ops: these currently aren’t in the MLIR TF dialect, and several higher-level abstractions are de-abstracted through them. As an example, the tensorflow addons package’s tfa.image.translate
lowers through the general “projective transformation” op (tf.ImageProjectiveTransformV3, which can model combinations of rotation, scaling, skewing, translation, etc.), and supporting it is extremely useful for further optimization and code generation. I’ve added a lowering for this op from TF to lower-level TF ops for a typical case (commit link below):
tensorflow:master ← polymage-labs:uday/projective_transformation_lowering (opened 06:07PM - 22 Mar 22 UTC)
Add TF to TF lowering for projective image transformations modeled by the tf.ImageProjectiveTransformV3 op. Add this op to the TF dialect. Lower projective transformations in the case of “translations” to pad + slice ops.
Without such a lowering, the op fails conversion beyond the MLIR TF dialect. I’m assuming TF/MLIR is open to contributions that lower such ops?
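For the pure-translation case the commit handles, the idea can be sketched in plain TF Python: an integer translation of an image is just a constant pad followed by a slice. This is only an illustrative sketch (the helper name and the integer-offset restriction are my own assumptions), not the code from the commit:

```python
import tensorflow as tf

def translate_via_pad_slice(image, dx, dy):
    """Shift an HxWxC image right by dx and down by dy (Python ints),
    filling the exposed region with zeros, using only pad + slice.
    Hypothetical helper mirroring the translation-only lowering idea."""
    h, w = tf.shape(image)[0], tf.shape(image)[1]
    # Pad both sides of each spatial axis by the absolute offset (zero fill).
    padded = tf.pad(image, [[abs(dy), abs(dy)], [abs(dx), abs(dx)], [0, 0]])
    # Slice back out an HxW window whose origin is displaced by the offsets.
    return tf.slice(padded, [abs(dy) - dy, abs(dx) - dx, 0], [h, w, -1])

# For integer offsets with constant (zero) fill, this should match
# tfa.image.translate(image[None], [float(dx), float(dy)])[0].
```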
Bhack
March 22, 2022, 6:18pm
3
We are also discussing this in some augmentation/preprocessing layer performance tickets:
opened 04:37PM - 10 Mar 22 UTC
type:feature
**System information**
TensorFlow version (you are using): master
Are you willing to contribute it (Yes/No): I need more detail
**Describe the feature and the current behavior/state**
I think that we need to cover the core image processing transformations with TF native ops. Currently, a core transformation in preprocessing still relies on a numpy/scipy implementation:
https://github.com/keras-team/keras/blob/master/keras/preprocessing/image.py#L2622
keras-team:master ← bhack:patch-2
> Yeah, this is a `vectorized_map` quality issue which unfortunately doesn't have a quick fix. For stateless ops (e.g. `ImageProjectiveTransformV3`) we can gradually add `vectorized_map` rules for them. For stateful ops (e.g. `RandomUniformInt`, `RngReadAndSkip`) there really isn't anything `vectorized_map` can do other than fallback.
So I suppose that we will not see a sensible performance gain with all these fallbacks.
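To make the fallback concrete, here is a minimal sketch (my own illustration, not code from the ticket) of batching the op with `tf.vectorized_map`; when no vectorization (pfor) converter is registered for an op, TF logs a warning about using a while_loop instead:

```python
import tensorflow as tf

images = tf.random.uniform([8, 32, 32, 3])
# One projective transform per image: [a0, a1, a2, b0, b1, b2, c0, c1];
# here a pure translation by (2, 3) pixels.
transforms = tf.tile(tf.constant([[1., 0., -2., 0., 1., -3., 0., 0.]]), [8, 1])

def per_image(args):
    img, t = args
    return tf.raw_ops.ImageProjectiveTransformV3(
        images=img[None], transforms=t[None],
        output_shape=tf.shape(img)[:2],
        fill_value=tf.constant(0.0),
        interpolation="NEAREST")[0]

# Without a registered converter for the op, vectorized_map warns and
# falls back to a while_loop over the batch, losing the batching speed-up.
out = tf.vectorized_map(per_image, (images, transforms))
```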
I have a few extra points:
- Are the `Fill`, `Range`, `Rescape`, `Bitcast`, etc. fallbacks caused by the limited `vectorized_map` coverage, as with `ImageProjectiveTransformV3`, or are they caused by the random policy in these ops' args?
- If they are due to a coverage limit, how should we handle these limits? Do you want us to open a ticket in the TF GitHub repository for each individual op?
- Is there a way to retrieve the list of covered ops? E.g. the [XLA list has not been updated for years](https://github.com/tensorflow/tensorflow/issues/14798#issuecomment-1047796247), but I suppose we can assume the coverage is not the same, as the two are orthogonal.
- `ImageProjectiveTransformV3` is going to fall back with `vectorized_map` and fail with `jit_compile` (sketched below). Do we need to open two separate tickets?
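And a sketch of the `jit_compile` case from the last point, only as an illustration of how one would reproduce it (the function name and inputs are my own assumptions):

```python
import tensorflow as tf

@tf.function(jit_compile=True)
def transform_batch(images, transforms):
    return tf.raw_ops.ImageProjectiveTransformV3(
        images=images, transforms=transforms,
        output_shape=tf.shape(images)[1:3],
        fill_value=tf.constant(0.0),
        interpolation="NEAREST")

images = tf.random.uniform([8, 32, 32, 3])
transforms = tf.tile(tf.constant([[1., 0., -2., 0., 1., -3., 0., 0.]]), [8, 1])
# Per the discussion above, this is expected to fail to compile because
# no XLA lowering is registered for ImageProjectiveTransformV3.
out = transform_batch(images, transforms)
```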
uday
March 22, 2022, 6:31pm
4
It’s the same op, but my post isn’t about XLA proper or the TF → XLA support (yes, this isn’t supported on the TF → XLA path either).
Bhack
March 22, 2022, 6:44pm
5
Yes, sorry. It is super hard, for the average contributor or end user, to understand when and where MLIR is or isn’t involved in a compilation path:
https://github.com/tensorflow/tensorflow/issues/53301#issuecomment-1005349596