I am modifying `identity_3d`, which is initialized as an `n` by `n` by `n` numpy array per the following operations:

```python
import numpy as np

identity_3d = np.zeros((n, n, n))
idx = np.arange(n)
identity_3d[:, idx, idx] = 1
I, J = np.nonzero(wR == 0)  # wR is an n by n array defined elsewhere
identity_3d[I, :, J] = 0
identity_3d[I, J, :] = 0
```
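For context, here is a complete runnable version of the snippet, with `n` and a dummy `wR` filled in (the `wR` values here are an assumption for illustration; its zeros mark the row/column pairs to clear):

```python
import numpy as np

n = 3
# Assumed dummy n x n array; zero entries mark positions to clear.
wR = np.array([[1, 0, 1],
               [0, 1, 1],
               [1, 1, 1]])

identity_3d = np.zeros((n, n, n))
idx = np.arange(n)
identity_3d[:, idx, idx] = 1          # each identity_3d[i] is now eye(n)

I, J = np.nonzero(wR == 0)            # positions where wR is zero
identity_3d[I, :, J] = 0              # clear column J in slice I
identity_3d[I, J, :] = 0              # clear row J in slice I
```

The result is a stack of `n` identity matrices, with selected rows and columns zeroed out wherever `wR` is zero.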

If `identity_3d` were a TensorFlow Tensor instead, is there a way to perform the equivalent operation?

Bhack
June 26, 2021, 8:11pm
2
Do you have a complete numpy running example?

1 Like

Bhack
June 27, 2021, 12:24am
4
There is no direct slice assignment for Tensor that maps to the numpy syntax.
As you can see, it is currently not available in the TF experimental numpy API either.

But it is a very frequent topic; take a look at:

opened 03:54AM - 08 Oct 19 UTC

stat:awaiting tensorflower
type:feature
comp:ops
TF 2.11

As in numpy or pytorch, we can do something like this, but how to do it with tf2.0?
The following code raises the exception
`'tensorflow.python.framework.ops.EagerTensor' object does not support item assignment`:
`prediction[:,:,0] = tf.math.sigmoid(prediction[:,:,0])`
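For this particular pattern, one workaround (a sketch, not part of the original issue) is to rebuild the tensor with `tf.concat` instead of assigning into it:

```python
import tensorflow as tf

prediction = tf.random.uniform((2, 3, 4))

# Emulate `prediction[:, :, 0] = tf.math.sigmoid(prediction[:, :, 0])`
# by constructing a new tensor rather than mutating the old one.
updated = tf.math.sigmoid(prediction[:, :, :1])  # slice with :1 to keep the last axis
prediction = tf.concat([updated, prediction[:, :, 1:]], axis=-1)
```

This trades an in-place write for a copy, which is fine for a single channel but does not generalize cheaply to arbitrary scattered indices.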

opened 09:26PM - 07 Feb 20 UTC

closed 06:02PM - 12 Feb 20 UTC

stat:awaiting tensorflower
type:feature
comp:keras
TF 2.1

Suppose we have `x = K.zeros((4, 6))`, and we wish to add 1 to row 0: `x[0] += 1… `. The variable is created via `Layer`'s [`add_weight()`](https://github.com/keras-team/keras/blob/master/keras/engine/base_layer.py#L250) w/ `training=False`, so it isn't updated via backprop. What is the most _speed-efficient_ way to do so?
---
**Context**: I'm implementing recurrent batch normalization, with `moving_mean` and `moving_variance` variables distinct for each timestep in an RNN - each thus having a shape of `(units, timesteps)`. The goal is to update one `timesteps` slice per step via `K.moving_average_update()`. One approach is as follows:
```python
import tensorflow.keras.backend as K
units, timesteps = 4, 6
x = K.zeros((units, timesteps), dtype='float32', name='x')
x_new = x[:units, 0].assign(K.ones((units,), dtype='float32')) # dummy example
K.set_value(x, K.get_value(x_new))
print(K.get_value(x))
```
```python
[[1. 0. 0. 0. 0. 0.]
[1. 0. 0. 0. 0. 0.]
[1. 0. 0. 0. 0. 0.]
[1. 0. 0. 0. 0. 0.]]
```
Looks good - except, a _new copy_ of `x` was created. In practice, we can have `timesteps > 100` (e.g. 120), so we are creating an array 120x larger than it needs to be, 120 times (1 / step), making it an `O(timesteps**2)` operation - as opposed to usual slicing, `O(timesteps)`.
Is there anything more efficient? Doesn't have to be `keras`, just at least `tf.keras`-friendly.
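One sketch of a cheaper update, assuming the weight is backed by a `tf.Variable` (which `add_weight()` produces): a sliced `.assign` mutates the variable in place, avoiding the round trip through `get_value`/`set_value`:

```python
import tensorflow as tf

units, timesteps = 4, 6
x = tf.Variable(tf.zeros((units, timesteps)), name='x')

# For a tf.Variable (unlike an EagerTensor), a sliced .assign updates
# the underlying buffer in place -- no full copy is materialized in Python.
x[:, 0].assign(tf.ones((units,)))
```

This touches only the `(units,)` slice per step, so the per-step cost no longer scales with `timesteps`.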

opened 09:28AM - 19 Jun 20 UTC

stat:awaiting tensorflower
type:feature
comp:ops

**System information**
- TensorFlow version (you are using): 2.2
- Are you willing to contribute it (Yes/No): Yes
**Describe the feature and the current behavior/state.**
I would like to have slice assignment for Tensor objects in TensorFlow.
The code I would like to write is:
```python
import tensorflow as tf
a = tf.constant([1, 2, 4, 5, 7, 3, 2, 6,])
indices = tf.constant([3, 4], dtype=tf.int32)
a[indices] += 1
```
Of course it's a simplistic example and doesn't cover everything I want to do (I would use it in more complex functions not with constants), and I am happy to make it more complex if necessary.
Currently this code gives the error:
```
TypeError: Only integers, slices (`:`), ellipsis (`...`), tf.newaxis (`None`) and scalar tf.int32/tf.int64 tensors are valid indices, got <tf.Tensor: shape=(2,), dtype=int32, numpy=array([3, 4], dtype=int32)>
```
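For this simplistic example, the `tensor_scatter_nd_add` workaround mentioned later in the issue can express the update, though verbosely (a sketch, not from the issue itself):

```python
import tensorflow as tf

a = tf.constant([1, 2, 4, 5, 7, 3, 2, 6])
indices = tf.constant([[3], [4]])  # one inner list per position to update

# Equivalent of the desired `a[indices] += 1`
a = tf.tensor_scatter_nd_add(a, indices, tf.ones([2], dtype=a.dtype))
# a is now [1, 2, 4, 6, 8, 3, 2, 6]
```

Note the extra nesting of `indices` and the explicit `updates` tensor, which is exactly the verbosity the issue complains about.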
**Will this change the current api? How?**
I guess this is a change of API since it introduces new functionality.
**Who will benefit with this feature?**
A lot of people have been asking for this feature, for example in these GitHub issues:
- https://github.com/tensorflow/tensorflow/issues/14132#issuecomment-483002522
- https://github.com/tensorflow/tensorflow/issues/33131
These issues have unfortunately been closed because some workarounds for specific use-cases have been found (ones where the slicing is fixed and you can use [masking](https://github.com/tensorflow/tensorflow/issues/14132#issuecomment-483002522) or [TensorArrays](https://github.com/tensorflow/tensorflow/issues/14132#issuecomment-487643287)).
Some other issues deal with `Variable`s which is not what I am talking about here. [Some workarounds do exist](https://stackoverflow.com/a/62202181/4332585) involving `Variable` but they seem hacky.
I will personally benefit from it in the multiple places where I now use `tensor_scatter_nd_add` or `tensor_scatter_nd_update`, which is a solution that always works but is very difficult to write and very slow:
- [for a wavelet-based neural network, called MWCNN](https://github.com/zaccharieramzi/tf-mwcnn/blob/master/mwcnn.py#L106-L110);
- [for non-uniform fast fourier transform](https://github.com/zaccharieramzi/tfkbnufft/blob/master/tfkbnufft/nufft/interp_functions.py#L151);
- [for sensitivity map extraction when doing MRI reconstruction with TensorFlow neural networks](https://github.com/zaccharieramzi/fastmri-reproducible-benchmark/blob/master/fastmri_recon/data/utils/multicoil/smap_extract.py#L27-L35).
**Any Other info.**
The `tensor_scatter_nd_*` alternative might seem like a viable solution, but it suffers from 2 drawbacks that I consider huge:
- It is very difficult to write. It is actually so difficult, I decided to make a package that would alleviate this difficulty by having the different slicing possibilities unit tested: [tf-slice-assign](https://github.com/zaccharieramzi/tf-slice-assign).
- It is very slow. I made a [benchmark notebook](https://colab.research.google.com/drive/1gEjha7h1mhQkFwULS9MAU0bWQfzfEALY?usp=sharing) vs `pytorch` for slice assignment add. You can see that on GPU, using `tensor_scatter_nd_add` is 10 times slower than slice assignment in `pytorch` and 20 times slower on CPU. For a practical example, it means that my `tfkbnufft` (for non-uniform fast fourier transform) package is 30 times slower than its [torch counterpart](https://github.com/mmuckley/torchkbnufft#computation-speed) which I translated. This currently removes the possibility of training neural networks using the non-uniform fourier transform in TensorFlow.

1 Like