<em>Please make sure that this is a feature request. As per our [GitHub Policy](https://github.com/tensorflow/tensorflow/blob/master/ISSUES.md), we only address code/doc bugs, performance issues, feature requests and build/installation issues on GitHub.</em>
**System information**
- TensorFlow version (you are using): 2.0.0a0
- Are you willing to contribute it (Yes/No): Maybe, if there are any pointers on how to go about implementing something like this.
**Describe the feature and the current behavior/state.**
Currently, the `tf.linalg` operations appear to support only dense tensors. I have verified this with `tf.linalg.solve`. Consider the following variables:
```
import tensorflow as tf

A = tf.sparse.SparseTensor(
    indices=[[0, 0], [0, 1], [0, 2], [1, 0], [1, 1], [1, 2], [2, 0], [2, 1], [2, 2]],
    values=[3., 2., -1., 2., -2., 4., -1., 0.5, -1.],
    dense_shape=(3, 3)
)
b = tf.sparse.SparseTensor(
    indices=[[0, 0], [1, 0], [2, 0]],
    values=[-1., 2., 0.],
    dense_shape=(3, 1)
)
```
Now, if I do
```
tf.linalg.solve(A, b)
```
this errors out with
```
ValueError: Attempt to convert a value (<tensorflow.python.framework.sparse_tensor.SparseTensor object at 0x7f28a1e3fc50>) with an unsupported type (<class 'tensorflow.python.framework.sparse_tensor.SparseTensor'>) to a Tensor.
```
However, if I do
```
tf.linalg.solve(tf.sparse.to_dense(A), tf.sparse.to_dense(b))
```
it works, giving
```
<tf.Tensor: id=11, shape=(3, 1), dtype=float32, numpy=
array([[-1.0000006],
[ 2.0000014],
[ 2.0000012]], dtype=float32)>
```
which is roughly the correct answer. The example is taken from the [scipy sparse matrix factorized](https://docs.scipy.org/doc/scipy/reference/generated/scipy.sparse.linalg.factorized.html) documentation.
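For comparison, the same system can already be solved without densifying in SciPy. This is a minimal sketch of the behavior this feature request asks `tf.linalg.solve` to support natively; it uses `scipy.sparse.linalg.spsolve`, not TensorFlow:

```python
import numpy as np
from scipy.sparse import csc_matrix
from scipy.sparse.linalg import spsolve

# Same system as above, built directly in sparse (CSC) form.
A = csc_matrix(np.array([[ 3.,  2.,  -1.],
                         [ 2., -2.,   4.],
                         [-1.,  0.5, -1.]]))
b = np.array([-1., 2., 0.])

x = spsolve(A, b)  # solves A @ x = b without ever materializing a dense A
print(x)  # approximately [-1.  2.  2.]
```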
**Will this change the current api? How?**
Instead of raising an error, the `tf.linalg` operations would accept `tf.sparse.SparseTensor` inputs and work just as they do for dense inputs; existing dense behavior would be unchanged.
**Who will benefit with this feature?**
I came across this while implementing [NVIDIA's FastPhotoStyle](https://github.com/NVIDIA/FastPhotoStyle) in TensorFlow 2.0 + Keras.
The algorithm essentially has two main steps, plus a post-processing pass:
- A `PhotoWCT` transformation, which is `WCT` but uses max-pooling argmax values as unpooling masks
- A photorealistic smoothing step that removes the geometric distortions introduced into objects by regular style transfer
- An additional post-processing step that applies a GPU-based smoothing filter to further reduce structural defects
The NVIDIA implementation provides a [`photo_wct` net implementation in PyTorch](https://github.com/NVIDIA/FastPhotoStyle/blob/master/photo_wct.py). However, for the second step, [a CPU implementation in SciPy is used](https://github.com/NVIDIA/FastPhotoStyle/blob/master/photo_smooth.py). The second step essentially solves a closed-form system of equations.
Consider [line 48](https://github.com/NVIDIA/FastPhotoStyle/blob/master/photo_smooth.py#L48): it creates a diagonal matrix with one row and column per image pixel, i.e. of size (width·height) × (width·height). For a 512×512 image that is a 2^18 × 2^18 matrix, which in dense form would hold 2^36 floats.
The matrix is later used to [solve a system of equations](https://github.com/NVIDIA/FastPhotoStyle/blob/master/photo_smooth.py#L52).
2^36 `tf.float32` values occupy 2^38 bytes, i.e. 256 GiB of memory, which will definitely not fit. Ideally, I should be able to use TensorFlow's `linalg` module the same way as the SciPy implementation. An additional flexibility with TensorFlow is that it places operations on GPUs automatically. Additionally, I can put the solve in a Keras `Lambda` layer to add it to a model.
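The memory figure above can be checked with simple arithmetic, assuming 4 bytes per `tf.float32` value:

```python
# Rough memory estimate for densifying the smoothing matrix.
n = 512 * 512                 # one row/column per pixel -> 2**18
num_values = n * n            # dense (n x n) matrix -> 2**36 float32 values
num_bytes = num_values * 4    # 4 bytes per float32 -> 2**38 bytes
print(num_bytes // 2**30)     # 256 (GiB)
```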
**Any Other info.**
None.