innat
1
Is it possible to decode image files on the GPU while training a model? Resizing, rescaling, etc. can be done as part of the model.
with tf.device('/GPU:0'):
    tf.io.decode_*

model = Sequential(
    [
        ImageReader(),
        ImageResizer(),
        ImageNetModel(),
        ...
    ]
)
Reference: NVIDIA Data Loading Library (DALI) | NVIDIA Developer
Bhack
2
I don’t think we currently have GPU decoding.
We had a thread about preprocessing + decoding at:
https://github.com/keras-team/keras-cv/pull/146#issuecomment-1048128659
We have also discussed something for Video:
https://github.com/tensorflow/io/issues/840
Another emerging approach is:
RGB no more: Minimally-decoded JPEG Vision Transformers
Does NVIDIA DALI suit your use case? I have not used it, but it could be an option.
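For reference, a minimal DALI sketch (assuming DALI is installed and a GPU is available; the data path is a placeholder). `device="mixed"` is what moves JPEG decoding onto the GPU via nvJPEG:

```python
from nvidia.dali import pipeline_def, fn

@pipeline_def(batch_size=32, num_threads=4, device_id=0)
def jpeg_pipeline(file_root):
    # Read raw JPEG bytes + labels from a directory tree on the host.
    jpegs, labels = fn.readers.file(file_root=file_root)
    # device="mixed": bytes come from the CPU, nvJPEG decodes on the GPU.
    images = fn.decoders.image(jpegs, device="mixed")
    images = fn.resize(images, resize_x=224, resize_y=224)
    return images, labels

pipe = jpeg_pipeline(file_root="/path/to/images")  # placeholder path
pipe.build()
```

DALI also ships a TensorFlow plugin (`nvidia.dali.plugin.tf.DALIDataset`) that wraps such a pipeline as a `tf.data.Dataset`, so it can feed `model.fit` directly.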
Also, if you are using distribution strategies, see these experimental options:
@tf_export("distribute.InputOptions", v1=[])
class InputOptions(
    collections.namedtuple("InputOptions", [
        "experimental_fetch_to_device",
        "experimental_replication_mode",
        "experimental_place_dataset_on_device",
        "experimental_per_replica_buffer_size",
    ])):
  ...
Perhaps experimental_place_dataset_on_device does what you are looking for.
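A sketch of how these options are passed to a strategy (per `tf.distribute`; note that placing the dataset on device requires per-replica replication and disabling fetch-to-device, and the dataset here is a stand-in):

```python
import tensorflow as tf

strategy = tf.distribute.MirroredStrategy()

# Placeholder dataset; in practice this would decode and batch real images.
dataset = tf.data.Dataset.from_tensor_slices(tf.zeros([8, 4])).batch(2)

options = tf.distribute.InputOptions(
    experimental_fetch_to_device=False,
    experimental_replication_mode=tf.distribute.InputReplicationMode.PER_REPLICA,
    experimental_place_dataset_on_device=True,
)
dist_dataset = strategy.experimental_distribute_dataset(dataset, options)
```

Whether this helps depends on the workload: it controls where the dataset ops themselves are placed, but it does not add a GPU kernel to the decode ops.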