Hi everyone, I'm new to TensorFlow. I'm currently on TensorFlow 2.3.0, using the v1-style session/placeholder API. My project needs to reinitialize the input pipeline regularly, but sometimes a reinitialization makes memory grow rapidly until the process hits an OOM. Has anyone run into this issue before?
Below is what my input pipeline looks like:
```python
# tf here is the v1 API (placeholders/sessions require tf.compat.v1 on TF 2.3)
self.train_holder = tf.placeholder(tf.float32, [None, 32, 32, 3], name="train_holder")
self.label_holder = tf.placeholder(tf.int32, [None], name="label_holder")
input_tuple = (self.train_holder, self.label_holder)
ds = tf.data.Dataset.from_tensor_slices(input_tuple)
map_fn = lambda x, y: (cifar_process(x, is_train), y)
train_dataflow = ds.map(map_fn, num_parallel_calls=tf.data.experimental.AUTOTUNE)
train_ds = (
    train_dataflow
    .repeat()
    .batch(self.batch_size, drop_remainder=True)
    .map(autoaug_batch_process_map_fn,
         num_parallel_calls=tf.data.experimental.AUTOTUNE)
    .prefetch(buffer_size=tf.data.experimental.AUTOTUNE)
)
train_input_iterator = (
    self.strategy.experimental_distribute_dataset(train_ds)
    .make_initializable_iterator()
)
```
Each time, I reinitialize this pipeline with new input via `feed_dict`:

```python
self.sess.run([train_input_iterator.initializer], feed_dict=...)
```
Sometimes this runs out of memory (OOM).
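For reference, here is a stripped-down, self-contained version of the pattern I'm describing: no distribution strategy, no `cifar_process`, toy data, just the placeholder → `from_tensor_slices` → initializable-iterator loop. (All sizes and the epoch count are arbitrary; this is a sketch of the structure, not my real pipeline.)

```python
import numpy as np
import tensorflow.compat.v1 as tf

tf.disable_eager_execution()  # v1 graph/session mode on TF 2.x

# Placeholders let us feed a fresh data shard at each reinitialization
# without baking the arrays into the graph as constants.
train_holder = tf.placeholder(tf.float32, [None, 32, 32, 3], name="train_holder")
label_holder = tf.placeholder(tf.int32, [None], name="label_holder")

ds = tf.data.Dataset.from_tensor_slices((train_holder, label_holder))
ds = ds.repeat().batch(4, drop_remainder=True)

# Iterator and get_next() are built exactly once; the loop below only
# re-runs the initializer, so no new graph ops should be created per epoch.
iterator = tf.data.make_initializable_iterator(ds)
next_batch = iterator.get_next()

sess = tf.Session()
for epoch in range(3):
    # Stand-in for loading a new data shard each epoch (toy zeros here).
    images = np.zeros((16, 32, 32, 3), np.float32)
    labels = np.zeros((16,), np.int32)
    sess.run(iterator.initializer,
             feed_dict={train_holder: images, label_holder: labels})
    x, y = sess.run(next_batch)
```

In my real code the dataset is additionally wrapped with `strategy.experimental_distribute_dataset(...)` before `make_initializable_iterator()`, as shown above.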