I’ve been working with the Conditional GAN example.
It works great on grayscale images; however, when I use a dataset of RGB images it throws an error.
I’ve modified the variables num_categories (6) and num_channels (3) to match my dataset, but later on the tensor sizes of fake_image_and_labels and real_image_and_labels no longer match when they are concatenated.
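To make the mismatch concrete, here’s a minimal sketch of the shape arithmetic at the failing concatenation. The variable names follow the example’s train_step; the batch size is made up, the 28x28 image size comes from the traceback below, and the channel counts are inferred from the reported shapes (7 = 1 + 6 and 9 = 3 + 6):

import tensorflow as tf

batch_size = 64    # made-up value, just for the sketch
image_size = 28    # from the traceback
num_categories = 6
num_channels = 3

# one-hot labels tiled to image resolution, as in the example's train_step
image_one_hot_labels = tf.zeros((batch_size, image_size, image_size, num_categories))

# real images from my RGB dataset
real_images = tf.zeros((batch_size, image_size, image_size, num_channels))

# the generated images apparently still have a single channel (7 - 6 = 1)
generated_images = tf.zeros((batch_size, image_size, image_size, 1))

fake_image_and_labels = tf.concat([generated_images, image_one_hot_labels], -1)
real_image_and_labels = tf.concat([real_images, image_one_hot_labels], -1)
print(fake_image_and_labels.shape)  # (64, 28, 28, 7)
print(real_image_and_labels.shape)  # (64, 28, 28, 9)

# this is the concatenation that raises: every dimension except axis 0 must
# match, but the channel dimensions (7 vs 9) disagree
combined_images = tf.concat([fake_image_and_labels, real_image_and_labels], axis=0)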
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
/tmp/ipykernel_1421/662719055.py in <module>
8 )
9
---> 10 cond_gan.fit(dataset, epochs=20)
/usr/local/lib/python3.8/dist-packages/keras/utils/traceback_utils.py in error_handler(*args, **kwargs)
65 except Exception as e: # pylint: disable=broad-except
66 filtered_tb = _process_traceback_frames(e.__traceback__)
---> 67 raise e.with_traceback(filtered_tb) from None
68 finally:
69 del filtered_tb
/usr/local/lib/python3.8/dist-packages/keras/engine/training.py in tf__train_function(iterator)
13 try:
14 do_return = True
---> 15 retval_ = ag__.converted_call(ag__.ld(step_function), (ag__.ld(self), ag__.ld(iterator)), None, fscope)
16 except:
17 do_return = False
/tmp/ipykernel_1421/3580991076.py in train_step(self, data)
47 fake_image_and_labels = tf.concat([generated_images, image_one_hot_labels], -1)
48 real_image_and_labels = tf.concat([real_images, image_one_hot_labels], -1)
---> 49 combined_images = tf.concat(
50 [fake_image_and_labels, real_image_and_labels], axis=0
51 )
ValueError: in user code:
File "/usr/local/lib/python3.8/dist-packages/keras/engine/training.py", line 1021, in train_function *
return step_function(self, iterator)
File "/usr/local/lib/python3.8/dist-packages/keras/engine/training.py", line 1010, in step_function **
outputs = model.distribute_strategy.run(run_step, args=(data,))
File "/usr/local/lib/python3.8/dist-packages/keras/engine/training.py", line 1000, in run_step **
outputs = model.train_step(data)
File "/tmp/ipykernel_1421/3580991076.py", line 49, in train_step
combined_images = tf.concat(
ValueError: Dimension 2 in both shapes must be equal, but are 7 and 9. Shapes are [28,28,7] and [28,28,9]. for '{{node concat_3}} = ConcatV2[N=2, T=DT_FLOAT, Tidx=DT_INT32](concat_1, concat_2, concat_3/axis)' with input shapes: [?,28,28,7], [?,28,28,9], [] and with computed input tensors: input[2] = <0>.
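If I’m reading those shapes right, fake_image_and_labels has 1 + 6 = 7 channels while real_image_and_labels has 3 + 6 = 9, so the generator still seems to be producing single-channel images. My suspicion is the generator’s output layer, which in the grayscale example hardcodes its filter count to 1. I think it needs to become something like the following (the kernel size and activation here are illustrative, not copied from the example):

from tensorflow.keras import layers

num_channels = 3

# the filter count (first argument) must match the dataset's channel count;
# the grayscale example hardcodes it to 1
output_layer = layers.Conv2D(num_channels, (7, 7), padding="same", activation="sigmoid")

Is that the only place the channel count is hardcoded, or am I missing something else?
Thanks in advance!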