Hi @Siva_Sravana_Kumar_N
Below is an update of my exercise with MAF, Inverted MAF (aka IAF) and RealNVP bijectors. The outcome of the test is:
- MAF seems to accept the x_ structure
- Inverted MAF raises an error at the log_prob_ line
- RealNVP raises an error at the model line
So if I understand your comment correctly, I wonder why MAF is OK?
But more importantly, I would like a solution for training with X data of shape (N, 2). Here is what I am using so far:
model.compile(optimizer=tf.optimizers.Adam(learning_rate=1e-4),
              loss=lambda _, log_prob: -tf.reduce_mean(log_prob))  # loss signature is fn(y_true, output_of_the_model)
hist = model.fit(x=X,
                 y=np.zeros((n_samples, 0), dtype=np.float32),  # dummy targets; the loss only uses log_prob
                 batch_size=BATCH_SIZE,
                 epochs=NEPOCHS,
                 shuffle=True,
                 verbose=0)
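For reference, the only alternative I can think of is to skip model.fit entirely and train the flow with a plain GradientTape loop on trans_dist.log_prob. This is a rough, untested sketch only; it reuses X, BATCH_SIZE and NEPOCHS from above and assumes trans_dist is a TransformedDistribution built as in the test below:

optimizer = tf.optimizers.Adam(learning_rate=1e-4)
dataset = (tf.data.Dataset.from_tensor_slices(tf.cast(X, tf.float32))
           .shuffle(X.shape[0])
           .batch(BATCH_SIZE))

_ = trans_dist.log_prob(tf.cast(X[:1], tf.float32))  # one call outside the loop so the bijector's variables get built

@tf.function
def train_step(batch):
    with tf.GradientTape() as tape:
        nll = -tf.reduce_mean(trans_dist.log_prob(batch))  # negative log-likelihood of the (batch_size, 2) batch
    grads = tape.gradient(nll, trans_dist.trainable_variables)
    optimizer.apply_gradients(zip(grads, trans_dist.trainable_variables))
    return nll

for epoch in range(NEPOCHS):
    for batch in dataset:
        loss = train_step(batch)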
Thanks
####### The test on MAF, IAF and NVP ##########
import tensorflow as tf
import tensorflow_probability as tfp

tfd = tfp.distributions
tfb = tfp.bijectors
tfk = tf.keras
tfkl = tf.keras.layers
DTYPE = tf.float32

base_dist = tfd.MultivariateNormalDiag(loc=tf.zeros([2], DTYPE), name='base dist')
x_ = tfkl.Input(shape=(2,), dtype=tf.float32)

flow_bijector_IAF = tfb.Invert(tfb.MaskedAutoregressiveFlow(
    name='MAF',
    shift_and_log_scale_fn=tfb.AutoregressiveNetwork(
        params=2, hidden_units=[512, 512], activation='relu')))

flow_bijector_MAF = tfb.MaskedAutoregressiveFlow(
    name='MAF',
    shift_and_log_scale_fn=tfb.AutoregressiveNetwork(
        params=2, hidden_units=[128, 128], activation='relu'))

flow_bijector_NVP = tfb.RealNVP(
    num_masked=1,
    shift_and_log_scale_fn=tfb.real_nvp_default_template(hidden_layers=[512, 512]),
    name='NVP')
def test(aBij):
    name = aBij.name
    try:
        print(f">>>>>>>>> START {name} ")
        trans_dist = tfd.TransformedDistribution(
            distribution=base_dist,
            bijector=aBij)
        log_prob_ = trans_dist.log_prob(x_)
        print(f"{name}:", log_prob_)
        model = tfk.Model(x_, log_prob_)
        print(f"<<<<<<<<< END {name} ")
    except Exception as e:
        print("Exception: " + str(e))

for aBij in [flow_bijector_MAF, flow_bijector_IAF, flow_bijector_NVP]:
    test(aBij)
Here is the output:
>>>>>>>>> START MAF
MAF: KerasTensor(type_spec=TensorSpec(shape=(None,), dtype=tf.float32, name=None), name='tf.__operators__.add_165/AddV2:0', description="created by layer 'tf.__operators__.add_165'")
<<<<<<<<< END MAF
>>>>>>>>> START invert_MAF
Exception: You are passing KerasTensor(type_spec=TensorSpec(shape=(), dtype=tf.int32, name=None), inferred_value=[2], name='tf.math.reduce_prod_8/Prod:0', description="created by layer 'tf.math.reduce_prod_8'"), an intermediate Keras symbolic input/output, to a TF API that does not allow registering custom dispatchers, such as `tf.cond`, `tf.function`, gradient tapes, or `tf.map_fn`. Keras Functional model construction only supports TF API calls that *do* support dispatching, such as `tf.math.add` or `tf.reshape`. Other APIs cannot be called directly on symbolic Kerasinputs/outputs. You can work around this limitation by putting the operation in a custom Keras layer `call` and calling that layer on this symbolic input/output.
>>>>>>>>> START NVP
NVP: KerasTensor(type_spec=TensorSpec(shape=(None,), dtype=tf.float32, name=None), name='tf.__operators__.add_171/AddV2:0', description="created by layer 'tf.__operators__.add_171'")
Exception: The following are legacy tf.layers.Layers:
<keras.legacy_tf_layers.core.Dense object at 0x7f04081f00d0>
<keras.legacy_tf_layers.core.Dense object at 0x7f040817d150>
<keras.legacy_tf_layers.core.Dense object at 0x7f04081f0650>
<keras.legacy_tf_layers.core.Dense object at 0x7f0408105a50>
<keras.legacy_tf_layers.core.Dense object at 0x7f04081013d0>
<keras.legacy_tf_layers.core.Dense object at 0x7f0408168990>
To use keras as a framework (for instance using the Network, Model, or Sequential classes), please use the tf.keras.layers implementation instead. (Or, if writing custom layers, subclass from tf.keras.layers rather than tf.layers)
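For completeness, the Invert(MAF) error message itself suggests moving the log_prob call into a custom Keras layer's call(). This is roughly what I plan to try next (a sketch only; LogProbLayer and x_in are names I made up):

class LogProbLayer(tfkl.Layer):
    """Runs dist.log_prob inside call(), so it is traced instead of applied to symbolic Keras tensors."""
    def __init__(self, dist, **kwargs):
        super().__init__(**kwargs)
        self.dist = dist

    def call(self, x):
        return self.dist.log_prob(x)

trans_dist = tfd.TransformedDistribution(distribution=base_dist, bijector=flow_bijector_IAF)
x_in = tfkl.Input(shape=(2,), dtype=tf.float32)
log_prob_ = LogProbLayer(trans_dist)(x_in)
model = tfk.Model(x_in, log_prob_)

If that works it should also help the RealNVP case, although the legacy tf.layers message above probably means real_nvp_default_template would still need to be replaced by a tf.keras-based shift_and_log_scale_fn.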