How to Implement Deconvolution from (10, 10, 128) to (320, 320, 3)

I'm a novice using tensorflow==2.5.0. I want to implement a deconvolution network that maps data of shape (10, 10, 128) to (320, 320, 3). My code is as follows:

def generator(values_dim):
    input_values_dim = Input(shape=(values_dim,))
    x = Dense(units=256, activation='relu')(input_values_dim)
    x = Dense(units=1024, activation='relu')(x)
    x = Dense(10 * 10 * 128, activation='relu')(x)
    x = keras.layers.Reshape([10, 10, 128])(x)
    x = keras.layers.Conv2DTranspose(128, 4, strides=2, padding="SAME", activation="relu")(x)
    generator = Model(input_values_dim, x)
    return generator
Generator = generator(opt.latent_dim)
Generator.summary()

But I received the following error:

WARNING:tensorflow:
The following Variables were used a Lambda layer's call (tf.nn.conv2d_transpose_1), but
are not present in its tracked objects:
  <tf.Variable 'conv2d_transpose_1/kernel:0' shape=(4, 4, 128, 128) dtype=float32>
It is possible that this is intended behavior, but it is more likely
an omission. This is a strong indication that this layer should be
formulated as a subclassed Layer rather than a Lambda layer.
WARNING:tensorflow:
The following Variables were used a Lambda layer's call (tf.nn.bias_add_1), but
are not present in its tracked objects:
  <tf.Variable 'conv2d_transpose_1/bias:0' shape=(128,) dtype=float32>
It is possible that this is intended behavior, but it is more likely
an omission. This is a strong indication that this layer should be
formulated as a subclassed Layer rather than a Lambda layer.
Model: "model_6"
__________________________________________________________________________________________________
Layer (type)                    Output Shape         Param #     Connected to                     
==================================================================================================
input_4 (InputLayer)            [(None, 10)]         0                                            
__________________________________________________________________________________________________
dense_9 (Dense)                 (None, 256)          2816        input_4[0][0]                    
__________________________________________________________________________________________________
dense_10 (Dense)                (None, 1024)         263168      dense_9[0][0]                    
__________________________________________________________________________________________________
dense_11 (Dense)                (None, 12800)        13120000    dense_10[0][0]                   
__________________________________________________________________________________________________
tf.compat.v1.shape_2 (TFOpLambd (2,)                 0           dense_11[0][0]                   
__________________________________________________________________________________________________
tf.__operators__.getitem_3 (Sli ()                   0           tf.compat.v1.shape_2[0][0]       
__________________________________________________________________________________________________
tf.reshape_1 (TFOpLambda)       (None, 10, 10, 128)  0           dense_11[0][0]                   
                                                                 tf.__operators__.getitem_3[0][0] 
__________________________________________________________________________________________________
tf.compat.v1.shape_3 (TFOpLambd (4,)                 0           tf.reshape_1[0][0]               
__________________________________________________________________________________________________
tf.__operators__.getitem_4 (Sli ()                   0           tf.compat.v1.shape_3[0][0]       
__________________________________________________________________________________________________
tf.stack_1 (TFOpLambda)         (4,)                 0           tf.__operators__.getitem_4[0][0] 
__________________________________________________________________________________________________
tf.nn.conv2d_transpose_1 (TFOpL (None, 20, 20, 128)  0           tf.reshape_1[0][0]               
                                                                 tf.stack_1[0][0]                 
__________________________________________________________________________________________________
tf.nn.bias_add_1 (TFOpLambda)   (None, 20, 20, 128)  0           tf.nn.conv2d_transpose_1[0][0]   
__________________________________________________________________________________________________
tf.nn.relu_1 (TFOpLambda)       (None, 20, 20, 128)  0           tf.nn.bias_add_1[0][0]           
==================================================================================================
Total params: 13,385,984
Trainable params: 13,385,984
Non-trainable params: 0

I'm not sure what caused this, and I still haven't reached my final goal. How can I get an output with a shape of (320, 320, 3)? Please help me, thank you!

In addition, I found another interesting thing: when I use the following code on the same tf==2.5.0 version, I do not receive the warning. I don't know what the difference is between building the network with Model (the functional API) and with Sequential. Can you explain it to me?

Generator2 = keras.models.Sequential([
    keras.layers.Dense(1024, input_shape=[opt.latent_dim], activation="relu"),
    keras.layers.Dense(5024, activation="relu"),
    keras.layers.Dense(10 * 10 * 128,
                       activation=keras.layers.LeakyReLU(alpha=0.2)),
    keras.layers.Reshape([10, 10, 128]),
    keras.layers.Conv2DTranspose(128, kernel_size=4, strides=2, padding="SAME",
                                 activation=keras.layers.LeakyReLU(alpha=0.2)),
    keras.layers.Conv2DTranspose(128, kernel_size=4, strides=4, padding="SAME",
                                 activation=keras.layers.LeakyReLU(alpha=0.2)),
    keras.layers.Conv2DTranspose(128, kernel_size=4, strides=2, padding="SAME",
                                 activation=keras.layers.LeakyReLU(alpha=0.2)),
    keras.layers.Conv2DTranspose(3, kernel_size=4, strides=2, padding="SAME",
                                 activation="tanh"),
])
Generator2.summary()
Model: "sequential"
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
dense (Dense)                (None, 1024)              11264     
_________________________________________________________________
dense_1 (Dense)              (None, 5024)              5149600   
_________________________________________________________________
dense_2 (Dense)              (None, 12800)             64320000  
_________________________________________________________________
reshape_13 (Reshape)         (None, 10, 10, 128)       0         
_________________________________________________________________
conv2d_transpose_8 (Conv2DTr (None, 20, 20, 128)       262272    
_________________________________________________________________
conv2d_transpose_9 (Conv2DTr (None, 80, 80, 128)       262272    
_________________________________________________________________
conv2d_transpose_10 (Conv2DT (None, 160, 160, 128)     262272    
_________________________________________________________________
conv2d_transpose_11 (Conv2DT (None, 320, 320, 3)       6147      
=================================================================
Total params: 70,273,827
Trainable params: 70,273,827
Non-trainable params: 0
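
If I read the summary correctly, this version reaches (320, 320, 3) because with padding="same" each Conv2DTranspose multiplies the spatial dimensions by its stride, so 10 × 2 × 4 × 2 × 2 = 320. A quick check of that arithmetic:

# With padding="same", a Conv2DTranspose scales H and W by its stride.
size = 10
for stride in (2, 4, 2, 2):
    size *= stride
print(size)  # 320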

Hi @wuhaibin833, and welcome to the TensorFlow forum.
Several people reported this issue back in May 2021 when v2.5 was released. If you search online, you'll find many reports in which the issue was not reproducible.
More than two years after that release, can't you just upgrade your TensorFlow version?
Thank you.

Thank you for your suggestion, but I am currently using tf 2.5 and have not considered upgrading. Also, I am more interested in the cause of this issue and in how to solve it without upgrading my tf version.

Hi @wuhaibin833.
I do believe you can safely ignore this message (it is a warning, not an error).
To get rid of it, something like the following should do it:

import tensorflow as tf
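# Raise the log level threshold so WARNING-level messages are no longer printed.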
tf.get_logger().setLevel('ERROR')
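
As for a possible cause: in TF 2.5, this kind of TFOpLambda wrapping (tf.reshape, tf.nn.conv2d_transpose, ...) is often reported when layers from the standalone keras package are mixed with tensorflow.keras ones in the same functional model, which the keras.layers.* calls in your generator suggest might be happening. Here is a minimal sketch only, assuming consistent tensorflow.keras imports and borrowing the stride pattern from your Sequential version to reach (320, 320, 3):

from tensorflow.keras import Model
from tensorflow.keras.layers import Input, Dense, Reshape, Conv2DTranspose

def build_generator(latent_dim):
    inputs = Input(shape=(latent_dim,))
    x = Dense(256, activation="relu")(inputs)
    x = Dense(1024, activation="relu")(x)
    x = Dense(10 * 10 * 128, activation="relu")(x)
    x = Reshape((10, 10, 128))(x)
    # With padding="same", each transposed convolution multiplies the
    # spatial size by its stride: 10 -> 20 -> 80 -> 160 -> 320.
    x = Conv2DTranspose(128, 4, strides=2, padding="same", activation="relu")(x)
    x = Conv2DTranspose(128, 4, strides=4, padding="same", activation="relu")(x)
    x = Conv2DTranspose(128, 4, strides=2, padding="same", activation="relu")(x)
    outputs = Conv2DTranspose(3, 4, strides=2, padding="same", activation="tanh")(x)
    return Model(inputs, outputs)

gen = build_generator(10)  # 10 is assumed here; substitute your opt.latent_dim
gen.summary()              # the last layer should show (None, 320, 320, 3)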

That being said, to make it easier for people to go deeper into the issue you are facing, can you please share a minimal reproducible example in Colab?
Thank you.