I am currently working on a hybrid quantum-classical neural network (quantum machine learning). The classical part of the network is defined with TensorFlow, and it also needs to update the parameters of the quantum circuit. Because of this I cannot use the .fit() method (I have a layer that cannot be defined in TensorFlow).
So I need to do the backpropagation manually with tf.GradientTape: define the weights explicitly, do a forward pass, calculate the loss, compute the gradients, and finally update the weights. The problem lies in the gradient computation.
import numpy as np
import tensorflow as tf
from tensorflow import keras

epochs = 100
learning_rate = 0.001
input_data = batch_images_grayscale
target = batch_labels_grayscale
target = target[:, np.newaxis]

# the classical model
model = keras.models.Sequential([
    keras.layers.Conv2D(64, (3, 3), activation='relu'),
    keras.layers.MaxPooling2D((2, 2)),
    keras.layers.Conv2D(64, (3, 3), activation='relu'),
    keras.layers.MaxPooling2D((2, 2)),
    keras.layers.Flatten(),
    keras.layers.Dense(10, activation="softmax"),
    keras.layers.Dense(1, activation="sigmoid")
])

# this is to initialize the weights of the model's layers
model(batch_images_grayscale)

# defining the weight matrices explicitly
weights = model.get_weights()
weight_shape = []
for i, weight in enumerate(weights):
    weight_shape.append(weight.shape)

random_weights = []
for shape in weight_shape:
    random_weight = np.random.randn(*shape)
    random_weights.append(random_weight)
weights = random_weights

# wrap the numpy arrays in trainable tf.Variables
tf_weights = [tf.Variable(weight, dtype=tf.float32, trainable=True) for weight in weights]
weights = tf_weights

# the list which will contain the loss of the whole model
loss_model = []

# Manual backpropagation
for epoch in range(epochs):
    with tf.GradientTape() as tape:
        tape.watch(weights)
        # Forward pass
        model.set_weights(weights)
        predictions = model(input_data)
        loss = tf.keras.losses.binary_crossentropy(target, predictions)

    # Compute gradients
    gradients = tape.gradient(loss, weights)
    print(gradients)

    # Update weights
    for i in range(len(weights)):
        weights[i].assign_sub(learning_rate * gradients[i])

    # Set the updated weights to the model
    model.set_weights(weights)
The issue lies in the tape.gradient(loss, weights) part. It returns None when I pass in weights, my list of TensorFlow variables. If I pass in model.trainable_variables instead, the code works fine, even though weights and model.trainable_variables look equivalent to me. The reason I cannot simply use model.trainable_variables is that I have to define the weights explicitly as numpy arrays for the quantum layer.
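For reference, here is a minimal sketch of the comparison I am describing (same names as in the code above; persistent=True is only there so that tape.gradient can be called twice on the same tape):

with tf.GradientTape(persistent=True) as tape:
    tape.watch(weights)
    # forward pass, exactly as in the training loop above
    model.set_weights(weights)
    predictions = model(input_data)
    loss = tf.keras.losses.binary_crossentropy(target, predictions)

# prints a list of None, one entry per weight
print(tape.gradient(loss, weights))

# prints proper gradient tensors
print(tape.gradient(loss, model.trainable_variables))

del tape  # release the resources held by the persistent tape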
How do I resolve this? Thanks for your time.