Hi,
Assume model is a fully connected, 8-layer deep Keras sequential model that I have pre-trained and that takes a two-dimensional tensor [t, x] as input.
I am interested in obtaining the partial derivative of this model with respect to the input component x, also as a sequential model (one that depends on the original model and on the graph corresponding to its gradient).
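To make the goal concrete: the kind of object I am after behaves roughly like the hand-written wrapper sketched below (DerivativeModel is just an illustrative name I made up, and it is a subclassed model rather than a true Sequential one). What I want to know is whether something like it can be obtained from the gradient graph itself rather than written by hand.

import tensorflow as tf

# Illustrative only: a hand-written wrapper whose output is du/dx of the
# wrapped model. Not my actual code, just the behavior I am after.
class DerivativeModel(tf.keras.Model):
    def __init__(self, base_model, **kwargs):
        super().__init__(**kwargs)
        self.base_model = base_model

    def call(self, inp):
        t, x = inp[:, 0:1], inp[:, 1:2]    # split the [t, x] input
        with tf.GradientTape() as tape:
            tape.watch(x)                  # x is a plain tensor, so watch it explicitly
            u = self.base_model(tf.concat([t, x], axis=1))
        return tape.gradient(u, x)         # du/dx, shape (batch, 1)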
With the following code, I am able to visualize the graph of this model and of its partial derivative in TensorBoard:
def compute_derivative(model, inp):
    with tf.GradientTape(persistent=True) as tape:
        t, x = inp[:, 0:1], inp[:, 1:2]   # split the input into its t and x components
        tape.watch(t)
        tape.watch(x)
        u = model(tf.stack([t[:, 0], x[:, 0]], axis=1))
    u_x = tape.gradient(u, x)             # du/dx
    return u_x
@tf.function
def traceme(x):
    return compute_derivative(model, x)

logdir = "/tmp/logs"
writer = tf.summary.create_file_writer(logdir)
tf.summary.trace_on(graph=True, profiler=True)
traceme(tf.zeros((1, 2)))
with writer.as_default():
    tf.summary.trace_export(name="trace", step=0, profiler_outdir=logdir)
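(To make the snippet self-contained while debugging, I use a placeholder for the pre-trained network; the eight Dense layers below are a stand-in, not my actual architecture. The trace then shows up under the Graphs tab after running tensorboard --logdir /tmp/logs.)

# Placeholder stand-in for the pre-trained 8-layer fully connected model.
model = tf.keras.Sequential(
    [tf.keras.layers.Dense(20, activation="tanh", input_shape=(2,))]
    + [tf.keras.layers.Dense(20, activation="tanh") for _ in range(6)]
    + [tf.keras.layers.Dense(1)]
)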
And the output looks like this:
So I can see that theoretically this should be accessible somehow.
But given u_x in the code above, how do I get back a sequential model that generates that partial derivative? I feel like this requires some graph magic I am not aware of.
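(To be explicit about what I mean by "accessible": I assume the gradient ops end up in the traced concrete function's graph, e.g. something like the lines below, but that only hands me a raw tf.Graph/GraphDef, not a model I can keep working with.)

# Reaching the traced graph that (I assume) contains the gradient ops.
concrete = tf.function(lambda inp: compute_derivative(model, inp)).get_concrete_function(
    tf.TensorSpec(shape=[None, 2], dtype=tf.float32)
)
graph_def = concrete.graph.as_graph_def()  # raw graph, not a Keras model
print(len(graph_def.node))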
Any pointers would be super helpful. Thank you so much!