Autodiff compatibility required for Hamiltonian Monte Carlo?

I’d like to use the TensorFlow implementation of Hamiltonian Monte Carlo (HMC) with a biochemistry model I’ve written. When I try to sample from the likelihood distribution for the fit of this model to some data, I get an error traceback pointing at tensorflow_probability/python/math/gradient.py, tensorflow/python/eager/backprop.py, and so on.

The model code currently uses SciPy to integrate a system of ODEs, so it isn’t compatible with tf.GradientTape(). I thought that was OK because the API documentation only stipulates the argument target_log_prob_fn must be a “Python callable” and doesn’t mention anything about automatic differentiation. Do I need to rewrite my model to be compatible with TensorFlow autodiff?


Hello @Jack_Elsey
Thank you for using TensorFlow.
Yes — HMC needs the gradient of the target. Your target_log_prob_fn must take tensors as inputs and return a scalar tensor for the log probability, and the HMC kernel differentiates it internally using tf.GradientTape(). That means every operation inside it, including the ODE integration, must be differentiable TensorFlow code; a SciPy integrator breaks the tape, which is the error you are seeing. You can either replace the SciPy solver with a TensorFlow-based one (for example tfp.math.ode.DormandPrince) or switch to a gradient-free kernel such as tfp.mcmc.RandomWalkMetropolis.

A generic sketch could be:

def target_log_prob_fn(parameters):
    # Integrate the ODE system with TensorFlow ops (e.g. a
    # tfp.math.ode solver) so the result stays differentiable.
    state = integrate_ode_in_tf(parameters)  # hypothetical TF-based helper
    # Return a scalar tensor; the HMC kernel takes the gradient
    # internally, so no explicit tf.GradientTape() is needed here.
    return compute_log_likelihood(state, observed_data)
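To see why the gradient is unavoidable, here is a dependency-free toy sketch (my own 1-D implementation, not TFP's code) of the leapfrog integrator at the heart of HMC — it calls the gradient of the log-density at every step, which is exactly what TFP obtains via autodiff from your target_log_prob_fn:

```python
import math
import random

def hmc_sample(log_prob, grad_log_prob, q0, n_samples=2000,
               step_size=0.2, n_leapfrog=10, seed=0):
    """Toy 1-D HMC sampler. Note that the leapfrog loop below needs
    grad_log_prob at every step -- this is why TFP's HMC kernel must be
    able to differentiate target_log_prob_fn, and why non-TensorFlow
    code (like a SciPy ODE solver) inside it fails."""
    rng = random.Random(seed)
    q = q0
    samples = []
    for _ in range(n_samples):
        p = rng.gauss(0.0, 1.0)           # resample momentum
        q_new, p_new = q, p
        # Leapfrog integration of the Hamiltonian dynamics.
        p_new += 0.5 * step_size * grad_log_prob(q_new)
        for step in range(n_leapfrog):
            q_new += step_size * p_new
            if step != n_leapfrog - 1:
                p_new += step_size * grad_log_prob(q_new)
        p_new += 0.5 * step_size * grad_log_prob(q_new)
        # Metropolis correction on the Hamiltonian
        # (negative log-prob plus kinetic energy).
        h_old = -log_prob(q) + 0.5 * p * p
        h_new = -log_prob(q_new) + 0.5 * p_new * p_new
        if rng.random() < math.exp(min(0.0, h_old - h_new)):
            q = q_new
        samples.append(q)
    return samples

# Standard normal target with an analytic gradient.
samples = hmc_sample(lambda q: -0.5 * q * q, lambda q: -q, q0=0.0)
mean = sum(samples) / len(samples)
var = sum((s - mean) ** 2 for s in samples) / len(samples)
```

In TFP you never write this loop yourself — tfp.mcmc.HamiltonianMonteCarlo does it for you — but it can only do so if the log-prob function is differentiable end to end.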