Hey everyone. I'm new to this community and to TensorFlow, so please forgive me if I'm a little less technical. I am trying to do variational inference (VI) using TFP. A lot of work has been done on using divergences other than the KL divergence to perform VI, and I noticed that the tfp.vi module provides functions for computing other divergences, such as the Jensen–Shannon divergence. However, these take a Tensor as input, not a Distribution object, whereas the KL divergence can be computed directly between two distribution objects p and q as p.kl_divergence(q). My question is: how can I use a different divergence between distribution objects in the same way the KL divergence is used?
Thanks for reading this far.
TL;DR How to use a divergence measure other than KL to perform VI?