Hi Folks,

I’m a newbie when it comes to TensorFlow Probability and could use some expert advice / guidance. I’d like to understand the math behind `log_prob` for a normal distribution with batch_shape=[3] and event_shape=[2]. When a 3x2 matrix is passed to `log_prob`, how does it compute a final output of shape (3,)? I understand the batch result, but what is `log_prob` doing with the 2 elements per batch item? I’m interested in learning the math behind this function.

-Dipesh


For a MultivariateNormal distribution of dimension (event_shape) N, the samples are vectors in N-dimensional Euclidean space. `log_prob` called on one such vector `x` yields a single scalar – the log of the probability density of the MVN at that `x`. If your mean vector is `m` and covariance matrix is `C`, this log_prob is

`-1/2 (x - m)^T C^{-1} (x - m) - N/2 log(2π) - 1/2 log|C|`

More info here: Multivariate normal distribution - Wikipedia. Hopefully this answers some of your question! I know you said you were OK with batch_shape, but it may still be worth reading through this tutorial.
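To make the batch_shape=[3], event_shape=[2] case from the question concrete, here is a minimal sketch of that formula in NumPy, checked against SciPy's `multivariate_normal`. The means, covariance, and sample points below are hypothetical values chosen just for illustration (three independent 2-D standard normals):

```python
import numpy as np
from scipy.stats import multivariate_normal

# Hypothetical setup: batch_shape=[3], event_shape=[2] --
# three independent 2-D Gaussians, here all standard normal.
N = 2
means = np.zeros((3, N))
cov = np.eye(N)

# One 2-vector per batch member (shape (3, 2)), as in the question.
x = np.array([[0.0, 0.0],
              [1.0, -1.0],
              [0.5, 2.0]])

# log p(x) = -1/2 (x-m)^T C^{-1} (x-m) - N/2 log(2*pi) - 1/2 log|C|
def mvn_log_prob(xi, m, C):
    d = xi - m
    maha = d @ np.linalg.inv(C) @ d          # Mahalanobis term
    _, logdet = np.linalg.slogdet(C)         # stable log-determinant
    return -0.5 * maha - 0.5 * N * np.log(2 * np.pi) - 0.5 * logdet

ours = np.array([mvn_log_prob(x[i], means[i], cov) for i in range(3)])
ref = np.array([multivariate_normal(means[i], cov).logpdf(x[i]) for i in range(3)])

print(ours.shape)              # (3,) -- one scalar per batch member
assert np.allclose(ours, ref)  # matches the reference implementation
```

Each row of the 3x2 input is treated as one 2-dimensional event, reduced to a single log-density scalar, which is why the output has shape (3,).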

I came to search this topic because the TensorFlow website uses the example `tfd.log_prob(0.)`, and I was confused: can you put 0 into a log, as in log(0.)? The website doesn’t mention anything else. It wasn’t until I saw this comment that I understood it’s the log of the density function’s value at x = 0.
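To spell that out: for a standard normal, `log_prob(0.)` is log(pdf(0)) = log(1/√(2π)) ≈ -0.919, not log(0). A quick sketch using SciPy's `norm` as a stand-in for `tfd.Normal`:

```python
import numpy as np
from scipy.stats import norm

# log_prob(0.) is the log of the DENSITY at x = 0, not log(0).
# For a standard normal: pdf(0) = 1/sqrt(2*pi), so
# log pdf(0) = -0.5 * log(2*pi) ~= -0.9189.
val = norm(loc=0.0, scale=1.0).logpdf(0.0)
assert np.isclose(val, -0.5 * np.log(2 * np.pi))
print(val)
```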

Such a little thing, a one-line explanation, can save everyone tons of time searching, but unfortunately this lack of explanation at key steps is all too common.
