Does .sample(n) of a TensorFlow Probability model repeatedly re-sample the model parameters?

I built a BNN model similar to the Keras example, which accounts for aleatoric and epistemic uncertainty:

However, I want to compute the two uncertainties separately. In an outer loop I compute several predictions, each time sampling new weights and biases from their distributions. Across these predictions I then compute the epistemic (model) uncertainty.
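As a rough sketch of that outer loop (hedged: `model`, `x`, and the layer choices are assumptions — a Keras model with probabilistic weight layers such as tfp.layers.DenseVariational, ending in tfp.layers.IndependentNormal, so each forward pass draws a fresh set of weights):

```python
import numpy as np

# Assumption: each call to `model` re-samples the weights/biases of the
# probabilistic layers and returns a TFP distribution over the outputs.
n_weight_draws = 50
means_per_draw = []

for _ in range(n_weight_draws):
    dist = model(x)                          # one forward pass = one weight sample
    means_per_draw.append(dist.mean().numpy())

means_per_draw = np.stack(means_per_draw)    # (n_weight_draws, n_points, 1)
epistemic_std = means_per_draw.std(axis=0)   # spread across weight draws
```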

To determine the aleatoric uncertainty I use a tfp.layers.IndependentNormal() layer, so the model returns a distribution object. Since my data is normalized, I can’t simply use the .stddev() method, because applying the inverse transformation to it would not give the correct values in the original scale (that only works for .mean()). Instead, I need to sample from the distribution object, inverse-transform the samples, and then compute the standard deviation in the original range.
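A sketch of that step, assuming the targets were scaled with an sklearn-style scaler (`scaler` is a placeholder, as are `model` and `x`):

```python
import numpy as np

dist = model(x)                          # single forward pass: one weight sample
samples = dist.sample(100).numpy()       # (100, n_points, 1), drawn from the
                                         # output distribution of that one pass

# Undo the target normalization per sample, then take the std in original units.
flat = samples.reshape(-1, samples.shape[-1])
orig = scaler.inverse_transform(flat).reshape(samples.shape)
aleatoric_std = orig.std(axis=0)         # std over the 100 samples, per input
```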

Now I’m not sure whether the .sample() method, e.g. .sample(100) for n = 100 samples, also draws 100 new sets of model parameters from the weight and bias distributions. That would contaminate the aleatoric uncertainty with the epistemic uncertainty.
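One way to check this empirically (same placeholder `model` and `x` as above) is to compare the spread within one returned distribution object against the spread across separate forward passes:

```python
import numpy as np

dist = model(x)                  # one forward pass: weights are sampled once here
s = dist.sample(100).numpy()     # 100 draws from this distribution's fixed loc/scale

# Spread within one forward pass (output noise only, in normalized space):
print(s.std(axis=0)[:3])

# Spread across separate forward passes (weights/biases re-sampled each call):
means = np.stack([model(x).mean().numpy() for _ in range(10)])
print(means.std(axis=0)[:3])
```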

I also saw that .sample() has a seed argument, but I’m not completely sure whether this prevents re-sampling of the model parameter distributions for computations on a GPU.

Hi @MeiPau

Thank you for using TensorFlow.
For the aleatoric case: after Bayesian inference, evaluate the model with each weight sample on the test data and then compute the standard deviation of the predictions across these samples.

For the epistemic case, use parameter samples from the posterior: for each parameter draw, evaluate the model with only that set of weights, and then compute the standard deviation of the predictions across independent validation sets.
This way the two uncertainties can be distinguished through how the parameter samples are used.
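A minimal sketch of this idea (hedged: `model` and `x` are placeholders as in the question; it assumes each forward pass uses one posterior weight draw, returns a distribution, and works in normalized space):

```python
import numpy as np

n_draws = 50
means, stddevs = [], []
for _ in range(n_draws):
    dist = model(x)                      # each call uses one posterior weight draw
    means.append(dist.mean().numpy())
    stddevs.append(dist.stddev().numpy())

means, stddevs = np.stack(means), np.stack(stddevs)
epistemic_std = means.std(axis=0)        # variation of the predicted mean across draws
aleatoric_std = stddevs.mean(axis=0)     # average predicted noise level per input
```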