I built a BNN model similar to the Keras example, which accounts for both aleatory and epistemic uncertainty:
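(Roughly like the following sketch; the layer sizes, the prior/posterior helpers, and the `kl_weight` are placeholders taken from that example.)

```python
import tensorflow as tf
import tensorflow_probability as tfp

tfd = tfp.distributions

# Prior over weights/biases: fixed standard normal.
def prior(kernel_size, bias_size, dtype=None):
    n = kernel_size + bias_size
    return tf.keras.Sequential([
        tfp.layers.DistributionLambda(
            lambda t: tfd.MultivariateNormalDiag(
                loc=tf.zeros(n), scale_diag=tf.ones(n)))
    ])

# Learnable posterior over weights/biases.
def posterior(kernel_size, bias_size, dtype=None):
    n = kernel_size + bias_size
    return tf.keras.Sequential([
        tfp.layers.VariableLayer(
            tfp.layers.MultivariateNormalTriL.params_size(n), dtype=dtype),
        tfp.layers.MultivariateNormalTriL(n),
    ])

def build_bnn(n_features, train_size):
    inputs = tf.keras.Input(shape=(n_features,))
    x = tfp.layers.DenseVariational(
        units=8,
        make_prior_fn=prior,
        make_posterior_fn=posterior,
        kl_weight=1 / train_size,   # scales the KL term, as in the example
        activation="sigmoid",
    )(inputs)
    # Two outputs parameterize the Normal (mean and transformed stddev).
    params = tf.keras.layers.Dense(
        tfp.layers.IndependentNormal.params_size(1))(x)
    outputs = tfp.layers.IndependentNormal(1)(params)  # returns a distribution
    return tf.keras.Model(inputs, outputs)
```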
However, I want to compute the two uncertainties separately. In an outer loop I compute several predictions, each one sampling new weights and biases from their distribution functions. From the spread between these predictions I compute the epistemic (model) uncertainty.
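A sketch of that outer loop (`model` and `x_test` as above): each call to `model(x_test)` re-samples weights and biases from their posteriors, so the spread of the predicted means across the calls estimates the epistemic uncertainty.

```python
import numpy as np

n_outer = 100
means = np.stack([model(x_test).mean().numpy() for _ in range(n_outer)])
epistemic_std = means.std(axis=0)   # per-point epistemic std (normalized scale)
```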
To determine the aleatory uncertainty I use a tfp.layers.IndependentNormal() layer, so the model returns a distribution object. Since my data is normalized, I can't simply use the .stddev() method: the inverse transformation would not map the standard deviation back to the original scale correctly (that would only work for .mean()). Instead I need to sample from the distribution object, inverse-normalize the samples, and then calculate the standard deviation in the original range.
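A sketch of that aleatory estimate for one fixed set of sampled weights; `scaler` is a placeholder for whatever normalizer was fit on the targets (e.g. a scikit-learn StandardScaler with a single target column):

```python
dist = model(x_test)                 # one forward pass -> distribution object
samples = dist.sample(100).numpy()   # shape (100, n_points, 1)

flat = samples.reshape(-1, 1)        # (100 * n_points, 1) for the scaler
samples_orig = scaler.inverse_transform(flat).reshape(samples.shape)
aleatory_std = samples_orig.std(axis=0)   # per-point std in the original scale
```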
Now I'm not sure whether the .sample() method, e.g. .sample(100) for n = 100 samples, also draws 100 new sets of model parameters from the distribution functions of the weights and biases. That would contaminate the aleatory uncertainty with the epistemic uncertainty.
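In other words (same placeholder names as above):

```python
dist = model(x_test)       # weights and biases are sampled in this forward pass
draws = dist.sample(100)   # does this draw only observation noise, or also
                           # 100 fresh sets of weights/biases?
```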
I also saw that .sample() has a seed argument, but I am not completely sure whether this prevents re-sampling of the model-parameter distributions when the computation runs on a GPU.
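I.e. something like the following; whether this pins down all sources of randomness on a GPU is exactly what I am unsure about:

```python
draws = dist.sample(100, seed=42)   # seeded draw from the returned distribution
```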