Reliability Analysis under Parameter Uncertainty - A Fully Bayesian Perspective

Incorporating epistemic uncertainty has been the subject of much recent research. I’d like to show a method that is in line with the Jaynesian view of probability.

The core idea of reliability analysis is computing the probability of failure of a system. For example, suppose that we have a system with random capacity \(R\) and load \(S\); that is, \(R\) and \(S\) are random variables with some distributions. We define failure as the event that the load exceeds the capacity (\(S>R\)), so we ask ourselves: what is the probability of \(S>R\)? More generally, the question is usually written as \[p_f = \Pr(R-S < 0),\]

where \(p_f\) is the probability of failure.
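
In its simplest form, if we are willing to assume the distributions of \(R\) and \(S\), \(p_f\) can be estimated by Monte Carlo sampling. A minimal sketch (the normal distributions and parameter values below are made up purely for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000

# Illustrative (made-up) capacity and load distributions.
R = rng.normal(loc=10.0, scale=1.5, size=n)  # capacity
S = rng.normal(loc=7.0, scale=1.0, size=n)   # load

# Failure is the event R - S < 0, so p_f is the fraction of failed samples.
p_f = np.mean(R - S < 0.0)
print(f"Monte Carlo estimate of p_f: {p_f:.2e}")
```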

There are a lot of resources on the above problem already. The issue is that we often (read, almost always) do not know the “true” distributions of \(R\) and \(S\), so we must employ some statistical estimation or fitting.

The Problem

Most reliability analyses compute the probability of failure on the basis that the selected distributions for \(R\) and \(S\), and the estimated distribution parameters, are the true ones. To illustrate, suppose that there is a single true value for the parameters. The conventional analysis estimates that value from data and then plugs the estimate in as if the estimation uncertainty did not exist.

In the Bayesian view, the parameters are themselves random variables, i.e., they are described by a probability distribution (the posterior, once data are observed) rather than by a single point value.
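
As a small, concrete illustration (using the normal model with known standard deviation that I adopt below, and a flat prior chosen only for this sketch; the observations are made up): with \(n\) observations of \(R\), the posterior of \(\mu_R\) is itself a normal distribution centred at the sample mean with standard deviation \(\sigma_R/\sqrt{n}\).

```python
import numpy as np

sigma_R = 1.5                                    # assumed known std. dev. of R
data_R = np.array([10.2, 9.8, 11.1, 10.5, 9.9])  # hypothetical observations of R

n = len(data_R)
post_mean = data_R.mean()        # posterior mean of mu_R (flat prior)
post_std = sigma_R / np.sqrt(n)  # posterior std. dev. of mu_R

# Under a flat prior: mu_R | data ~ Normal(post_mean, post_std**2)
print(f"mu_R | data ~ N({post_mean:.2f}, {post_std:.2f}^2)")
```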

Some works label the randomness due to parameter uncertainty as “epistemic”, and the inherent randomness of the variables \(R\) and \(S\) as “aleatoric”.

In this post, I will restrict the discussion to normal distributions for \(R\) and \(S\), with known standard deviations \(\sigma_R\) and \(\sigma_S\). The only unknowns are the means \(\mu_R\) and \(\mu_S\). This will allow us to obtain the exact probability of failure, and also to use the First Order Reliability Method (FORM) that is commonly applied to more complicated problems.
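
For reference, when \(\mu_R\) and \(\mu_S\) are treated as known, \(R - S\) is normal, so \(p_f = \Phi(-\beta)\) with \(\beta = (\mu_R - \mu_S)/\sqrt{\sigma_R^2 + \sigma_S^2}\), where \(\Phi\) is the standard normal CDF; for this linear limit state with normal variables, FORM returns the same \(\beta\). A short sketch with the same illustrative values as the Monte Carlo example above:

```python
import numpy as np
from scipy.stats import norm

mu_R, sigma_R = 10.0, 1.5  # illustrative values
mu_S, sigma_S = 7.0, 1.0

beta = (mu_R - mu_S) / np.sqrt(sigma_R**2 + sigma_S**2)  # reliability index
p_f = norm.cdf(-beta)                                    # exact probability of failure

print(f"beta = {beta:.3f}, p_f = {p_f:.3e}")
```

Up to sampling error, this should match the Monte Carlo estimate above.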

Bayesian Estimation

Posterior Predictive Distribution
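
As a sketch of the idea in the flat-prior, known-\(\sigma\) setting used in the earlier sketch (an illustration, not a full derivation): for a new realization \(R_{\text{new}}\), the posterior predictive distribution is normal with mean equal to the sample mean and variance \(\sigma_R^2(1 + 1/n)\), i.e. the known variance inflated by the posterior variance of \(\mu_R\).

```python
import numpy as np

sigma_R = 1.5                                    # assumed known std. dev. of R
data_R = np.array([10.2, 9.8, 11.1, 10.5, 9.9])  # hypothetical observations of R

n = len(data_R)
pred_mean = data_R.mean()                    # predictive mean
pred_std = sigma_R * np.sqrt(1.0 + 1.0 / n)  # predictive std. dev. (> sigma_R)

# Under a flat prior: R_new | data ~ Normal(pred_mean, pred_std**2)
print(f"R_new | data ~ N({pred_mean:.2f}, {pred_std:.2f}^2), sigma_R = {sigma_R}")
```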

Reliability Analysis
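
One way to fold the parameter uncertainty back into the probability of failure is to evaluate \(\Pr(R - S < 0)\) with the posterior predictive distributions. A sketch under the same flat-prior, known-\(\sigma\) assumptions as above (the data summaries are hypothetical, not results from this post), compared against the plug-in value that treats the sample means as the true means:

```python
import numpy as np
from scipy.stats import norm

sigma_R, xbar_R, n_R = 1.5, 10.3, 5  # hypothetical data summaries for R
sigma_S, xbar_S, n_S = 1.0, 7.0, 8   # hypothetical data summaries for S

# Plug-in: pretend the sample means are the true means.
beta_plugin = (xbar_R - xbar_S) / np.sqrt(sigma_R**2 + sigma_S**2)

# Predictive: each variance is inflated by the posterior variance of its mean.
var_pred = sigma_R**2 * (1 + 1 / n_R) + sigma_S**2 * (1 + 1 / n_S)
beta_pred = (xbar_R - xbar_S) / np.sqrt(var_pred)

print(f"plug-in:    beta = {beta_plugin:.3f}, p_f = {norm.cdf(-beta_plugin):.3e}")
print(f"predictive: beta = {beta_pred:.3f}, p_f = {norm.cdf(-beta_pred):.3e}")
```

Because the predictive variances are larger, \(\beta\) shrinks and \(p_f\) grows relative to the plug-in value, which is the qualitative effect discussed in the closing remarks below.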

Conclusions

Closing Remarks

It is important to realize that all probabilities are conditional on something (this is the Jaynesian view of probability). There is no such thing as the absolutely “correct” probability of failure. This is to say that works that do not consider epistemic uncertainty are not necessarily wrong; it is only that the probability of failure published in those works is conditional on zero epistemic uncertainty. This conditioning seems to be the norm, and I think most of our decision making using reliability analysis (e.g., a specified target reliability index) happens within this zero-epistemic-uncertainty “model universe”.

Therefore, even though what I have shown (like many other works on the same topic) is that the probability of failure is larger when epistemic uncertainty is considered, it is not a cause for alarm, since the “model universe” is different from that of works that consider no epistemic uncertainty. To push this point further, consider the almost unlimited number of different models we could have picked, e.g., different distributions for \(R\) and \(S\), different definitions of failure (i.e., the limit state function), and so on. And as a final note: does nature even distinguish epistemic from aleatoric uncertainty (quantum effects aside), or is the distinction just a product of our model building?