Reliability/Sample Size Calculation Based on Bayesian Inference:


I have written about sample size calculations many times before. One of the most common questions a statistician is asked is “how many samples do I need – is a sample size of 30 appropriate?” The appropriate answer to such a question is always – “it depends!”

In today’s post, I have attached a spreadsheet that calculates the reliability based on Bayesian inference. Ideally, one would want to have some confidence that the widgets being produced are x% reliable, or in other words, that it is x% probable that a widget will function as intended. There is the ubiquitous 90/90 or 95/95 confidence/reliability sample size table that is used for this purpose.

[Table: the familiar 90/90 and 95/95 confidence/reliability sample sizes]
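For reference, such tables follow from the zero-failure “success-run” relationship n = ln(1 - C) / ln(R). A minimal sketch in Python (my own illustration, not taken from the spreadsheet):

```python
# Classic frequentist success-run formula: with zero failures, the
# smallest n demonstrating reliability R at confidence C satisfies
# n >= ln(1 - C) / ln(R).
import math

def success_run_n(confidence, reliability):
    return math.ceil(math.log(1 - confidence) / math.log(reliability))

print(success_run_n(0.95, 0.95))  # 59 -> the familiar "59 samples" figure
print(success_run_n(0.90, 0.90))  # 22
```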

In Bayesian inference, we do not assume that the parameter (the value that we are estimating, such as reliability) is fixed. In the non-Bayesian (frequentist) world, the parameter is assumed to be fixed, and we need to take many samples of data to make an inference about it. For example, we may flip a coin 100 times and count the number of heads to estimate the probability of heads with that coin (if we believe it is a loaded coin). In the frequentist world, we may calculate confidence intervals. The confidence interval does not provide a lot of practical value.

My favorite explanation of the confidence interval uses the analogy of an archer. Let’s say that the archer shot an arrow and it hit the bulls-eye. We can draw a 3” circle around the hit and call that our confidence interval based on the first shot. Now let’s assume that the archer shot 99 more arrows and they all missed the bulls-eye. For each shot, we drew a 3” circle around the hit, resulting in 100 circles. A 95% confidence level simply means that 95 of the 100 circles drawn contain the true bulls-eye. In other words, if we repeated the study a lot of times, 95% of the confidence intervals calculated would contain the true parameter that we are after. The one study we actually did may or may not be among them.

Compared to this, in the Bayesian world we calculate the credible interval. This practically means that we can be 95% confident that the parameter is inside the 95% credible interval we calculated.
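To make the archer analogy concrete, here is a small simulation sketch (my addition, assuming Python with numpy): it repeats the 100-flip coin study many times, builds a textbook normal-approximation 95% confidence interval each time, and counts how often the interval captures the true probability.

```python
# Repeat the "flip a coin 100 times" study many times; each study yields
# one 95% confidence interval for p. Roughly 95% of those intervals
# should contain the true p -- but any single interval may not.
import numpy as np

rng = np.random.default_rng(0)
true_p, n, studies = 0.5, 100, 10_000

heads = rng.binomial(n, true_p, size=studies)   # one head-count per study
p_hat = heads / n
half = 1.96 * np.sqrt(p_hat * (1 - p_hat) / n)  # normal-approximation width
covered = (p_hat - half <= true_p) & (true_p <= p_hat + half)

print(f"{covered.mean():.1%} of the intervals contain the true p")  # close to 95%
```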

In the Bayesian world, we can have a prior belief and make an inference based on it. However, if your prior belief is very conservative, the Bayesian inference might make a slightly liberal inference. Similarly, if your prior belief is very liberal, the inference made will be slightly conservative. As the sample size goes up, the impact of this prior belief is minimized. A common method in Bayesian inference is to use the uninformative prior, which assumes equal likelihood for all the events. For binomial data, we can use the beta distribution to model our prior belief, because the beta distribution is the conjugate prior of the binomial: if the prior is Beta(a, b) and we observe s passes and f failures, the posterior is simply Beta(a + s, b + f). We will use (1, 1) for the uninformative prior. This is shown below:

[Figure: Beta(1, 1) uniform prior density]
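A minimal sketch of this conjugate update, assuming Python with scipy (the helper name reliability_lower_bound is mine, not the spreadsheet’s):

```python
# Beta-binomial update described above. The one-sided lower credible
# bound on reliability is the (1 - conf) quantile of the posterior beta.
from scipy.stats import beta

def reliability_lower_bound(a_prior, b_prior, n, rejects, conf=0.95):
    a_post = a_prior + (n - rejects)  # prior a + observed passes
    b_post = b_prior + rejects        # prior b + observed failures
    return beta.ppf(1 - conf, a_post, b_post)

print(beta.mean(1, 1))  # 0.5 -- the uninformative prior is 50/50
```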

For example, if we use 59 widgets as our samples and all of them met the inspection criteria, then we can calculate the 95% lower-bound credible interval as 95.13%. This assumes the (1, 1) beta values; the calculation is sketched below.
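Using the hypothetical helper defined above, the number can be reproduced:

```python
# Uniform Beta(1, 1) prior, 59 samples, 0 rejects: posterior Beta(60, 1),
# whose 5th percentile is 0.05**(1/60).
print(reliability_lower_bound(1, 1, n=59, rejects=0))  # ~0.9513
```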

Now let’s say that we are very confident of the process because we have historical data. We can then assume a stronger prior belief, with the beta values as (22, 1). The new prior plot is shown below:

[Figure: Beta(22, 1) prior density]

Based on this, if we had 0 rejects for the 59 samples, then the 95% lower-bound credible interval is 96.37%. A slightly higher reliability is estimated because of the strong prior.
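With the same hypothetical helper:

```python
# Strong Beta(22, 1) prior, 59 samples, 0 rejects: posterior Beta(81, 1).
print(reliability_lower_bound(22, 1, n=59, rejects=0))  # ~0.9637
```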

We can also calculate a very conservative case of (1, 22), where we assume very low reliability to begin with. This is shown below:

[Figure: Beta(1, 22) prior density]

Now when we have 0 rejects with 59 samples, we are pleasantly surprised, because we were expecting our reliability to be very low (the Beta(1, 22) prior has a mean of only about 4%). The newly calculated 95% lower-bound credible interval is 64.9%.
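And the conservative case, again with the hypothetical helper:

```python
# Conservative Beta(1, 22) prior, 59 samples, 0 rejects: posterior Beta(60, 22).
print(reliability_lower_bound(1, 22, n=59, rejects=0))  # ~0.649
```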

I have created a spreadsheet that you can play around with. Enter the data in the yellow cells. For a stronger (liberal) prior, enter a higher a_prior value. Similarly, for a conservative prior, enter a higher b_prior value. If you are unsure, retain the (1, 1) values to keep a uniform prior. The spreadsheet also calculates the maximum expected rejects-per-million value.
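If the rejects-per-million figure is derived from the lower credible bound (my assumption; the spreadsheet’s exact formula may differ), the calculation would look like this:

```python
# Assumption: maximum expected rejects per million is based on the
# 95% lower credible bound on reliability, not on the posterior mean.
lb = reliability_lower_bound(1, 1, n=59, rejects=0)
print(round((1 - lb) * 1_000_000))  # ~48,700 rejects per million
```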

You can download the spreadsheet here.

I will finish with my favorite confidence interval joke.

“Excuse me, professor. Why do we always calculate 95% confidence interval and not a 94% or 96% interval?”, asked the student.

“Shut up,” explained the professor.

Always keep on learning…

In case you missed it, my last post was Mismatched Complexity and KISS:

2 thoughts on “Reliability/Sample Size Calculation Based on Bayesian Inference:”

  1. I had a reader ask me about choosing the a and b values for the beta distribution. I am pasting part of my email correspondence below:

    The beauty of the beta distribution is that it allows us to choose appropriate prior values. The mean of the beta distribution is a/(a+b), where a and b are its two parameters. Thus, if we have a prior belief that the “average” quality is 95%, we can choose the a and b values to reflect this: with a and b as 22 and 1, the mean is calculated as 0.9565. If I am 50% sure, then I can use (1, 1), where the mean is calculated as 0.5. This is the “uninformative prior” where we are saying it is 50/50. On the other hand, if we have a prior belief that the quality is very poor (5%), then I can use the a and b values (1, 22). The mean comes out to be 0.0435.

    The really neat thing about Bayesian analysis is that you can use the prior belief to your advantage. If we have very high confidence that a lot is very, very good (99%), we can use the a and b values (100, 1), with the mean being 0.990099. Now if we tested 30 samples and found no rejects, we can say that the reliability is at least 97.7% at the 95% confidence level. If you notice, the Bayesian analysis pulled you away from your “liberal” assumption of 99%. You can try the same with the a and b values (1, 100). You thought that the quality was about 1%, and you tested 30 samples with 0 rejects. The Bayesian analysis shows that the reliability is actually at least 17.8% at the 95% confidence level. Thus, you need to reevaluate your assumption. As you test more samples, the impact of your prior values will become very low.

    The value of Bayesian analysis shows when you have to test a smaller sample size and you want to make an educated guess (as in the examples above). The frequentist approach does not help us much here. Bayesians do not use p-values or confidence intervals. The numbers above are checked in the sketch after this comment.
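A quick check of the numbers in this comment, reusing the hypothetical reliability_lower_bound helper sketched in the post:

```python
from scipy.stats import beta

# Prior means quoted above: mean of Beta(a, b) is a / (a + b).
for a, b in [(22, 1), (1, 1), (1, 22)]:
    print(f"Beta({a}, {b}) mean = {beta.mean(a, b):.4f}")
# -> 0.9565, 0.5000, 0.0435

# The two 30-sample examples: both extreme priors get pulled toward the data.
print(reliability_lower_bound(100, 1, n=30, rejects=0))  # ~0.977
print(reliability_lower_bound(1, 100, n=30, rejects=0))  # ~0.178
```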
