We are developing requirements for our product, including setting reliability requirements and their confidence levels, or we're setting acceptance criteria for our test plans. What confidence levels do we choose? We don't have to set them blindly. We can base them on the risks of failure. I'll tell you how after this brief intro.
Hello and welcome to Quality during Design, the place to use quality thinking to create products others love for less. My name is Dianna. I’m a senior level quality professional and engineer with over 20 years of experience in manufacturing and design. Listen in and then join the conversation at qualityduringdesign.com.
Before testing anything, we need to choose what confidence level we want to have in the results. We need to do this because there's variation in everything, including the way that we measure and test. The way that we manufacture introduces variation, too, including the raw materials we use and the tools we're using to make the product. Setting a confidence level accounts for the variability we're going to see in our test data. A confidence level is used in determining the sample size to test. If we want to make statements about how the design will perform in the field, then we need to test a sample size that is statistically relevant, where we can use statistics to help us predict the performance in the field from the few units tested in the lab.
Usually, confidence levels are 90%, 95%, or 99%. Why don't we take the most conservative approach and just pick a 99% confidence level? While that may save us time in not having to think about it now, it wastes a lot of time and resources later. The higher the confidence level, the more samples we'll likely need to test. And that means making more units for test, testing them all in the lab, and then having a more complex analysis.
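To make the cost of a higher confidence level concrete, here is a small sketch using the standard zero-failure (success-run) demonstration test formula, n = ln(1 − C) / ln(R), which gives the smallest sample size n that demonstrates reliability R at confidence C when all tested units pass. This formula isn't stated in the episode; it's a common reliability-engineering result used here for illustration.

```python
import math

def success_run_sample_size(confidence: float, reliability: float) -> int:
    """Smallest n such that testing n units with zero failures
    demonstrates `reliability` at `confidence` (success-run test)."""
    return math.ceil(math.log(1 - confidence) / math.log(reliability))

# Demonstrating 90% reliability at increasing confidence levels:
for c in (0.90, 0.95, 0.99):
    print(f"{c:.0%} confidence -> test {success_run_sample_size(c, 0.90)} units")
```

Moving from 90% to 99% confidence roughly doubles the required sample size (22 versus 44 units for 90% reliability), which is exactly why defaulting to 99% everywhere is expensive.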
Instead, one way to choose a confidence level for a test is to correlate it with the risk of failure associated with it. Our product requirement is likely a control for a potential failure. What was the origin of our requirement? Why did we set it in the first place? What performance or characteristic of the final design is it controlling? If our product doesn't meet this requirement, what are the ways that it can fail? If we have an FMEA, we can find the place in the table where our requirement is a control, or where it's associated with a potential failure mode and cause. When we find it, then we have a lot of metrics we can use to help us decide on a level of confidence to test based on risk. And if we've done our FMEA earlier, then we would have populated it with information from our cross-functional team in a time of cool heads, without the pressures of project management. It will be an objective input into what confidence level we should require for our test.
What are the potential effects of this failure mode? In other words, what type of harm to the user, the environment, or the performance of the product is possible? Are there many effects listed or just one? If there are many effects, we may want a higher confidence level. What is the severity ranking of the effect? Is it high or is it low? The higher the severity ranking, the more likely we should choose a higher confidence level. What other controls are in place besides our requirement? And what is the detection ranking? If this requirement is the only control, or if it's the strongest control, then we may want to choose a higher confidence level. We could also use this information to justify a lower confidence level. If we have a requirement that's associated with a failure that has one effect, that effect is not severe, and there are two other controls associated with that same cause, then maybe we'll choose a lower confidence level.
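The decision logic above can be sketched as a simple lookup. This is only a hypothetical heuristic, not a rule from the episode or from any FMEA standard: the function name, the thresholds, and the 1-10 severity scale assumption are all illustrative.

```python
def choose_confidence(severity: int, num_other_controls: int,
                      num_effects: int) -> float:
    """Hypothetical heuristic mapping FMEA metrics to a test confidence
    level. Assumes a 1-10 severity scale; thresholds are illustrative."""
    if severity >= 8 or num_other_controls == 0:
        # Severe harm, or this requirement is the only control in place.
        return 0.99
    if severity >= 5 or num_effects > 1:
        # Moderate severity, or the failure has several listed effects.
        return 0.95
    # Low severity, few effects, and other controls share the load.
    return 0.90

# The episode's example: one mild effect, two other controls on the cause.
print(choose_confidence(severity=3, num_other_controls=2, num_effects=1))
```

The point isn't the specific thresholds; it's that the inputs come from an FMEA the team already agreed on, so the chosen confidence level is traceable to risk rather than habit.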
What's today's insight to action? We should choose a confidence level for our requirements or their test plans, and we can associate that confidence level with the level of risk of our product. FMEA is a great tool to refer to, to help us choose a relevant confidence level for our tests.
If this episode is helpful to you, I recommend two other previous Quality during Design episodes. Episode 27, "How Many Controls Do We Need to Reduce Risk," talks more about the controls that we use in an FMEA to control a risk. Episode 31, "5 Aspects of Good Reliability Goals and Requirements," talks about why we want a confidence level associated with our requirement.
Please go to my website at qualityduringdesign.com. You can visit me there, and it also has a catalog of resources, including all the podcasts and their transcripts. Use the subscribe forms to join the weekly newsletter, where I share more insights and links. In your podcast app, make sure you subscribe or follow Quality during Design to get all the episodes and get notified when new ones are posted. This has been a production of Deeney Enterprises. Thanks for listening!