How Many Do We Need To Test?

In this episode we review sampling for design tests. We talk through a general thought process for choosing a statistically sound sample size and point to some basics we can all brush up on to better understand sampling.

Our goal is to be better equipped to talk through a sampling scenario with our quality and reliability engineering friends, and to come prepared with the information they're going to want when we ask, "How many do we need to test?"

The types of things we consider (a sketch of how a few of these feed the math follows the list):

  1. What are our acceptance criteria?
  2. Is the data going to be continuous or discrete?
  3. How are we expecting this product to perform for this test? Do we have any historical data?
  4. How confident do we need to be in the result?
  5. What is the smallest difference we want to be able to detect?
  6. What is the test method? Is it validated? What is its precision and accuracy?
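
To make items 2, 4, and 5 concrete, here is a minimal sketch of how those answers can feed a sample size calculation. The function names, the example numbers, the 80% power default, and the normal-approximation shortcut are our own illustrative assumptions, not a validated tool; your quality and reliability engineers will have preferred methods and tables.

```python
"""Illustrative sample size sketches -- not a validated calculator."""
import math
from scipy.stats import norm


def n_continuous(sigma, delta, confidence=0.95, power=0.80):
    """Approximate n for a one-sample, two-sided test of a mean.

    sigma: historical standard deviation (item 3)
    delta: smallest shift we want to detect (item 5)
    confidence: 1 - alpha (item 4); power is an assumed default here.
    Uses n = ((z_{1-alpha/2} + z_{power}) * sigma / delta)^2.
    """
    z_alpha = norm.ppf(1 - (1 - confidence) / 2)  # two-sided critical z
    z_beta = norm.ppf(power)
    return math.ceil(((z_alpha + z_beta) * sigma / delta) ** 2)


def n_attribute(reliability, confidence=0.95):
    """Zero-failure (success-run) n for discrete pass/fail data (item 2).

    Smallest n, with all units passing, that demonstrates `reliability`
    at the given confidence: n = ln(1 - C) / ln(R).
    """
    return math.ceil(math.log(1 - confidence) / math.log(reliability))


# Hypothetical example: detect a 0.5 mm shift when sigma is about 1.0 mm,
# and separately demonstrate 90% reliability at 95% confidence.
print(n_continuous(sigma=1.0, delta=0.5))  # ~32 parts
print(n_attribute(reliability=0.90))       # 29 parts, zero failures
```

The attribute case reproduces the familiar success-run result: 29 consecutive passes demonstrate 90% reliability at 95% confidence, which is one reason discrete pass/fail tests often need more parts than continuous-data tests of the same requirement.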

Even if we don't have all the answers, it's good to include quality and reliability engineers in sample size calculations; they may have input into the requirement or the test method. And since each requirement needs its own sampling study (like the one we walked through in this episode), getting them involved in the iterative drafts of our test plan can be a real benefit to the project.

Understanding the basics of hypothesis testing will set you up for a better understanding of many sampling methods. Any basic statistics book would provide a good introduction!
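
As a toy illustration of that workflow, here is what a one-sample hypothesis test against a requirement might look like in Python. The measurement values and the 100 N requirement are made up for the example.

```python
"""Toy one-sample t-test against a requirement (fabricated data)."""
from scipy import stats

# Hypothetical tensile measurements (N) against a 100 N requirement.
measurements = [103.1, 98.7, 104.5, 101.2, 99.8, 102.6, 100.9, 103.3]

# One-sided test: H0 says the true mean is <= 100 N,
# H1 says it is greater than 100 N.
t_stat, p_value = stats.ttest_1samp(
    measurements, popmean=100.0, alternative="greater"
)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
# If p falls below our chosen alpha (e.g., 0.05), we reject H0 and
# conclude the product exceeds the requirement at that confidence level.
```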