8.3.1.2. Lognormal or Weibull tests
Planning reliability tests for distributions other than the exponential is difficult and involves a lot of guesswork
Planning a reliability test is neither simple nor straightforward when the assumed model is lognormal or Weibull. Since these models have two parameters, no estimates are possible without at least two test failures, and good estimates require considerably more than that. Because of censoring, any test plan may fail unless a reasonably good guess at the unknown parameters can be made ahead of time.
However, it is often possible to make a good guess ahead of time about
at least one of the unknown parameters - typically the "shape" parameter
( \(\sigma\)
for the lognormal or \(\gamma\)
for the Weibull). With one parameter assumed known, test plans can be derived
that assure the reliability or failure rate of the product tested will
be acceptable.
Lognormal Case (shape parameter known): The lognormal model is
used for many microelectronic wear-out failure mechanisms, such as electromigration.
As a production monitor, samples of microelectronic chips taken randomly
from production lots might be tested at levels of voltage and temperature
that are high enough to significantly accelerate the occurrence of electromigration
failures. Acceleration factors are known from previous testing and range
from several hundred to several thousand.
Lognormal test plans, assuming sigma and the acceleration factor are known
The goal is to construct a test plan (put \(n\)
units on stress test for \(T\) hours and accept the lot if no more than \(r\)
failures occur). The following assumptions are made:
- The life distribution model is lognormal
- Sigma = \(\sigma_0\) is known from past testing and does not vary appreciably from lot to lot
- Lot reliability varies because \(T_{50}\) values (the lognormal median or 50th percentile) differ from lot to lot
- The acceleration factor from high stress to use stress is a known quantity "\(A\)"
- A stress time of \(T\) hours is practical as a line monitor
- A nominal use \(T_{50}\) of \(T_u\) (combined with \(\sigma_0\)) produces an acceptable use CDF (or use reliability function). This is equivalent to specifying an acceptable use CDF at, say, 100,000 hours to be a given value \(p_0\) and calculating \(T_u\) via
  $$ T_u = 100,000 \,\, \mbox{exp }\left[-\sigma_0 \, \Phi^{-1}(p_0)\right] \, , $$
  where \(\Phi^{-1}\) is the inverse of the standard normal distribution (a one-line derivation follows this list)
- An unacceptable use CDF of \(p_1\) leads to a "bad" use \(T_{50}\) of \(T_b\), using the same equation as above with \(p_0\) replaced by \(p_1\)
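To see where the \(T_u\) equation comes from: requiring the use CDF at 100,000 hours to equal \(p_0\), for a lognormal model with median \(T_u\) and shape \(\sigma_0\), means
$$ p_0 = \Phi \left( \frac{\mbox{ln } (100,000 / T_u)}{\sigma_0} \right) \, , $$
and solving for \(T_u\) gives the expression above.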
The acceleration factor \(A\)
is used to calculate a "good" or acceptable proportion of failures \(p_a\)
at stress and a "bad" or unacceptable proportion of failures \(p_b\):
$$ p_a = \Phi \left( \frac{\mbox{ln } (AT / T_u)}{\sigma_0} \right) , \,\,\,\,\,
p_b = \Phi \left( \frac{\mbox{ln } (AT / T_b)}{\sigma_0} \right) \, , $$
where \(\Phi\)
is the standard normal CDF. This reduces the reliability problem to a well-known
Lot Acceptance Sampling Plan (LASP) problem,
which was covered in Chapter 6.
If the sample size required to distinguish between \(p_a\) and \(p_b\)
turns out to be too large, it may be necessary
to increase \(T\) or test at a higher stress. The important point is that the
above assumptions and equations give a methodology for planning ongoing
reliability tests under a lognormal model assumption.
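As a concrete illustration, the Python sketch below (using NumPy and SciPy) runs through these equations for purely hypothetical planning values of \(\sigma_0\), \(A\), \(T\), \(p_0\), and \(p_1\), and then searches for a single-sampling plan \((n, r)\) that meets illustrative producer's and consumer's risks of 0.05 and 0.10. None of the numbers come from this handbook; they only show the mechanics.

```python
import numpy as np
from scipy.stats import norm, binom

# --- Hypothetical planning inputs (illustrative only; not from this handbook) ---
sigma0 = 0.7      # assumed-known lognormal shape parameter (sigma)
A      = 2000.0   # assumed-known acceleration factor from stress to use
T      = 168.0    # stress-test length in hours (one week, as a line monitor)
p0     = 0.001    # acceptable use CDF at 100,000 hours
p1     = 0.01     # unacceptable use CDF at 100,000 hours

# "Good" and "bad" use T50 values from the equations above
T_u = 100_000 * np.exp(-sigma0 * norm.ppf(p0))
T_b = 100_000 * np.exp(-sigma0 * norm.ppf(p1))

# Acceptable and unacceptable proportions failing at stress after T hours
p_a = norm.cdf(np.log(A * T / T_u) / sigma0)
p_b = norm.cdf(np.log(A * T / T_b) / sigma0)
print(f"T_u = {T_u:,.0f} h, T_b = {T_b:,.0f} h, p_a = {p_a:.4f}, p_b = {p_b:.4f}")

# Treat the result as a LASP problem: find the smallest sample size n (with
# acceptance number r) such that lots at p_a are accepted with probability
# >= 1 - producer_risk and lots at p_b with probability <= consumer_risk.
producer_risk, consumer_risk = 0.05, 0.10   # illustrative risk levels
plan = None
for n in range(1, 5001):
    for r in range(0, 21):
        if (binom.cdf(r, n, p_a) >= 1.0 - producer_risk
                and binom.cdf(r, n, p_b) <= consumer_risk):
            plan = (n, r)
            break
    if plan:
        break

if plan:
    print(f"Candidate plan: stress {plan[0]} units for {T:.0f} h, "
          f"accept if <= {plan[1]} failures")
else:
    print("No plan in the search range; increase T or raise the stress level")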
Weibull test plans, assuming gamma and the acceleration factor are known
Weibull Case (shape parameter known):
The assumptions and calculations are similar to those made for the lognormal:
- The life distribution model is Weibull
- Gamma = \(\gamma_0\) is known from past testing and does not vary appreciably from lot to lot
- Lot reliability varies because \(\alpha\) values (the Weibull characteristic life or 63.2 percentile) differ from lot to lot
- The acceleration factor from high stress to use stress is a known quantity "\(A\)"
- A stress time of \(T\) hours is practical as a line monitor
- A nominal use \(\alpha\) of \(\alpha_u\) (combined with \(\gamma_0\)) produces an acceptable use CDF (or use reliability function). This is equivalent to specifying an acceptable use CDF at, say, 100,000 hours to be a given value \(p_0\) and calculating \(\alpha_u\) via
  $$ \alpha_u = \frac{100,000}{\left[ -\mbox{ln } (1-p_0) \right]^{1/\gamma_0} } $$
  (a one-line derivation follows this list)
- An unacceptable use CDF of \(p_1\) leads to a "bad" use \(\alpha\) of \(\alpha_b\), using the same equation as above with \(p_0\) replaced by \(p_1\)
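Similarly, requiring the use CDF at 100,000 hours to equal \(p_0\), for a Weibull model with characteristic life \(\alpha_u\) and shape \(\gamma_0\), means
$$ p_0 = 1 - \mbox{exp } \left[ - \left( \frac{100,000}{\alpha_u} \right)^{\gamma_0} \right] \, , $$
and solving for \(\alpha_u\) gives the expression above.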
The acceleration factor \(A\)
is used next to calculate a "good" or acceptable proportion of failures \(p_a\)
at stress and a "bad" or unacceptable proportion of failures \(p_b\):
$$ p_a = 1 - \mbox{exp } \left[ -\left( \frac{AT}{\alpha_u} \right)^{\gamma_0} \right] , \,\,\,\,\,\,\,
p_b = 1 - \mbox{exp } \left[ -\left( \frac{AT}{\alpha_b} \right)^{\gamma_0} \right] \,\, . $$
This reduces the reliability problem to a
Lot Acceptance Sampling Plan (LASP) problem, which was covered in Chapter 6.
If the sample size required to distinguish between \(p_a\) and \(p_b\)
turns out to be too large, it may be necessary
to increase \(T\) or test at a higher stress. The important point is that the
above assumptions and equations give a methodology for planning ongoing
reliability tests under a Weibull model assumption.
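A parallel sketch for the Weibull case, again with purely hypothetical values of \(\gamma_0\), \(A\), \(T\), \(p_0\), and \(p_1\); the resulting \(p_a\) and \(p_b\) would feed into the same sampling-plan search shown in the lognormal sketch.

```python
import numpy as np

# --- Hypothetical planning inputs (illustrative only; not from this handbook) ---
gamma0 = 2.0      # assumed-known Weibull shape parameter
A      = 500.0    # assumed-known acceleration factor from stress to use
T      = 1000.0   # stress-test length in hours
p0     = 0.001    # acceptable use CDF at 100,000 hours
p1     = 0.01     # unacceptable use CDF at 100,000 hours

# "Good" and "bad" use characteristic lives from the equations above
alpha_u = 100_000 / (-np.log(1.0 - p0)) ** (1.0 / gamma0)
alpha_b = 100_000 / (-np.log(1.0 - p1)) ** (1.0 / gamma0)

# Acceptable and unacceptable proportions failing at stress after T hours
p_a = 1.0 - np.exp(-(A * T / alpha_u) ** gamma0)
p_b = 1.0 - np.exp(-(A * T / alpha_b) ** gamma0)

print(f"alpha_u = {alpha_u:,.0f} h, alpha_b = {alpha_b:,.0f} h")
print(f"p_a = {p_a:.4f}, p_b = {p_b:.4f}")
```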
Planning Tests to Estimate Both Weibull or Both Lognormal Parameters
Rules-of-thumb for general lognormal or Weibull life test planning
All that can be said here are some general rules-of-thumb:
- If you can observe at least 10 exact times of failure, estimates are usually reasonable - below 10 failures the critical shape parameter may be hard to estimate accurately. Below 5 failures, estimates are often very inaccurate.
- With readout data, even with more than 10 total failures, you need failures in three or more readout intervals for accurate estimates.
- When guessing how many units to put on test and for how long, try various reasonable combinations of distribution parameters to see if the corresponding calculated proportion of failures expected during the test, multiplied by the sample size, gives a reasonable number of failures.
- As an alternative to the last rule, simulate test data from reasonable combinations of distribution parameters and see if your estimates from the simulated data are close to the parameters used in the simulation (a minimal sketch of this check follows the list). If a test plan doesn't work well with simulated data, it is not likely to work well with real data.
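As an illustration of the last two rules, the sketch below simulates one Type I censored Weibull test with hypothetical "true" parameters, checks the expected number of failures, and then fits the two parameters by maximum likelihood using a small censored-likelihood routine written for this sketch rather than a library fit.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)

# Hypothetical "true" parameters and candidate test plan (illustrative only)
gamma_true, alpha_true = 1.5, 20_000.0   # Weibull shape and characteristic life (h)
n_units, t_end = 100, 5000.0             # units on test, test length (Type I censoring)

# Rule-of-thumb check: expected number of failures during the test
p_fail = 1.0 - np.exp(-(t_end / alpha_true) ** gamma_true)
print(f"Expected failures: {n_units * p_fail:.1f}")

# Simulate one test: exact failure times below t_end, everything else censored at t_end
times = alpha_true * rng.weibull(gamma_true, size=n_units)
fails = times[times <= t_end]
n_cens = n_units - fails.size

def negloglik(params):
    """Negative log-likelihood for Type I censored Weibull data (log-parameterized)."""
    gamma, alpha = np.exp(params)                    # keep both parameters positive
    z = fails / alpha
    ll = np.sum(np.log(gamma / alpha) + (gamma - 1.0) * np.log(z) - z ** gamma)
    ll -= n_cens * (t_end / alpha) ** gamma          # censored units contribute log S(t_end)
    return -ll

fit = minimize(negloglik, x0=np.log([1.0, np.median(times)]), method="Nelder-Mead")
gamma_hat, alpha_hat = np.exp(fit.x)
print(f"Observed failures: {fails.size}")
print(f"Estimates: gamma = {gamma_hat:.2f} (true {gamma_true}), "
      f"alpha = {alpha_hat:,.0f} (true {alpha_true:,.0f})")
```

Repeating this over several reasonable parameter combinations, and checking that both the failure count and the recovered estimates remain acceptable, is the kind of sanity check the last two rules call for.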