The traditional reliability prediction models do not work - is there an alternative?
by Michael Pecht
While it is generally believed that reliability
prediction methods should be used to aid product design and product
development, the integrity and auditability of the traditional prediction
methods have been found to be questionable, in that the models do
not predict field failures, cannot be used for comparative purposes,
and present misleading trends and relations. This paper presents
a historical overview of reliability predictions for electronics,
discusses the traditional reliability prediction approaches, and
then presents an effective alternative which is becoming widely accepted.
Figure 1 - Physics of Failure
What Is the Historical Perspective for Reliability
Prediction of Electronics?
Product and system reliability emerged
as an identified engineering discipline in the late 1940s. This
does not suggest that engineers and designers did not always strive
for reliable designs. Engineers have naturally designed and operated
equipment to "succeed" and they typically did so by providing a
margin of strength over the anticipated loads (stresses). For example,
in 1860, A. Wöhler presented some of the earliest fatigue failure
data, gathered from failures of stagecoach and railroad axles. The
S-N (applied load versus cycles to failure) diagrams, which resulted
from Wohler's work, were used to identify the load condition (called
a fatigue limit) below which "no failures" should be expected.
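In modern notation, the sloped region of such an S-N curve is often summarized by a Basquin-type power law (a generic textbook form, offered here only as illustration):

$$ S^{m} N_f = C $$

where $S$ is the applied stress amplitude, $N_f$ is the number of cycles to failure, and $m$ and $C$ are constants fitted to test data. Below the fatigue limit the curve flattens, and $N_f$ is treated as effectively unbounded.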
Reliability engineering for electronics started
with the establishment of the Ad Hoc Group on Reliability of Electronic
Equipment on December 7, 1950, and the subsequent Advisory Group
on the Reliability of Electronic Equipment (AGREE) formed by the
US Department of Defense in 1952. One of the first reliability handbooks
was titled "Reliability Factors for Ground Electronic Equipment"
published in 1956 by McGraw-Hill under the sponsorship of the Rome
Air Development Center (RADC). This publication contained information
on design considerations, human engineering, interference reduction,
and a section on reliability mathematics. Failure prediction was
only mentioned as a topic under development.
Reliability prediction for electronics is traced
to November 1956 with publication of the RCA release TR-1100, titled
"Reliability Stress Analysis for Electronic Equipment," which presented
models for computing rates of component failures. This was also
the first formal publication in which the concept of activation
energy and the Arrhenius relationship were used in modeling electronic
component failure rates. This publication was followed by the "RADC
Reliability Notebook" on October 30, 1959; a report titled "Reliability
Applications and Analysis Guide," by D. R. Earles of the Martin
Company, in September 1960; and a report titled "Failure Rates,"
by D. R. Earles and M. F. Eddins, of AVCO Corporation, in April 1962.
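The Arrhenius relationship mentioned above is conventionally written, for a temperature-activated failure process, as

$$ \lambda(T) = A \, e^{-E_a / kT} $$

where $\lambda$ is the failure rate, $A$ is a constant, $E_a$ is the activation energy, $k$ is Boltzmann's constant, and $T$ is the absolute temperature. The handbook models applied this form with steady-state component temperature as the driving variable.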
In December 1965, the U. S. Navy introduced
the first reliability prediction handbook for electronics, MIL-HDBK-217A.
In this first version, there was only a single point failure rate
of 0.4 failures per million hours for all monolithic integrated
circuits, regardless of the materials, the design, the manufacturing
processes or the life cycle condition (environment and usage). This
single-valued failure rate was illustrative of the infancy of the
reliability models, and the fact that accuracy was less of a concern
than standardization to the U. S. Department of Defense (*).
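To see what that single value implied, consider a hypothetical board carrying 100 monolithic integrated circuits. The handbook arithmetic was simply additive:

$$ \lambda_{board} = 100 \times 0.4 = 40 \text{ failures per } 10^6 \text{ h}, \qquad MTBF = \frac{10^6}{40} = 25{,}000 \text{ h}, $$

regardless of the parts chosen, the processes used, or the field environment.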
In July 1973, RCA proposed a prediction model
for microcircuits, based on previous work by the Boeing Aircraft
Company. The proposed model consisted of two additive portions:
one reflecting a steady-state-temperature-related failure rate,
and the second a mechanical-related failure rate. It was clear to
RCA researchers that any reliability model should reflect design,
device and fabrication techniques, manufacturing, materials, and
geometries. Unfortunately, this attitude was not shared by the RADC,
and the model was greatly simplified in-house by modeling the device
reliability with a pair of complexity factors, and assuming an exponential
failure distribution during the operational life of the device.
This model was then published as MIL-HDBK-217B under the preparing
activity of the Air Force. The exponential distribution assumption
still remains in many handbooks today, in spite of overwhelming
evidence suggesting that it is often not appropriate.
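Concretely, the exponential assumption models reliability as

$$ R(t) = e^{-\lambda t}, \qquad h(t) = \lambda, $$

a constant hazard rate $h(t)$ with no infant mortality and no wearout: a part that has operated for ten years is treated as exactly as likely to fail in the next hour as a brand-new one. This memoryless property is precisely what the field evidence contradicts.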
Rapid improvements and increased complexity
of microelectronic devices pushed the application of MIL-HDBK-217B
beyond reason. A good example was the inability of the early models
to address a 64K RAM. In fact, when the RAM model was extrapolated
to the then-common 64K capability, the resulting mean time between
failures was 13 seconds.
As a result of this type of incident, on April 9, 1979 MIL-HDBK-217C
was published to "band-aid" the problems. Although MIL-HDBK-217C
was updated to MIL-HDBK-217D on January 15, 1982, to MIL-HDBK-217E
on October 27, 1986, and to MIL-HDBK-217F 
in December 1991, the handbook could never keep up-to-date with
the technology advances; the field data took too long to collect
and the models were not based on sound engineering fundamentals.
In the last version of the document, two teams
were under contract to provide guidelines for this update. The IIT
Research Institute/Honeywell SSED team developed reliability models
for CMOS, VHSIC, and VHSIC- like devices, and the University of
Maryland (CALCE Electronic Products and Systems Center) /Westinghouse
team developed reliability models for advanced technology microelectronic
devices to include high gate count devices such as VHSIC, VLSI,
and complex packaging approaches such as surface mount, ASIC, and
hybrids. Both teams suggested:
- that the constant failure rate model not be used;
- that some of the individual wearout failure
mechanisms (i.e., electromigration and time-dependent dielectric
breakdown) be modeled with a lognormal distribution;
- that the Arrhenius-type formulation of
the failure rate in terms of temperature should not be included
in the package failure model; and
- that stresses such as temperature change
and humidity be considered. In particular, both the IIT/Honeywell
study and the University of Maryland/Westinghouse study noted
that temperature cycling was becoming more detrimental to component
reliability than the steady-state temperature at which the device
is operating, so long as the temperature is below a critical value.
This conclusion was further supported by a National Institute
of Standards and Technology (NIST) study and an Army Fort Monmouth
study, both of which found that the influence of steady-state temperature
on microelectronic reliability under typical operating conditions
was being inappropriately modeled by an Arrhenius relationship.
Reliance on MIL-HDBK-217 proved costly. For
example, the use of MIL-HDBK-217 upfront in the design process
initially led to design decisions limiting the maximum junction temperature
in the F-22 Advanced Tactical Fighter electronics to 60°C and in
the Comanche Light Helicopter to 65°C. In fact, 125°C might have
been acceptable and could have resulted in substantial improvements
in life cycle cost, weight, volume, support, and reliability. Furthermore,
cooling temperatures as low as -40°C at the electronics' rails were
at one time required to obtain the specified junction temperatures;
the resulting temperature cycles are known to precipitate many unique
failure mechanisms.
Problems with the Traditional Approach to Reliability Prediction
Problems that arise with the traditional
reliability prediction methods and some of the reasons these problems
exist are described below.
1) Up-to-date collection of the pertinent reliability
data needed for the traditional reliability prediction approaches
is a major undertaking, especially when manufacturers make yearly
improvements. Most of the data used by the traditional models is
out-of-date. For example, the connector models in MIL-HDBK-217 have
not been updated for at least 10 years, and were formulated based
on data 20 years old.
Nevertheless, reliance on even a single outdated
or poorly conceived reliability prediction approach can prove costly
for systems design and development. For example, the use of military
allocation documents (JIAWG), which utilizes the MIL-HDBK-217 approach
upfront in the design process, initially led to design decisions
limiting the maximum junction temperature in the F-22 advanced tactical
fighter electronics to 60°C and in the Comanche light helicopter
to 65°C. Boeing noted that "The System Segment Specification normal
cooling requirements were in place due to military electronic packaging
reliability allocations and the backup temperature limits to provide
stable electronic component performance. The validity of the junction
temperature relationship to reliability is constantly in question
and under attack as it lacks solid foundational data."
For the Comanche, cooling temperatures as low
as -40°C at the electronics' rails were at one time required to
obtain the specified junction temperatures; even though the resulting
temperature cycles were known to precipitate standing water as well
as many unique failure mechanisms. Slight changes have been made
in these programs when these problems surfaced, but scheduling costs
cannot be recovered.
2) In general, equipment removals and part
failures are not equal. Often field removed parts are re-tested
as operational (called re-test OK, fault-not-found, or could-not-duplicate)
and the true cause of "failure" is never determined.
As the focus of reliability engineering has been on probabilistic
assessment of field data, rather than on failure analysis, it has
generally been perceived to be cheaper for a supplier to replace
a failed subsystem (such as a circuit card) and ignore how the card failed.
3) Many assembly failures are not component-related
but due to an error in socketing, calibration or instrument reading
or due to the improper interconnection of components during a higher
level assembly process. Today, reliability limiting items are much
more likely to be in the system design (such as misapplication of
a component, inadequate timing analysis, lack of transient control,
or stress-margin oversights) than in a manufacturing or design defect
in the device.
4) Failure of the component is not always due
to a component-intrinsic mechanism but can be caused by: (i) an
inadvertent over-stress event after installation; (ii) latent damage
during storage, handling or installation after shipment; (iii) improper
assembly into a system; or (iv) choice of the wrong component for
use in the system by either the installer or designer. Variable
stress environments can also make a model inadequate in predicting
field failures. For example, one Westinghouse fire control radar
has been used in a fighter aircraft, a bomber, and on the top mast
of a ship, each with its unique configuration, packaging, reliability
and maintenance requirements.
5) Electronics do not fail at a constant rate,
as predicted by the models. Constant-rate models were originally
used to characterize device reliability because earlier data was
tainted by equipment accidents, repair blunders, inadequate failure
reporting, reporting of mixed-age equipment, defective records of
equipment operating times, and mixed operational environmental
conditions. The totality of these effects conspired to produce what
appeared to be an approximately constant hazard rate. Further, earlier
devices had several intrinsic failure mechanisms which manifested
themselves as subpopulations of infant mortality and wear-out failures,
whose superposition resembled a constant failure rate. These
constant-failure-rate assumptions do not hold true for present-day devices.
6) The reliability prediction models are based
upon industry-average values of failure rates, which are neither
vendor- nor device-specific. For example, failures may come from
defects caused by uncontrolled fabrication methods, some of which
were unknown and some of which were simply too expensive to control
(i.e., the manufacturer took a yield loss, rather than spending more
money to control fabrication). In such cases, the failure was not
representative of the field failures upon which the reliability
prediction was based.
7) The reliability prediction was based upon
an inappropriate statistical model. For example, a failure in a
lot of radio-frequency amplifiers was detected at Westinghouse in
which the insulation of a wire was rubbed off against the package
during thermal cycling. This resulted in an amplifier short. X-ray
inspection of the amplifier during failure analysis confirmed this
problem. The fact that a pattern failure (as opposed to a random
failure) existed under the given conditions, proved that the original
MIL-HDBK-217 modeling assumptions were in error, and that either
an improvement in design, improved quality, or inspection was required.
8) The traditional reliability prediction approaches
can produce what are likely to be highly variable assessments. As
one example, the predicted reliability, using different prediction
handbooks, for a memory board with 70 64K DRAMs in a "ground benign"
environment at 40°C, varied from 700 FITs to 4,240,460 FITs. Overly
optimistic predictions may prove fatal. Overly pessimistic predictions
can increase the cost of a system (e.g., through excessive testing,
or a redundancy requirement), or delay or even terminate deployment.
Thus, these methods should not be used for preliminary assessments,
baselining or initial design tradeoffs.
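For scale: a FIT is one failure per $10^9$ device-hours, so the two extremes above correspond to an MTBF of roughly $10^9/700 \approx 1.4 \times 10^6$ hours (about 160 years) at one end, and $10^9/4{,}240{,}460 \approx 236$ hours (under ten days) at the other, for the same board in the same environment. Predictions spanning nearly four orders of magnitude cannot all be informative.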
An Alternative Approach: Physics-of-Failure
Many of the leading US commercial electronics
companies have now abandoned the traditional methods of reliability
prediction. Instead, they use reliability assessment techniques
based on root-cause analysis of failure mechanisms, failure modes,
and the stresses that cause failures. This approach, called
physics-of-failure, has proven to be effective in the prevention,
detection, and correction of failures associated with the design,
manufacture and operation of a product.
The physics-of-failure (PoF) approach to electronics
products is founded on the fact that failure mechanisms are governed
by fundamental mechanical, electrical, thermal, and chemical processes.
By understanding the possible failure mechanisms, potential problems
in new and existing technologies can be identified and solved before they occur.
The PoF approach begins within the first stages
of design (see Figure 1). A designer defines the product requirements,
based on the customer's needs and the supplier's capabilities. These
requirements can include the product's functional, physical, testability,
maintainability, safety, and serviceability characteristics. At
the same time, the service environment is identified, first broadly
as aerospace, automotive, business office, storage, or the like,
and then more specifically as a series of defined temperature, humidity,
vibration, shock, or other conditions. The conditions are either
measured, or specified by the customer. From this information, the
designer, usually with the aid of a computer, can model the thermal,
mechanical, electrical, and electrochemical stresses acting on the product.
Next, stress analysis is combined with knowledge
about the stress response of the chosen materials and structures
to identify where failure might occur (failure sites), what form
it might take (failure modes), and how it might take place (failure
mechanisms). Failure is generally caused by one of four
types of stresses: mechanical, electrical, thermal, or chemical,
and it generally results either from the application of a single
overstress or from the accumulation of damage over time from lower-level
stresses. Once the potential failure mechanisms have been
identified, the specific failure mechanism model is employed. The
reliability assessment consists of calculating the time to failure
for each potential failure mechanism, and then, using the principle
that a chain is only as strong as its weakest link, choosing the
dominant failure sites and mechanisms as those resulting in the
least time to failure. The information from this assessment can
be used to determine whether a product will survive for its intended
application life, or it can be used to redesign a product for increased
robustness against the dominant failure mechanisms. The physics-of-failure
approach is also used to qualify design and manufacturing processes
to ensure that the nominal design and manufacturing specifications
meet or exceed reliability targets.
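To make the weakest-link step concrete, the sketch below walks through the calculation with two generic, textbook-form mechanism models (a Coffin-Manson-type solder-fatigue relation and a Black's-equation-type electromigration relation). The function names, constants, and life-cycle conditions are invented for illustration; they are not calibrated values or the method of any particular tool discussed here.

```python
# Minimal sketch of the weakest-link step of a physics-of-failure
# assessment. Both mechanism models are generic textbook forms with
# invented constants -- illustrative only, not calibrated values.

import math

BOLTZMANN_EV = 8.617e-5  # Boltzmann's constant, eV/K

def solder_fatigue_cycles(delta_t, c=1.0e9, m=2.0):
    """Coffin-Manson-type cycles to failure for temperature swing delta_t (K)."""
    return c * delta_t ** (-m)

def electromigration_hours(j, temp_k, a=1.0e11, n=2.0, ea=0.7):
    """Black's-equation-type median time to failure (hours) for
    current density j (A/cm^2) at absolute temperature temp_k (K)."""
    return a * j ** (-n) * math.exp(ea / (BOLTZMANN_EV * temp_k))

CYCLES_PER_HOUR = 0.125  # assumed life-cycle profile: one thermal cycle per 8 h

# Time to failure for each potential mechanism, in operating hours.
times_to_failure = {
    "solder joint fatigue": solder_fatigue_cycles(delta_t=60.0) / CYCLES_PER_HOUR,
    "electromigration": electromigration_hours(j=2.0e6, temp_k=358.0),
}

# Weakest link: the dominant mechanism is the one with the least life.
for mechanism, hours in sorted(times_to_failure.items(), key=lambda kv: kv[1]):
    print(f"{mechanism:22s} {hours:14.3e} h")
dominant = min(times_to_failure, key=times_to_failure.get)
print(f"dominant failure mechanism: {dominant}")
```

The assembly's predicted life is then the minimum over the candidate mechanisms, and the dominant mechanism is the natural target for redesign or qualification testing.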
Computer software has been developed by organizations
such as Philips and the CALCE EPRC at the University of Maryland
to conduct a physics-of-failure analysis at the component level.
Numerous organizations have PoF software which is used at the circuit
card level. These software tools make design, qualification planning,
and reliability assessment manageable and timely.
Summary and Comments
The physics-of-failure approach has been
used quite successfully for decades in the design of mechanical,
civil and aerospace structures. This approach is almost mandatory
for buildings and bridges, because the sample size is usually one,
affording little opportunity for testing the completed product,
or for reliability growth. Instead, the product must work properly
the first time, even though it often relies on unique materials
and architectures placed in unique environments.
Today, the PoF approach is being demanded (1) by
suppliers, to measure how well they are doing and to determine
what kind of reliability assurances they can give to a customer,
and (2) by customers, to determine that the suppliers know what they
are doing and that they are likely to deliver what is desired. In
addition, PoF is used by both groups to assess and minimize risks.
This knowledge is essential, because the supplier of a product which
fails in the field loses the customer's confidence and often his
repeat business, while the customer who buys a faulty product endangers
his business and possibly the safety of his customers.
In terms of the US military, the U.S. Army
has discovered that the problems with the traditional reliability
prediction techniques are enormous and has canceled the use of
MIL-HDBK-217 in Army specifications. Instead, it has developed
Military Acquisition Handbook-179A, which recommends best commercial
practices, including physics-of-failure.
The traditional approach to predicting
reliability is common to various international handbooks
[MIL-HDBK-217 1991; TR-TSY-000332 1988; HRDS
1995; CNET 1983; SN 29500 1986], all derived from some predecessor.
F. R. Nash, Estimating Device Reliability: Assessment of Credibility.
Boston, MA: Kluwer, 1993, ch. 6.
L. Phaller, Westinghouse Electric, private communication, 1991.
Reliability Prediction of Electronic Equipment, MIL-HDBK-217F.
Washington, DC: US Gov. Printing Office, Dec. 1991.
J. Kopanski, D. L. Blackburn, G. G. Harman, and D. W. Berning,
"Assessment of reliability concerns for wide temperature operation
of semiconductor device circuits," in Trans. 1st Int. High Temperature
Electronics Conf. (Albuquerque, NM, 1991), pp. 137-142.
M. Pecht, P. Lall, and E. Hakim, "Temperature Dependence of Integrated
Circuit Failure Mechanisms," in Advances in Thermal Modeling III,
A. Bar-Cohen and A. D. Kraus, Eds. New York: IEEE and ASME Press,
Dec. 1992, ch. 2.
(*) Stemming from a perceived need to place a figure of merit on
a system's reliability, US government procurement agencies sought
standardization of requirement specifications and a prediction process.
Without such standardization, the military was concerned that each
supplier would develop its own predictions based on its own data,
and that it would be difficult to evaluate system predictions against
requirements based on components from different suppliers, or to
compare competitive designs for the same component or system. Thus,
even though the values calculated from the models were unrealistic
and often orders of magnitude in error, the view was that there was
commonality.
Michael Pecht, George E. Dieter Professor
of Mechanical Engineering and founder and director of the CALCE
Electronic Product and Systems Center of the University of Maryland,
College Park, MD 20742, USA. The Center provides a knowledge and
resource base to support the development of competitive electronic
components, products and systems. The Center is supported by more
than 100 electronic product and systems companies from all sectors,
including telecommunications, computer, avionics, automotive, and
military manufacturers. Mr. Pecht's contact information: tel: +1
(301) 405 5323 - FAX: +1 (301) 314 9269 - e-mail: email@example.com
A previous version of this article was published in Electronics
Cooling magazine, vol. 2, pp. 10-12, January 1996. The author and
the magazine generously allowed its republication in ERI News.
Questions for Better Vibration Tests
by Wayne Tustin, Rick Smith and Dan Reeder
Customers seeking environmental testing
at commercial test laboratories can expect to be asked a number
of questions. They may think to themselves "Why is the lab asking
me all these questions? They are supposed to be the experts."
The kinds of questions asked by the lab vary,
depending upon circumstances. We have identified three scenarios:
Scenario 1: A project manager visits
the lab, carrying a description of a new widget for which he seeks,
say, a vibration test.
Scenario 2: A project manager requests a laboratory's quotation so
he can include testing as a line item in a proposal.
Scenario 3: A purchasing agent seeks a quotation because a purchase
requisition has been submitted to him.
Scenario 1 elicits the most questions
by the lab, and thus occupies more of this article than do Scenarios
2 and 3; Scenario 3 elicits the fewest.
Why do you need this test?
Usually the test lab's first consideration
is understanding why the project manager has concluded that he needs
a vibration test. The project manager usually explains this need
during his first lab contact. Reasons may include:
1) His employer manufactures widgets for the
government or a prime contractor, and testing is required under
the CDRL (Contract Data Requirements List) and/or other
aspects of the contract. The test may be:
- A "First Article" test. The first widget
submitted under contract with the Government.
- A "Qualification Test". The widget is tested
to determine if it is "qualified" for its intended use.
- An "Acceptance Test" (maybe a "Lot Acceptance
Test"). A certain number of widgets from each production lot must
be tested prior to shipment to the government [or to the prime contractor].
2) Same as above, except no government involvement.
Many companies (especially commercial launch vehicle companies)
have adopted testing policies and standards similar to those imposed
by government agencies, in order to maintain a high quality standard
and low failure rate. Their testing requirements are passed on to
their vendors before a supplied "widget" is accepted.
3) The project manager's group is developing
a new line of commercial widgets and wants to establish the new
product's suitability for use in various dynamic environments. This
often involves determining a failure threshold for the new widget,
under various environmental conditions. Sometimes this will (or
should) involve combining vibration with temperature or altitude.
4) Too often, such testing is required retroactively,
after failures have been noted in the field. The project manager
wants to determine the cause or causes.
5) His group may want to perform reliability
tests in order to predict failure rates. Now that his activity knows
the widget is suitable for this application, how long can they expect
it to last? How many out of 100 will fail after 10 hours, 100 hours,
1000 hours, etc.? What are the failure modes (that is, how does it fail)?
6) His group wants to identify any latent
workmanship defects (bad solder joints, loose connections, poor
welds, etc.) in their new series of widgets. He wants a periodic
production sample screened for these defects to assure that high
production quality standards are maintained. This is often called
ESS or environmental stress screening.
7) His group has developed (or is developing)
a new line of widgets at his company (or at a university research
lab or government facility). Labs available to him don't have the
right equipment or can't achieve the desired test levels.
8) The new widget will experience extreme test
levels, beyond the usual test parameters, and he needs help simulating
new and/or extreme environments.
9) The project manager may work for a company or an
agency with in-house test capabilities. He has probably already
approached his own lab. Whether to use his own lab or to "outsource"
the test to a commercial laboratory usually depends on price, schedule,
and other factors.
Reasons 1 and 2 (above) are most common. These
are usually pretty straightforward because testing requirements
are usually clearly delineated by the customer's contract. He will
probably bring a copy for review. Not too many questions need be asked.
At Wyle Labs, the sequence of events typically goes as follows:
a) Customer contacts lab for test and is introduced
to the Quotes department
b) Quotes representative (an engineer) discusses
the testing requirement briefly and assesses his lab's ability to respond.
He can call upon other engineering staff specialists (within the
lab) for conference/consultation if the request is unusual and/or complex.
c) Customer transmits (letter, FAX, e-mail,
etc.) hard copy request for quote (RFQ). In routine cases this predates
steps a) and b) above.
d) Quotes department processes the RFQ and
determines prices for each test, based on expected level of effort.
This includes the quantity and labor grade (engineer, technician,
machinist, etc.) of man-hours estimated to perform the test as well
as materials that will be required during testing. Pricing normally
includes preparation of procedures, test fixtures, performance of
test(s), and preparation of test reports. Pricing is presented in
the format requested by the customer; many contractual formats are accommodated:
- Time and Material
- Cost Plus Fixed Fee
- Firm Fixed Price
e) Engineering management reviews the Quotes
representative's estimate to assure that (1) the most appropriate
and efficient technical approaches have been considered, (2) the
quotation meets the customer requirements, and (3) the price is competitive.
f) The estimate is forwarded to the lab's
contracts department where it is further reviewed for accuracy.
A quotation letter, presenting pricing by line item, an estimate
of the lead-time required prior to test, and the estimated test time, is prepared.
g) The quotation letter is finalized, then
forwarded to the customer via mail, e-mail, FAX, FedEx, etc.
h) If the pricing is acceptable to the customer,
the customer issues a Purchase Order (PO). It is normally accepted
by the Wyle contracts office. The work is assigned to a specific
Wyle test engineer who will interface with the customer throughout
the performance of the test program.
Customer may be uncertain
The situation in which the customer project manager's group is developing
a new line of commercial widgets (reason 3 above) and wants to establish
the new product's suitability for use in various dynamic environments
can be tricky. Considerable "hand holding" may be required. Often
the customer knows he has a vibration problem and wants a test to
determine a solution, but he isn't certain about the best technical
approach. Here the "sine or random?", "how many g's?" type questions
that can be asked of an experienced customer would be futile. When
the Quotes Department representative's question "Do you have a particular
test requirement, such as a specification or a procedure on which
you want us to quote?" elicits a "No" answer, he normally introduces
the customer to the appropriate test department manager or a test
engineer, where customer needs are assessed further. Typical questions include:
1) What is the intent of the test? Usually
it is to solve a problem that is anticipated or has already developed.
For example, a new adhesive or laminate material may be proposed
for a composite structure to be installed in an aircraft or launch
vehicle and the customer is concerned about delamination caused
by launch or flight vibration. In this case, Wyle would recommend
he choose the worst-case environment anticipated (worst possible
location and the most severe in-service environment). Then Wyle
would show the customer an existing test specification followed
by other suppliers of components for that particular area
of that particular vehicle. Wyle would suggest developing a test
procedure using an existing specification as a reference. We may
suggest considering other environments, such as acceleration during
takeoff, shock during landing or stage separation, airborne acoustic
noise, temperature and altitude. Specifications exist for each of
these environments aboard many aircraft and launch vehicles. This
general approach is followed no matter what the widget's end use
and/or in-service environment may be.
2) If specifications do not exist for the customer's
application, more questions must be asked. Example #1: the customer
may simply want to know how much vibration his widget will withstand
before it fails. Establishing this failure threshold can be accomplished
by subjecting widgets to the most applicable "in-service" vibration
environment, then slowly increasing the intensity until a failure
is observed. Example #2: a widget to be mounted on a piece of rotating
machinery may be subjected to repeated cycles of sinusoidal vibration
through the min/max RPM of the machine, until failure. Example #3:
a widget that will be attached to a rocket engine may be subjected
to severe random vibration over the frequency range expected (how
the overall level of such a random profile is computed is sketched
after this list).
3) Every customer application should be addressed
independently. Sometimes the in-service environment cannot be anticipated,
but data is still desired. A simple resonance search may provide
the customer with all he really wants to know about the dynamic
response of his widget.
4) In some circumstances the lab may offer
to assist the customer in measuring the service environment, so
that future tests can be realistic.
5) Test cost must be kept in mind. Testing
should accomplish its intent with minimum cost to the customer.
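As promised in item 2 above, here is a minimal sketch of how the overall severity of a random vibration test is obtained from its PSD breakpoints, using the standard piecewise log-log integration. The grms() helper is our own illustrative code, not a Wyle tool; the breakpoints happen to match the well-known NAVMAT P-9492 screening spectrum, which works out to about 6 Grms.

```python
# Sketch: overall G-rms of a random vibration test computed from its
# PSD breakpoints by the standard piecewise log-log integration.

import math

def grms(breakpoints):
    """breakpoints: list of (frequency in Hz, PSD in g^2/Hz) pairs."""
    area = 0.0
    for (f1, p1), (f2, p2) in zip(breakpoints, breakpoints[1:]):
        n = math.log(p2 / p1) / math.log(f2 / f1)  # log-log slope of segment
        if abs(n + 1.0) < 1e-9:                    # limiting case, slope = -1
            area += p1 * f1 * math.log(f2 / f1)
        else:
            area += p1 * f1 * ((f2 / f1) ** (n + 1.0) - 1.0) / (n + 1.0)
    return math.sqrt(area)

# NAVMAT P-9492 ESS profile: 0.04 g^2/Hz flat from 80 to 350 Hz,
# with 3 dB/octave roll-offs down to 20 Hz and up to 2000 Hz.
navmat = [(20, 0.01), (80, 0.04), (350, 0.04), (2000, 0.007)]
print(f"overall level: {grms(navmat):.2f} Grms")  # about 6.06 Grms
```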
In Scenario 2, the project manager is working on a proposal to secure a large contract
to produce widgets, and knows testing will be required. He wants
commercial lab pricing to include as a line item in his proposal,
so that the contract, if awarded, covers testing.
Here the lab's response must be fair and
consistent. The lab is assured that estimated charges will be covered
if the customer wins the contract. The lab should price as competitively
as possible, to keep the customer's overall proposal competitive.
It is tempting to try to "cover the lab" by building in
a margin for unforeseen variables. But if the customer adds to the
lab's margin to cover his unforeseen variables, then burdens his
pricing with another margin, he becomes less competitive and may
lose the contract. That test does not come to the lab.
Another issue is consistency. The lab may be
asked to provide this same pricing to vendors competing for the
same contract. The lab has an obligation to provide consistent pricing
to each vendor for the same effort. The lab's quotation department
should "flag" RFQ's that appear similar or identical.
In Scenario 3, a purchasing agent wants a test because a purchase requisition has
been submitted to him. Often the agent is unfamiliar with the requirements
in detail. He will have (or can be asked to obtain) the following
information, which is needed by the Quotes department for all three
scenarios. Readers might use this as a checklist when seeking environmental testing:
1) Is testing imminent (customer has contract and/or has test specimens
ready for test) or is the customer quoting a future program and
so needs test pricing as part of his proposal package? What is the
timeline for testing?
2) Does a complete test procedure or specification
exist for this test? Note: When a procedure exists, it is important
to provide all of it. If only excerpts are sent and referenced, details
may be missed and later may impact cost. If only a portion can be
transmitted, make sure it includes all details necessary, including
test tolerances and references to other documents.
3) Does the customer want the lab to prepare
a test plan and/or procedure? Is this preparation to be quoted as
a line item deliverable or amortized within the total test cost?
Description of specimen(s) including:
- Type and quantity of specimens that the
customer will provide.
- Drawings of test specimens if available.
If drawings are not available, then
the customer must at least provide:
- Specimen size (basic dimensions, geometric
envelope), CG location, and weight
4) Interface information (how many mounting
points, where located, type of fasteners)
5) If a test procedure is not available, then provide:
- Test levels and axes
- Test durations
- Is a fixture available (adaptation required,
or directly compatible with Wyle equipment)?
6) Is this a requote? If so, reference the previous quotation.
7) Has this test been performed before (when,
where, results)? If at Wyle, reference previous Test Report number.
8) Who is the point of contact for pricing information?
Whom should the lab contact with technical questions?
9) When is the quotation due?
Questions are essential
Yes, it is fair to assume the test laboratory has experts on its
staff. But they are not clairvoyant. Customers should allocate time
to clearly present the testing requirements as well as the intent
of the test. Questions should flow in both directions. With clear
communication up front, testing disappointments (and disasters)
can be avoided.
Rick Smith and Dan Reeder work for Wyle Labs.
To contact them, please send an e-mail to RSmith@els.wyle.com.
Wayne Tustin, ERI's president, can be reached at firstname.lastname@example.org
or at 805/564-1260. Several of these ideas once appeared in Evaluation
Engineering.