Equipment Reliability Institute
ERI News - your reliability newsletter
August, 2001 - vol. 4


Wayne Tustin

Most of you are north of the Equator, and we hope that you are enjoying a relaxing summer. But now is the time when educators get ready for Fall. This is certainly true at ERI. Elsewhere on this page we are announcing some new courses. Other new courses will be ready soon, so please make a note to check www.equipment-reliability.com and www.vibrationandshock.com in early September.

Several readers have sent interesting pictures and/or video clips that we are converting to PowerPoint™ slides and are incorporating (with appropriate credits to the donors) into our presentations. Thank you one and all.

Several others have proposed themselves as teachers of short courses on some reliability subject. We are working on those ideas.

On request, we have added a number of "hot links" on our Web pages.

Finally, as you will see below, we are presenting you with three articles which we think you will find interesting. They are:

Why the traditional reliability prediction models
do not work - is there an alternative?
- by Michael Pecht

Prepare for Better Vibration Tests -
by Wayne Tustin, Rick Smith and Dan Reeder

How is a laboratory test developed for use in evaluating the
"true life" of an electronic assembly?
- by Harry Schwab

Hey, everyone! Since 1995 my business name has been Equipment Reliability Institute. My basic vibration and shock courses and my consulting are primarily booked through ERI.

Best wishes,
Wayne

*******************************

Why the traditional reliability prediction models do not work - is there an alternative?
by Michael Pecht

Introduction
While it is generally believed that reliability prediction methods should be used to aid product design and product development, the integrity and auditability of the traditional prediction methods have been found to be questionable, in that the models do not predict field failures, cannot be used for comparative purposes, and present misleading trends and relations. This paper presents a historical overview of reliability predictions for electronics, discusses the traditional reliability prediction approaches, and then presents an effective alternative which is becoming widely accepted.

Figure 1 - Physics of Failure Process

What Is the Historical Perspective for Reliability Prediction of Electronics?

Product and system reliability emerged as an identified engineering discipline in the late 1940s. This is not to suggest that engineers and designers had not always strived for reliable designs. Engineers have naturally designed and operated equipment to "succeed," typically by providing a margin of strength over the anticipated loads (stresses). For example, in 1860, A. Wohler presented some of the earliest fatigue failure information, based on failures of stagecoach and railroad axles. The S-N (applied load versus cycles to failure) diagrams that resulted from Wohler's work were used to identify the load condition (called a fatigue limit) below which "no failures" should be expected.
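To make the S-N idea concrete, here is a minimal sketch, in Python, of an idealized Basquin-type S-N curve with a fatigue-limit cutoff. The constants (a, b, and the fatigue limit of 180 stress units) are hypothetical, chosen only for illustration:

    def cycles_to_failure(stress, fatigue_limit=180.0, a=1.0e12, b=3.0):
        """Idealized S-N (Basquin-type) curve: N = a * S**(-b) above the
        fatigue limit; below it, "no failures" are expected (N = infinity).
        All constants are hypothetical, for illustration only."""
        if stress <= fatigue_limit:
            return float("inf")
        return a * stress ** (-b)

    for s in (150.0, 200.0, 400.0):   # stress amplitudes, e.g. MPa
        print(f"S = {s:5.1f} -> N = {cycles_to_failure(s):,.0f} cycles")

Doubling the stress from 200 to 400 here cuts the predicted life by a factor of eight, which is the essential lesson of any S-N curve: life is very sensitive to load.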

Reliability engineering for electronics started with the establishment of the Ad Hoc Group on Reliability of Electronic Equipment on December 7, 1950, and the subsequent Advisory Group on the Reliability of Electronic Equipment (AGREE), formed by the US Department of Defense in 1952. One of the first reliability handbooks, titled "Reliability Factors for Ground Electronic Equipment," was published in 1956 by McGraw-Hill under the sponsorship of the Rome Air Development Center (RADC). This publication contained information on design considerations, human engineering, interference reduction, and a section on reliability mathematics. Failure prediction was mentioned only as a topic under development.

Reliability prediction for electronics is traced to November 1956, with publication of the RCA release TR-1100, titled "Reliability Stress Analysis for Electronic Equipment," which presented models for computing rates of component failures. This was also the first formal publication in which the concept of activation energy and the Arrhenius relationship were used in modeling electronic component failure rates. This publication was followed by the "RADC Reliability Notebook" on October 30, 1959; a report titled "Reliability Applications and Analysis Guide," by D. R. Earles of the Martin Company, in September 1960; and a report titled "Failure Rates," by D. R. Earles and M. F. Eddins of AVCO Corporation, in April 1962.

In December 1965, the U. S. Navy introduced the first reliability prediction handbook for electronics, MIL-HDBK-217A. In this first version, there was only a single point failure rate of 0.4 failures per million hours for all monolithic integrated circuits, regardless of the materials, the design, the manufacturing processes or the life cycle condition (environment and usage). This single-valued failure rate was illustrative of the infancy of the reliability models, and the fact that accuracy was less of a concern than standardization to the U. S. Department of Defense (*).

In July 1973, RCA proposed a prediction model for microcircuits, based on previous work by the Boeing Aircraft Company. The proposed model consisted of two additive portions: one reflecting a steady-state-temperature-related failure rate, and the second a mechanical-related failure rate. It was clear to RCA researchers that any reliability model should reflect design, device and fabrication techniques, manufacturing, materials, and geometries. Unfortunately, this attitude was not shared by the RADC, and the model was greatly simplified in-house by modeling the device reliability with a pair of complexity factors and assuming an exponential failure distribution during the operational life of the device. This model was then published as MIL-HDBK-217B under the preparing activity of the Air Force. The exponential distribution assumption still remains in many handbooks today, in spite of overwhelming evidence suggesting that it is often not appropriate [1].
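For readers who have not worked with it, here is a minimal Python sketch of what the exponential (constant-failure-rate) assumption implies, using the 0.4-failures-per-million-hours figure from MIL-HDBK-217A quoted above; the 10,000-hour evaluation point is arbitrary:

    import math

    def reliability_exponential(t_hours, lam_per_hour):
        """Survival probability R(t) = exp(-lambda * t) under the
        constant-failure-rate (exponential) model."""
        return math.exp(-lam_per_hour * t_hours)

    # MIL-HDBK-217A's single-point rate for all monolithic ICs:
    lam = 0.4e-6   # 0.4 failures per million hours
    print(f"MTBF = {1.0 / lam:,.0f} hours")                             # 2,500,000 h
    print(f"R(10,000 h) = {reliability_exponential(10_000, lam):.4f}")  # 0.9960

Note that under this model the hazard rate never changes: a ten-year-old device is assumed exactly as likely to fail in the next hour as a brand-new one, which is precisely the assumption the article goes on to criticize.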

Rapid improvements and increased complexity of microelectronic devices pushed the application of MIL-HDBK-217B beyond reason. A good example was the inability of the early models to address a 64K RAM. In fact, when the RAM model was extrapolated to the then-common 64K capability, the resulting mean time between failures was 13 seconds [2]. As a result of this type of incident, on April 9, 1979, MIL-HDBK-217C was published to "band-aid" the problems. Although MIL-HDBK-217C was updated to MIL-HDBK-217D on January 15, 1982, to MIL-HDBK-217E on October 27, 1986, and to MIL-HDBK-217F [3] in December 1991, the handbook could never keep up to date with the technology advances; the field data took too long to collect, and the models were not based on sound engineering fundamentals.

For the last version of the document, two teams were under contract to provide guidelines for the update. The IIT Research Institute/Honeywell SSED team developed reliability models for CMOS, VHSIC, and VHSIC-like devices, and the University of Maryland (CALCE Electronic Products and Systems Center)/Westinghouse team developed reliability models for advanced-technology microelectronic devices, including high-gate-count devices such as VHSIC and VLSI, and complex packaging approaches such as surface mount, ASIC, and hybrids. Both teams suggested:

  • that the constant failure rate model not be used;
  • that some of the individual wearout failure mechanisms (e.g., electromigration and time-dependent dielectric breakdown) be modeled with a lognormal distribution;
  • that the Arrhenius type formulation of the failure rate in terms of temperature should not be included in the package failure model; and
  • that stresses such as temperature change and humidity be considered. In particular, both the IIT/Honeywell study and the University of Maryland/Westinghouse study noted that temperature cycling was becoming more detrimental to component reliability than the steady-state temperature at which the device operates, so long as the temperature is below a critical value. This conclusion was further supported by a National Institute of Standards and Technology (NIST) study [4] and an Army Fort Monmouth study [5], which stated that the influence of steady-state temperature on microelectronic reliability under typical operating conditions was being inappropriately modeled by an Arrhenius relationship. (A minimal numerical illustration follows this list.)
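To see why temperature cycling can dominate, here is a minimal Python sketch contrasting a Coffin-Manson-type cycling model with an Arrhenius steady-state acceleration factor. The constants (C, n, and the 0.7 eV activation energy) are hypothetical, chosen only for illustration:

    import math

    K_BOLTZMANN_EV = 8.617e-5   # Boltzmann constant, eV/K

    def coffin_manson_cycles(delta_t_k, c=1.0e7, n=2.0):
        """Cycles to failure for a temperature swing delta_t_k (Coffin-Manson
        form, N = C * dT**-n); c and n are hypothetical placeholders."""
        return c * delta_t_k ** (-n)

    def arrhenius_factor(t_use_k, t_stress_k, ea_ev=0.7):
        """Steady-state temperature acceleration factor (Arrhenius form)."""
        return math.exp((ea_ev / K_BOLTZMANN_EV) * (1.0 / t_use_k - 1.0 / t_stress_k))

    print(coffin_manson_cycles(40.0))      # ~6,250 cycles at a 40 K swing
    print(coffin_manson_cycles(80.0))      # ~1,563 cycles: doubling the swing costs 4x the life
    print(arrhenius_factor(328.0, 348.0))  # ~4.2x for a 20 K steady-state rise

With these (illustrative) numbers, doubling the cycling swing is far more damaging than a comparable steady-state temperature increase, which is the point both study teams made.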

Reliance on MIL-HDBK-217 proved costly. For example, the use of MIL-HDBK-217 up front in the design process initially led to design decisions capping the junction temperature in the F-22 Advanced Tactical Fighter electronics at 60C and in the Comanche Light Helicopter at 65C. In fact, 125C might have been acceptable and could have resulted in substantial improvements in life cycle cost, weight, volume, support, and reliability. Furthermore, cooling temperatures as low as -40C at the electronics' rails were at one time required to obtain the specified junction temperatures; the resulting temperature cycles are known to precipitate many unique failure mechanisms.

Problems with the Traditional Approach to Reliability Prediction
Problems that arise with the traditional reliability prediction methods, and some of the reasons these problems exist, are described below.

1) Up-to-date collection of the pertinent reliability data needed for the traditional reliability prediction approaches is a major undertaking, especially when manufacturers make yearly improvements. Most of the data used by the traditional models is out-of-date. For example, the connector models in MIL-HDBK-217 have not been updated for at least 10 years, and were formulated based on data 20 years old.

Nevertheless, reliance on even a single outdated or poorly conceived reliability prediction approach can prove costly for system design and development. For example, the use of military allocation documents (JIAWG), which utilize the MIL-HDBK-217 approach up front in the design process, initially led to design decisions capping the junction temperature in the F-22 advanced tactical fighter electronics at 60C and in the Comanche light helicopter at 65C. Boeing noted that, "The System Segment Specification normal cooling requirements were in place due to military electronic packaging reliability allocations and the backup temperature limits to provide stable electronic component performance. The validity of the junction temperature relationship to reliability is constantly in question and under attack as it lacks solid foundational data."

For the Comanche, cooling temperatures as low as -40C at the electronics' rails were at one time required to obtain the specified junction temperatures, even though the resulting temperature cycles were known to precipitate standing water as well as many unique failure mechanisms. Slight changes have been made in these programs as these problems surfaced, but the schedule and cost impacts cannot be recovered.

2) In general, equipment removals and part failures are not equal. Often field-removed parts re-test as operational (called re-test OK, fault-not-found, or could-not-duplicate) and the true cause of "failure" is never determined. Because the focus of reliability engineering has been on probabilistic assessment of field data, rather than on failure analysis, it has generally been perceived to be cheaper for a supplier to replace a failed subsystem (such as a circuit card) and ignore how the card failed.

3) Many assembly failures are not component-related but are due to an error in socketing, calibration or instrument reading, or to the improper interconnection of components during a higher-level assembly process. Today, reliability-limiting items are much more likely to be in the system design (such as misapplication of a component, inadequate timing analysis, lack of transient control, or stress-margin oversights) than in a manufacturing or design defect in the device.

4) Failure of the component is not always due to a component-intrinsic mechanism but can be caused by: (i) an inadvertent over-stress event after installation; (ii) latent damage during storage, handling or installation after shipment; (iii) improper assembly into a system; or (iv) choice of the wrong component for use in the system by either the installer or designer. Variable stress environments can also make a model inadequate in predicting field failures. For example, one Westinghouse fire control radar has been used in a fighter aircraft, a bomber, and on the top mast of a ship, each with its unique configuration, packaging, reliability and maintenance requirements.

5) Electronics do not fail at a constant rate, as predicted by the models. Constant-rate models were originally used to characterize device reliability because earlier data was tainted by equipment accidents, repair blunders, inadequate failure reporting, reporting of mixed-age equipment, defective records of equipment operating times, and mixed operational environmental conditions. The totality of these effects conspired to produce what appeared to be an approximately constant hazard rate. Further, earlier devices had several intrinsic failure mechanisms which manifested themselves as subpopulations of infant-mortality and wear-out failures, the mixture of which gave the appearance of a constant failure rate. These constant-failure-rate assumptions do not hold true for present-day devices.
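A minimal Python simulation of this masking effect, using hypothetical Weibull subpopulations (one infant-mortality, one wearout): over a mid-life window the pooled empirical hazard can look deceptively flat, even though neither subpopulation has a constant rate.

    import math
    import random

    random.seed(1)

    def weibull_sample(beta, eta):
        """Draw one time-to-failure from a Weibull(beta, eta) distribution."""
        return eta * (-math.log(random.random())) ** (1.0 / beta)

    # Hypothetical mixed population: early defects (beta < 1) plus wearout (beta > 1)
    times = ([weibull_sample(0.5, 2_000.0) for _ in range(3_000)] +
             [weibull_sample(3.0, 60_000.0) for _ in range(7_000)])

    # Crude empirical hazard in successive windows: failures / (units at risk * width)
    width = 5_000.0
    for k in range(6):
        lo, hi = k * width, (k + 1) * width
        at_risk = sum(1 for t in times if t >= lo)
        failed = sum(1 for t in times if lo <= t < hi)
        print(f"{lo:8.0f}-{hi:.0f} h: hazard ~ {failed / (at_risk * width):.2e} per hour")

Fitting a single exponential to such pooled data is exactly the mistake the handbook models institutionalized.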

6) The reliability prediction models are based upon industry-average values of failure rates, which are neither vendor- nor device-specific. For example, failures may come from defects caused by uncontrolled fabrication methods, some of which were unknown and some of which were simply too expensive to control (i.e., the manufacturer accepted a yield loss rather than spending more to control fabrication). In such cases, the failure is not representative of the field failures upon which the reliability prediction was based.

7) The reliability prediction was based upon an inappropriate statistical model. For example, a failure was detected at Westinghouse in a lot of radio-frequency amplifiers in which the insulation of a wire was rubbed off against the package during thermal cycling. This resulted in an amplifier short. X-ray inspection of the amplifier during failure analysis confirmed this problem. The fact that a pattern failure (as opposed to a random failure) existed under the given conditions proved that the original MIL-HDBK-217 modeling assumptions were in error, and that an improvement in design, improved quality, or inspection was required.

8) The traditional reliability prediction approaches can produce highly variable assessments. As one example, the predicted reliability, using different prediction handbooks, for a memory board with seventy 64K DRAMs in a "ground benign" environment at 40C varied from 700 FITs to 4,240,460 FITs. Overly optimistic predictions may prove fatal. Overly pessimistic predictions can increase the cost of a system (e.g., through excessive testing or a redundancy requirement), or delay or even terminate deployment. Thus, these methods should not be used for preliminary assessments, baselining, or initial design tradeoffs.
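To put that spread in perspective, here is a few lines of Python converting FITs (failures per 10^9 device-hours) to MTBF; the two inputs are the handbook extremes quoted above:

    def fit_to_mtbf_hours(fits):
        """1 FIT = 1 failure per 1e9 device-hours, so MTBF = 1e9 / FITs."""
        return 1e9 / fits

    low, high = 700.0, 4_240_460.0
    print(f"{low:,.0f} FITs -> MTBF {fit_to_mtbf_hours(low):,.0f} h (~163 years)")
    print(f"{high:,.0f} FITs -> MTBF {fit_to_mtbf_hours(high):,.0f} h (~10 days)")
    print(f"spread: a factor of {high / low:,.0f}")   # ~6,058

The same board, in the same environment, is predicted to last anywhere from about ten days to more than a century and a half, depending on which handbook one consults.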

An Alternative Approach: Physics-of-Failure
Many of the leading US commercial electronics companies have now abandoned the traditional methods of reliability prediction. Instead, they use reliability assessment techniques based on root-cause analysis of failure mechanisms, failure modes, and failure-causing stresses. This approach, called physics-of-failure, has proven effective in the prevention, detection, and correction of failures associated with the design, manufacture, and operation of a product.

The physics-of-failure (PoF) approach to electronic products is founded on the fact that failure mechanisms are governed by fundamental mechanical, electrical, thermal, and chemical processes. By understanding the possible failure mechanisms, potential problems in new and existing technologies can be identified and solved before they occur.

The PoF approach begins within the first stages of design (see Figure 1). A designer defines the product requirements, based on the customer's needs and the supplier's capabilities. These requirements can include the product's functional, physical, testability, maintainability, safety, and serviceability characteristics. At the same time, the service environment is identified, first broadly as aerospace, automotive, business office, storage, or the like, and then more specifically as a series of defined temperature, humidity, vibration, shock, or other conditions. The conditions are either measured, or specified by the customer. From this information, the designer, usually with the aid of a computer, can model the thermal, mechanical, electrical, and electrochemical stresses acting on the product.

Next, stress analysis is combined with knowledge about the stress response of the chosen materials and structures to identify where failure might occur (failure sites), what form it might take (failure modes), and how it might take place (failure mechanisms). Failure is generally caused by one of four types of stress: mechanical, electrical, thermal, or chemical, and it generally results either from a single overstress or from the accumulation of damage over time from lower-level stresses. Once the potential failure mechanisms have been identified, the specific failure mechanism models are employed. The reliability assessment consists of calculating the time to failure for each potential failure mechanism and then, using the principle that a chain is only as strong as its weakest link, choosing the dominant failure sites and mechanisms as those resulting in the least time to failure. The information from this assessment can be used to determine whether a product will survive for its intended application life, or it can be used to redesign a product for increased robustness against the dominant failure mechanisms. The physics-of-failure approach is also used to qualify design and manufacturing processes, to ensure that the nominal design and manufacturing specifications meet or exceed reliability targets.
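A minimal Python sketch of the weakest-link step described above; the three mechanism models and all constants here are hypothetical placeholders (a real PoF assessment would use physics models driven by the stress analysis):

    def assess(stresses):
        """Predicted life per failure mechanism (hours); placeholder formulas."""
        ttf = {
            "solder joint fatigue": 1.25e9 / stresses["delta_T"] ** 2,
            "electromigration":     5.0e9 / stresses["current_density"],
            "corrosion":            8.0e6 / stresses["humidity_pct"],
        }
        # Weakest link: the dominant mechanism is the one with the least life.
        worst = min(ttf, key=ttf.get)
        return worst, ttf[worst]

    mechanism, hours = assess({"delta_T": 40.0,           # K
                               "current_density": 1.0e5,  # A/cm^2
                               "humidity_pct": 60.0})
    print(f"dominant mechanism: {mechanism}; predicted life ~ {hours:,.0f} h")

Whichever mechanism yields the least time to failure sets the product's predicted life and becomes the first target for redesign.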

Computer software has been developed by organizations such as Phillips and the CALCE EPRC at the University of Maryland to conduct a physics-of-failure analysis at the component level. Numerous organizations have PoF software which is used at the circuit-card level. These software tools make design, qualification planning, and reliability assessment manageable and timely.

Summary and Comments
The physics-of-failure approach has been used quite successfully for decades in the design of mechanical, civil and aerospace structures. This approach is almost mandatory for buildings and bridges, because the sample size is usually one, affording little opportunity for testing the completed product, or for reliability growth. Instead, the product must work properly the first time, even though it often relies on unique materials and architectures placed in unique environments.

Today, the PoF approach is being demanded by (1) suppliers, to measure how well they are doing and to determine what kind of reliability assurances they can give to a customer, and (2) customers, to determine that the suppliers know what they are doing and that they are likely to deliver what is desired. In addition, PoF is used by both groups to assess and minimize risks. This knowledge is essential, because the supplier of a product which fails in the field loses the customer's confidence and often his repeat business, while the customer who buys a faulty product endangers his business and possibly the safety of his customers.

Within the US military, the U.S. Army has discovered that the problems with the traditional reliability prediction techniques are enormous and has canceled the use of MIL-HDBK-217 in Army specifications. Instead, it has developed Military Acquisition Handbook-179A, which recommends best commercial practices, including physics-of-failure.

References
The traditional approach to predicting reliability is common to various international handbooks.

[MIL-HDBK-217 1991; TR-TSY-000332 1988; HRDS 1995; CNET 1983; SN 29500 1986], all derived from some predecessor of MIL-HDBK-217.

[1] F. R. Nash, Estimating Device Reliability: Assessment of Credibility. Boston, MA: Kluwer, 1993, ch. 6.

[2] L. Phaller, Westinghouse Electric, private communication, 1991.

[3] Reliability Prediction of Electronic Equipment, MIL-HDBK-217F. Washington, DC: US Gov. Printing Office, Dec. 1991.

[4] J. Kopanski, D. L. Blackburn, G. G. Harman, and D. W. Berning, "Assessment of reliability concerns for wide-temperature operation of semiconductor device circuits," in Trans. 1st Int. High Temperature Electronics Conf. (Albuquerque, NM, 1991), pp. 137-142.

[5] M. Pecht, P. Lall, and E. Hakim, "Temperature Dependence on Integrated Circuit Failure Mechanisms," in Advances in Thermal Modeling III, A. Bar-Cohen and A. D. Kraus, Eds. New York: IEEE and ASME Press, Dec. 1992, ch. 2.

(*) Stemming from a perceived need to place a figure of merit on a system's reliability, US government procurement agencies sought standardization of requirement specifications and a prediction process. Without such standardization, the military was concerned that each supplier would develop its own predictions based on its own data, and that it would be difficult to evaluate system predictions against requirements based on components from different suppliers, or to compare competitive designs for the same component or system. Thus, even though the values calculated from the models were unrealistic and often orders of magnitude in error, the view was that there was commonality.

Michael Pecht, George E. Dieter Professor of Mechanical Engineering and founder and director of the CALCE Electronic Product and Systems Center of the University of Maryland, College Park, MD 20742, USA. The Center provides a knowledge and resource base to support the development of competitive electronic components, products and systems. The Center is supported by more than 100 electronic product and systems companies from all sectors, including telecommunications, computer, avionics, automotive, and military manufacturers. Mr. Pecht's contact information: tel: +1 (301) 405 5323 - FAX: +1 (301) 314 9269 - e-mail: pecht@eng.umd.edu
A previous version of this article was published in Electronics Cooling magazine, vol. 2, pp. 10-12, January 1996. The author and the magazine generously granted ERI News permission to republish it.


*******************************

Prepare for Better Vibration Tests
by Wayne Tustin, Rick Smith and Dan Reeder

Introduction
Customers seeking environmental testing at commercial test laboratories can expect to be asked a number of questions. They may think to themselves "Why is the lab asking me all these questions? They are supposed to be the experts."

The kinds of questions asked by the lab vary, depending upon circumstances. We have identified three scenarios:

Scenario 1: A project manager visits the lab, carrying a description of a new widget for which he seeks, say, a vibration test.

Scenario 2: A project manager requests a laboratory's quotation so he can include testing as a line item in a proposal.

Scenario 3: A purchasing agent seeks a quotation because a purchase requisition has been submitted to him.

Scenario 1 elicits the most questions from the lab and thus occupies most of this article; Scenario 2 receives less attention, and Scenario 3 the least.

Why do you need this test?
Usually the test lab's first consideration is understanding why the project manager has concluded that he needs a vibration test. The project manager usually explains this need during his first lab contact. Reasons may include:

1) His employer manufactures widgets for the government or a prime contractor, and testing is required under the CDRL (Contract Data Requirements List) and/or other aspects of the contract. The test may be:

  • A "First Article" test. The first widget submitted under contract with the Government.
  • A "Qualification Test". The widget is tested to determine if it is "qualified" for its intended use.
  • An "Acceptance Test" (maybe a "Lot Acceptance Test"). A certain number of widgets from each production lot must be tested prior to shipment to the government [or to the prime contractor].

2) Same as above, except no government involvement. Many companies (especially commercial launch vehicle companies) have adopted testing policies and standards similar to those imposed by government agencies, in order to maintain a high quality standard and low failure rate. Their testing requirements are passed on to their vendors before a supplied "widget" is accepted.

3) The project manager's group is developing a new line of commercial widgets and wants to establish the new product's suitability for use in various dynamic environments. This often involves determining a failure threshold for the new widget, under various environmental conditions. Sometimes this will (or should) involve combining vibration with temperature or altitude.

4) Too often, such testing is required retroactively, after failures have been noted in the field. The project manager wants to determine the cause or causes.

5) His group may want to perform reliability tests in order to predict failure rates. Now that his activity knows the widget is suitable for this application, how long can they expect it to last? How many out of 100 will fail after 10 hours, 100 hours, 1000 hours, etc.? What are the failure modes (that is, how does it fail)?

6) His group wants to identify any latent workmanship defects (bad solder joints, loose connections, poor welds, etc.) in their new series of widgets. He wants a periodic production sample screened for these defects to assure that high production quality standards are maintained. This is often called ESS or environmental stress screening.

7) His group has developed (or is developing) a new line of widgets at his company (or at a university research lab or government facility). Labs available to him don't have the right equipment or can't achieve the desired test levels.

8) The new widget will experience extreme test levels, beyond the usual test parameters, and he needs help simulating new and/or extreme environments.

9) The project manager may work for a company or an agency with in-house test capabilities. He has probably already approached his own lab. Whether to use his own lab or to "outsource" the test to a commercial laboratory usually depends on price, schedule, and capability.

Reasons 1 and 2 (above) are the most common. These are usually straightforward because the testing requirements are clearly delineated by the customer's contract. He will probably bring a copy for review. Not many questions need be asked.

Sequence of events
At Wyle Labs, the sequence of events typically goes as follows:

a) Customer contacts lab for test and is introduced to the Quotes department

b) Quotes representative (an engineer) discusses testing requirement briefly and assesses his lab's ability to respond. He can call upon other engineering staff specialists (within the lab) for conference/consultation if the request is unusual and/or technically challenging.

c) Customer transmits (letter, FAX, e-mail, etc.) a hard-copy request for quote (RFQ). In routine cases this predates steps a) and b) above.

d) Quotes department processes the RFQ and determines prices for each test, based on expected level of effort. This includes the quantity and labor grade (engineer, technician, machinist, etc.) of man-hours estimated to perform the test as well as materials that will be required during testing. Pricing normally includes preparation of procedures, test fixtures, performance of test(s), and preparation of test reports. Pricing is presented in the format requested by the customer; many contractual formats are available, including:

  • Time and Material
  • Cost Plus Fixed Fee
  • Firm Fixed Price

e) Engineering management reviews the Quotes representative's estimate to assure that (1) the most appropriate and efficient technical approaches have been considered, (2) the quotation meets the customer's requirements, and (3) the pricing is competitive.

f) The estimate is forwarded to the lab's contracts department, where it is further reviewed for accuracy. A quotation letter is drafted, presenting pricing by line item, an estimate of the lead time required before testing can begin, and the estimated test time.

g) The quotation letter is finalized, then forwarded to the customer via mail, e-mail, FAX, FedEx, etc.

h) If the pricing is acceptable to the customer, the customer issues a Purchase Order (PO). It is normally accepted by the Wyle contracts office. The work is assigned to a specific Wyle test engineer who will interface with the customer throughout the performance of the test program.

Customer may be uncertain
The situation in which the customer project manager's group is developing a new line of commercial widgets (reason 3 above) and wants to establish the new product's suitability for use in various dynamic environments can be tricky. Considerable "hand holding" may be required. Often the customer knows he has a vibration problem and wants a test to determine a solution, but he isn't certain about the best technical approach. Here the "sine or random?" and "how many g's?" type questions that can be asked of an experienced customer would be futile. When the Quotes Department representative's question "Do you have a particular test requirement, such as a specification or a procedure on which you want us to quote?" elicits a "No" answer, he normally introduces the customer to the appropriate test department manager or a test engineer, where customer needs are assessed further. Typical questions include:

1) What is the intent of the test? Usually it is to solve a problem that is anticipated or has already developed. For example, a new adhesive or laminate material may be proposed for a composite structure to be installed in an aircraft or launch vehicle, and the customer is concerned about delamination caused by launch or flight vibration. In this case, Wyle would recommend he choose the worst-case environment anticipated (worst possible location and the most severe in-service environment). Then Wyle would show the customer an existing test specification followed by other suppliers of components for that particular area of that particular vehicle. Wyle would suggest developing a test procedure using an existing specification as a reference. We may suggest considering other environments, such as acceleration during takeoff, shock during landing or stage separation, airborne acoustic noise, temperature, and altitude. Specifications exist for each of these environments aboard many aircraft and launch vehicles. This general approach is followed no matter what the widget's end use and/or in-service environment may be.

2) If specifications do not exist for the customer's application, more questions must be asked. Example #1: the customer may simply want to know how much vibration his widget will withstand before it fails. Establishing this failure threshold can be accomplished by subjecting widgets to the most applicable "in-service" vibration environment, then slowly increasing the intensity until a failure is observed. Example #2: a widget to be mounted on a piece of rotating machinery may be subjected to repeated cycles of sinusoidal vibration through the min/max RPM of the machine, until failure (a minimal sketch of the corresponding sweep band follows this list). Example #3: a widget that will be attached to a rocket engine may be subjected to severe random vibration over the frequency range expected.

3) Every customer application should be addressed independently. Sometimes the in-service environment cannot be anticipated, but data is still desired. A simple resonance search may provide the customer with all he really wants to know about the dynamic response of his widget.

4) In some circumstances the lab may offer to assist the customer in measuring the service environment, so that future tests can be realistic.

5) Test cost must be kept in mind. Testing should accomplish its intent with minimum cost to the customer.
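Regarding Example #2 above, here is a minimal Python sketch of how a machine's speed range maps to a sine-sweep frequency band; the 600-6,000 RPM range and first-order (1x rotational) excitation are hypothetical examples:

    def sweep_band_hz(rpm_min, rpm_max, order=1):
        """Sine-sweep band for a widget on rotating machinery:
        excitation frequency (Hz) = order * RPM / 60."""
        return order * rpm_min / 60.0, order * rpm_max / 60.0

    lo, hi = sweep_band_hz(600, 6_000)
    print(f"sweep {lo:.0f}-{hi:.0f} Hz, repeated until failure")   # 10-100 Hz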

Scenario 2
The project manager is working on a proposal to secure a large contract to produce widgets, and knows testing will be required. He wants commercial lab pricing to include as a line item in his proposal, so that the contract, if awarded, covers testing.

Here the lab's response must be fair and consistent. The lab is assured that estimated charges will be covered if the customer wins the contract. The lab should price as competitively as possible, to keep the customer's overall proposal competitive. It is tempting to try to "cover the lab" by building in a margin for unforeseen variables. But if the customer adds to the lab's margin to cover his own unforeseen variables, then burdens his pricing with yet another margin, he becomes less competitive and may lose the contract, and that test never comes to the lab.

Another issue is consistency. The lab may be asked to provide this same pricing to vendors competing for the same contract. The lab has an obligation to provide consistent pricing to each vendor for the same effort. The lab's quotation department should "flag" RFQ's that appear similar or identical.

Scenario 3
A purchasing agent wants a test because a purchase requisition has been submitted to him. Often the agent is unfamiliar with the requirements in detail. He will have (or can be asked to obtain) the following information, which is needed by the Quotes department for all three scenarios. Readers might use this as a checklist when seeking environmental test quotations:

General questions
1) Is testing imminent (customer has contract and/or has test specimens ready for test) or is the customer quoting a future program and so needs test pricing as part of his proposal package? What is the timeline for testing?

2) Does a complete test procedure or specification exist for this test? Note: When a procedure exists, it is important to provide all of it. If only excerpts are sent and referenced, details may be missed and later may impact cost. If only a portion can be transmitted, make sure it includes all necessary details, including test tolerances and references to other documents.

3) Does the customer want the lab to prepare a test plan and/or procedure? Is this preparation to be quoted as a line-item deliverable or amortized within the total test cost?

Description of the specimen(s), including:

  • Type and quantity of specimens that the customer will provide.
  • Drawings of test specimens, if available. If drawings are not available, then the customer must at least provide:
  • Specimen size (basic dimensions, geometric envelope), cg location and weight

4) Interface information (how many mounting points, where located, type of fasteners)

5) If a test procedure is not available, then document:

  • Test levels and axes
  • Test durations
  • Is a fixture available (adaptation required, or directly compatible with Wyle equipment)?

6) Is this a requote? If so, reference previous quote number.

7) Has this test been performed before (when, where, results)? If at Wyle, reference previous Test Report number.

8) Who is the point of contact for pricing information? Whom should the lab contact with technical questions?

9) When is quotation due?

Questions are essential
Yes, it is fair to assume the test laboratory has experts on its staff. But they are not clairvoyant. Customers should allocate time to clearly present the testing requirements as well as the intent of the test. Questions should flow in both directions. With clear communication up front, testing disappointments (and disasters) can be avoided.

Rick Smith and Dan Reeder work for Wyle Labs. To contact them, please send an e-mail either to RSmith@els.wyle.com or DReeder@els.wylelabs.com. Wayne Tustin, ERI's president, can be reached at tustin@equipment-reliability.com or at 805/564-1260. Several of these ideas once appeared in Evaluation Engineering magazine.


*******************************

Questions our readers have asked...

This section of our newsletter was created for you, reader! Feel free to send questions or suggestions to the webmaster. They will be forwarded to one of our specialists, who will prepare a reply.

Here is the question for this issue:
How is a laboratory test developed for use in evaluating the "true life" of an electronic assembly?

There are two different approaches to developing laboratory tests for evaluating the "true life" of an electronic assembly. The first -- and most accurate -- method requires a definition of the environments to which the electronic assembly will be subjected under normal use. The second is a relative method, used in the absence of "real world" data, to locate the sensitive regions of the assembly and reinforce them. For both methods, a combination of thermal cycling and random vibration is the optimum procedure.

The first approach can be used if the "real world" environment of the electronic assembly is known. For electronic boxes mounted in automobiles, airplanes, etc., the environment can be measured. The details of data acquisition and reduction are much too extensive to explain in a single newsletter article and would require a significant amount of information before the effort could commence. Suffice it to say that you will wind up with several different random vibration environments for each of three mutually perpendicular axes, along with thermal cycling information. These environments define the required "true life" of the hardware. Based on the critical materials in the assembly, the "real world" vibration levels can then be increased in the laboratory test to compress the total test time to a few hours. Thermal cycles are applied simultaneously with the vibration. As a general rule, the "real world" extreme temperatures are not exceeded during the thermal cycles. As a rule of thumb, a minimum of six thermal cycles should be used, with thermal changes executed as quickly as possible, and with vibration of each spectrum applied at each temperature extreme.
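The usual basis for that time compression is an inverse-power-law fatigue equivalence. Here is a minimal Python sketch, assuming a material-dependent exponent b; rule-of-thumb values are often quoted in the range of roughly 4 to 8, so treat the value below as an assumption, not a standard:

    def compressed_test_hours(service_hours, g_service, g_test, b=4.0):
        """Fatigue-equivalent test duration via an inverse power law:
        t_test = t_service * (G_service / G_test) ** b.
        The exponent b is material-dependent; 4.0 is an assumed value."""
        return service_hours * (g_service / g_test) ** b

    # e.g., 1,000 hours of service at 2 Grms, tested at 6 Grms:
    print(f"{compressed_test_hours(1_000, 2.0, 6.0):.1f} h")   # ~12.3 h

Tripling the vibration level, with this assumed exponent, collapses 1,000 hours of service into about half a day of shaker time, which is why elevated-level testing is practical at all.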

The absence of "real world" information puts you into the realm of "HALT" (Highly Accelerated Life Testing). For HALT, a semi-arbitrary random vibration environment is chosen and run for a semi-arbitrary length of time. This is usually a flat random spectrum with a roll-off at the lower end of the frequency range. Also, HALT generally uses only one input axis (although responses can occur in all directions), rather than the three used for "real world" simulation. A decision must be made as to the "worst case" axis -- usually normal to the printed circuit boards in the assembly, if they are all parallel. As with "real world" simulation, vibration is conducted at the thermal temperature extremes (which are chosen using engineering judgment). After completing a series of cycles (perhaps one day of testing) at the initial vibration level, the level is increased (+3 dB is a good arbitrary value) and the test is rerun. The test is continually rerun at ever-increasing vibration levels until the hardware fails. The hardware is then repaired and the testing continued until the next component breaks. This procedure is repeated until engineering judgment tells you that all of the critical elements have been found and are adequately reinforced or redesigned for the production configuration.
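For reference, a +3 dB step doubles the PSD (g^2/Hz) and multiplies the overall GRMS by about the square root of 2. A minimal Python sketch of such a step schedule (the 5 Grms starting level and six steps are hypothetical):

    def halt_levels(start_grms, steps, step_db=3.0):
        """GRMS level at each HALT step; step_db is applied to the PSD,
        so the amplitude (GRMS) ratio per step is 10**(step_db / 20)."""
        factor = 10.0 ** (step_db / 20.0)
        return [start_grms * factor ** k for k in range(steps)]

    print([round(g, 1) for g in halt_levels(5.0, 6)])
    # [5.0, 7.1, 10.0, 14.1, 19.9, 28.1]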

Obviously HALT is unrelated to the "real world," but it will give a good indication of the critical elements of your hardware and of whether the proposed redesigns are effective. Products modified after being subjected to HALT are more durable than the original designs. The obvious question is "When do I stop the HALT testing?" Once again -- like most of the decisions with respect to HALT -- this is based on engineering judgment.

"Real world" simulation is a much better method, but it requires an extensive amount of effort to develop the laboratory environments. "Real world" simulation will give an accurate definition of the "true life" of the assembly and tell you if the hardware will survive its intended usage. However, often the "real world" conditions are unknown or the cost of developing them is too expensive, so HALT winds up as a low cost alternative.

Harry Schwab has over thirty years of experience in engineering and consults in the fields of structural vibration and analysis. His experience includes many phases of structural analysis, test and specification development, testing, design, and management. Harry is the lead Structural Analysis Engineer on the JASSM program at Lockheed Martin Integrated Systems in Orlando, Florida. To contact Harry, send an e-mail to schwab@equipment-reliability.com



Climatics and EMC courses coming up


ERI will provide "Climatics Test" training (October 3-5, 2001) and "EMI, RFI, EMC - Testing and Remediation" (November 5-6, 2001), both at Pico Rivera (Los Angeles), California. Please visit ERI's web site in August to get full details and to register.

 
Vibration and Shock courses coming up


Wayne Tustin will teach the following short courses in vibration and shock measurement, analysis, calibration, testing, HALT, ESS and HASS:

Thun, Switzerland,
October 1-3, 2001

Billerica (Boston), Massachusetts,
October 9-11, 2001

Pico Rivera
(Los Angeles), California,
November 7-9, 2001

Santa Barbara, California, February 11-13, 2002

Livonia, Michigan,
April 10-12, 2002

Santa Barbara, California, August 12-14, 2002

 
August Big Sale

Don't miss the August big sale of ERI's Vibration and Shock Distance Learning Program! If you enjoy self-paced training, or perhaps you lack travel funds at the moment, this program will fit your needs. It provides you with 31 fully up-to-date lessons plus one-on-one contact with Wayne Tustin, your "remote teacher". Click here to get more information about this promotion.
 
New vibration site


www.vibrationandshock.com has just been redesigned and updated. If you are looking for basic understanding of vibration and shock theory, measurement, calibration, analysis and/or testing and screening, click here to visit the site.

 
Yahoo.com of the Reliability field


In our last issue we let you know ERI was called the "yahoo.com of the reliability field". Why? Because it provides many "hot links" to the web sites of test labs, test equipment manufacturers, sensor manufacturers, technical societies and magazines, etc. Visit http://www.equipment-reliability.com/links.html today.

 
Announcements


Strain Gages article
Have you any interest in strain gages? Be sure to see "Valid Strain Measurements for Structural Dynamics Testing" in the May/June 2001 issue of Experimental Techniques, journal of SEM, the Society for Experimental Mechanics. The authors are Larry Shull and Chuck Wright, both teachers and consultants from the Equipment Reliability Group (ERG). If you want to read about Larry, Chuck, and other members of ERG, visit the Specialists page. If you want a proposal for their training, tailored to the needs of your company or agency, send an e-mail to tustin@equipment-reliability.com.


Coming in November!
An article by John Starr, founder of CirVib and supplier of software to predict the responses of printed wiring boards (and other structures) to vibration. ERI will soon announce John's training courses in these areas.

 
Check our Glossary!

Check our Vibration and Shock Glossary. You will find important words and their definitions. This list evolved from Wayne's 50 years of work experience and it is constantly updated.
 
Contact information


ERI - Equipment Reliability Institute
1520 Santa Rosa Av.
Santa Barbara - CA - 93109
Tel/Fax: (805) 564-1260

Wayne Tustin tustin@equipment-reliability.com

Webmaster webmaster@equipment-reliability.com

Web sites http://www.equipment-reliability.com

 
Free Newsletter


Subscribe
If you would like to subscribe to ERI News, go to our web site, fill in the form "Free Newsletter" and hit the Submit button.
Click here to subscribe!

Recommend
If you enjoyed reading ERI News and want to recommend it to a friend, just hit "forward" on the menu of your e-mail program or tell your friend to subscribe to it at our web site.

Format
ERI News is sent in both html and plain text formats. If you had any problems reading this newsletter, please let us know. Send an e-mail to the webmaster, reporting your difficulties.

Previous issues
Missed the previous issues? It is not a problem. Send an e-mail to the webmaster and let us know which issues you would like to receive.

Unsubscribe
If you do not want to receive ERI's quarterly newsletter, please send a reply to this message with "remove" as subject.