AI Online


Quigley’s Corner – How to perform bad tests

Testing is a significant part of any product development, and that is especially true for products that, at worst, have the potential to harm the user. At the least traumatic, we have quality problems that cost our company in both dollars and in customer perception. A profitable vehicle model can easily fail the strategic objectives of the organization if warranty costs erode the revenue stream through rework, and customer orders drop due to customer trepidation. Vehicles are complex systems, consisting of hardware and software elements that must work reliably and repeatably within a range of environments: very cold and very hot weather, moist and dry conditions, rough roads and well-paved roads. The manner in which we set about testing will have a big impact on our success. Vehicles do different things; passenger cars, mining equipment, construction vehicles, and over-the-road tractors all require testing, and though the details of the tests may vary significantly, the philosophy of testing is fundamentally common. We want to know the product before we send it out to the customer. We are aggressively testing the hypothesis that the product is good and of sound quality. Below is an extract from our book, Testing Complex and Embedded Systems, specifically on how to do bad testing.

Figure 1: Excerpt from Testing Complex and Embedded Systems, Chapter 8

How to perform bad tests

Bad testing can have a big impact on the customer’s experience of the product. This adverse customer impact reduces the profit margins on the product and the volumes sold, not to mention the risk of possible legal action. Poor testing can be expensive. Spending many hours and much money on testing while missing product non-conformances or other problem areas is worse yet. Not only is this a waste of resources, it also provides a false sense of security for the organization.

Figure 8.1: Bad Testing

8.1 Do not

8.1.1 Add stress

Testing the product at a very low level of stimulus or with minimum rigor does not help us learn anything about the product.

8.1.2 Go beyond the specified “limits”

Specifications are a good starting point; however, they frequently do not capture the total exposure of stimuli to which a product will be subjected. We have seen many cases where the product “passes” testing to specification, then summarily fails in the field. Variation in key product design areas as well as variation in the external environment can add up to produce a failure in the field.

It can take a great amount of time to understand the real demands upon the product when it is deployed by the customer. Gathering this information requires instrumenting example uses of the product with sufficient sample size and variation to provide a statistically significant understanding, and that takes time. Experience suggests development personnel rely heavily upon standards. These are helpful, but there is always a question as to the percentage of the customer population represented by the standard. One of us has experienced the condition where the product passes the electrical transient portion of a specification only to go on to produce failures in the field. This anomaly is not an occasional occurrence, but all too frequent, pointing to testing to specifications as a short-sighted methodology.
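The idea of characterizing the product beyond its specification can be sketched as a simple sweep. This is a hedged illustration only: the spec range and the `device_ok` model are invented, and on a real bench `device_ok` would be an actual measurement rather than a formula.

```python
# Sketch: sweep a stress variable well past the specified limits to find the
# real failure edges, instead of stopping at the specification boundary.
# SPEC_MIN/SPEC_MAX and the device model are invented for illustration.

SPEC_MIN, SPEC_MAX = 9.0, 16.0  # specified supply range in volts (assumed)


def device_ok(voltage: float) -> bool:
    """Toy stand-in for a bench measurement; real edges at 7.5 V and 17.5 V."""
    return 7.5 <= voltage <= 17.5


def characterize(start: float = 5.0, stop: float = 20.0, step: float = 0.5):
    """Sweep past both spec limits and return the observed passing band."""
    v, passing = start, []
    while v <= stop:
        if device_ok(v):
            passing.append(v)
        v += step
    return min(passing), max(passing)


low_edge, high_edge = characterize()
print(low_edge, high_edge)                         # observed pass band
print(SPEC_MIN - low_edge, high_edge - SPEC_MAX)   # margin beyond each limit
```

The useful output is not pass/fail at the spec limit but the margin between the spec and where the product actually stops working.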

8.1.3 Use unusual combinations of events

The real world is more complicated and chaotic than we believe. Testing only typical combinations is shortsighted and unrealistic. We can use the Cynefin model to show the progression from simple to chaotic systems:

  • Simple

  • Complicated

  • Complex

  • Chaotic
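One way to avoid testing only the typical pairing of conditions is to enumerate the full cross-product of environmental factors. The conditions below are illustrative values we chose, not taken from the book:

```python
# Enumerate combinations of environmental conditions rather than testing
# only the nominal case. Factor levels here are invented for illustration.
from itertools import product

temps = ["-40C", "23C", "+85C"]
voltages = ["9V", "14.2V", "16V"]
vibration = ["off", "on"]

combos = list(product(temps, voltages, vibration))
print(len(combos))  # 3 * 3 * 2 = 18 combinations
# The comfortable nominal case is just one of the eighteen:
print(("23C", "14.2V", "off") in combos)
```

Even three modest factors produce eighteen combinations; testing only the nominal one leaves seventeen unexplored, and the field will explore them for you.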

8.1.4 Check all inputs and outputs

We once heard a project manager lament, “if we did the design and development work right, we would not need testing.” This is the same mentality that would lead us to test only select inputs and outputs of the product. Development work takes many people and coordination of events as well as a continuous eye for the details. As long as the development includes humans, there will be a need to test with a measure of rigor. Neglecting some of the inputs or outputs means open areas for failures for your customer to find.
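A simple mechanical check can flag neglected inputs and outputs before the customer finds them. This sketch is ours, with invented signal names, but the cross-check itself is straightforward:

```python
# Coverage check: verify every declared input and output of the unit appears
# in at least one test case. Signal names are invented for illustration.
INPUTS = {"ignition", "brake_switch", "speed_sensor"}
OUTPUTS = {"lamp_drive", "buzzer", "can_status"}

test_cases = [
    {"stimulates": {"ignition"}, "observes": {"lamp_drive", "can_status"}},
    {"stimulates": {"brake_switch"}, "observes": {"buzzer"}},
]

covered_in = set().union(*(t["stimulates"] for t in test_cases))
covered_out = set().union(*(t["observes"] for t in test_cases))
missing = (INPUTS - covered_in) | (OUTPUTS - covered_out)
print(sorted(missing))  # each uncovered signal is an open area for failure
```

Here the speed sensor input is never exercised; the check makes that gap visible while it is still cheap to close.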


8.1.5 Follow up

Finding a fault or bug is only the beginning. Once the bug is reported, follow-up is in order. Will the defect be corrected? In the event the defect is to be corrected, follow-up testing is in order. We have seen instances where a fault was reported within the reporting system, the development personnel indicated they understood the fault and generated a new part as a result, and the test engineer closed the fault report. The new parts were shipped to the field, where they summarily failed. There was no confirmation (via verification) that the corrective action corrected the defect. Reporting the defect is not the end of the verification work. It is necessary to make sure the correction addressed the defect and did not introduce new defects.
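The closure rule can be stated as a small predicate, shown here as a sketch with an invented reporting structure: a fault report closes only when the reproducing test passes on the corrected part and the surrounding regression tests still pass.

```python
# Sketch of "close the loop" on a defect: close only when the reproducing
# test passes against the fix AND nothing else broke. Structure is invented.

def may_close_defect(rerun_passed: bool, regression_results: list) -> bool:
    """Return True only when the fix is verified and introduced no new defects."""
    return rerun_passed and all(regression_results)

print(may_close_defect(True, [True, True, True]))   # verified fix
print(may_close_defect(False, [True, True]))        # fix never re-tested
print(may_close_defect(True, [True, False]))        # fix broke something else
```

The field failure described above corresponds to closing the report with the first argument never actually checked.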

8.2 Do

8.2.1 Let the designers create the test plan

We consider allowing the developers to create the test plan to be a conflict of interest. We know of one egregious case where the designers created an air pressure gauge yet had no tests for air pressure in the test plan. The gauges subsequently leaked and caused a major warranty issue.

The test plan should be created by the independent verification and validation team, whether internal or external. Certainly, the designers may review the test plan for obvious misunderstandings or misinterpretations, but they are not the authors or final judges of the test plan. We also recommend that the customer be involved in approving the base test plan, although we often leave the customers out of the picture when developing more rigorous product characterization test plans.

8.2.2 Test only for nominal input values

The automotive test standard SAE J1455 says the nominal voltage for a 12-volt motor vehicle is 14.2 volts. For years, one test group we know tested only at 14.2 volts. It took a minor convulsion of management browbeating to get the test team to switch to random voltages and deliberate slewing of the voltage values. A steady 14.2 volts nominal is not a realistic situation in the field and we see no reason to test to this value on the bench.

The same approach applies to any other environmental value that can affect system performance. We conduct vibration tests with varying humidity and temperature to add environmental change to the testing. We are considering ways to add electrical transients to this recipe.
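The move from a steady 14.2 volts to random, slewed voltages can be sketched as a profile generator. The limits and slew rate below are our own illustrative choices, loosely in the spirit of a 12-volt system; a real rig would drive a programmable supply with such a profile:

```python
# Sketch of a supply-voltage profile: instead of a steady 14.2 V nominal,
# wander randomly between test limits. Seeded for repeatability; the limits
# and slew step are invented for illustration.
import random


def voltage_profile(n_points: int, vmin: float = 9.0, vmax: float = 16.0,
                    seed: int = 42) -> list:
    rng = random.Random(seed)
    profile = []
    v = 14.2  # start at nominal, then wander
    for _ in range(n_points):
        v += rng.uniform(-0.5, 0.5)      # small slew each step
        v = max(vmin, min(vmax, v))      # clamp to the test limits
        profile.append(round(v, 2))
    return profile


prof = voltage_profile(100)
print(min(prof), max(prof))  # stays within the test limits, never constant
```

Seeding the generator keeps the run repeatable, so a failure seen under a particular voltage history can be reproduced exactly.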

8.2.3 Make test plans that are not consistent with historical evidence

Failure to make plans that are within your means to carry out is as bad as, or worse than, not planning at all. This is another area of testing failure. Creating a schedule that does not account for the performance and the level of expertise of your organization provides false hope. Schedules should be rational and achievable. Every time we crash the test schedule, we take the risk of seeing tired test engineers make mistakes.
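The sanity check against historical evidence is arithmetic, not art. All the numbers in this sketch are invented; the point is only that the plan is compared with what the team has demonstrably achieved:

```python
# Back-of-envelope feasibility check of a test schedule against historical
# throughput. All figures are invented for illustration.

historical_tests_per_week = 12   # measured from past programs (assumed)
planned_tests = 90
weeks_available = 5

required_rate = planned_tests / weeks_available
feasible = required_rate <= historical_tests_per_week
print(required_rate, feasible)   # the plan demands more than history supports
```

When `feasible` comes out false, the honest choices are fewer tests, more weeks, or more capacity, not a schedule that assumes the team will suddenly outperform its own record.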

8.2.4 Provide a benevolent, air-conditioned environment

Even in cases where the product will most frequently be used in an air-conditioned environment, this is not the way to test a product to meet a customer’s expectations. It is difficult to quantify the total exposure of the product to the various stimuli it will experience over the lifetime of the customer’s use.

8.2.5 Forget about extreme situations

We use extreme situations to create failures. Sometimes we can also use extreme environments to cause product failures much more quickly than we would find if we were using nominal environmental limits. Even if we warn the product operator/user to avoid extreme environments, we should still know under what conditions the product will fail.

8.2.6 Ignore the FMEA or fault tree

Some development organizations do not know about FMEA or fault trees and how these tools benefit the development effort. Some organizations use these tools, but only after the design is completed and ready to go to production; either way, the efficacy is about the same. Not knowing about these tools and using them poorly produce the same outcome: luck. These tools facilitate critiques of the product, allowing for constructive changes that improve the product design.
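A minimal FMEA-style prioritization can be sketched as follows. The failure modes and ratings are invented; the risk priority number (RPN) convention of multiplying severity, occurrence, and detection ratings (each 1 to 10) is the standard FMEA practice:

```python
# Minimal FMEA risk priority number (RPN) sketch: severity * occurrence *
# detection, each rated 1-10. Failure modes and ratings are invented.

failure_modes = [
    # (failure mode, severity, occurrence, detection)
    ("pressure gauge seal leak", 8, 4, 3),
    ("connector corrosion", 6, 3, 7),
    ("software watchdog miss", 9, 2, 5),
]

rpn = {name: s * o * d for name, s, o, d in failure_modes}
# Work the highest-RPN items first, while the design can still change
worklist = sorted(rpn, key=rpn.get, reverse=True)
print(worklist)
```

Run before the design freezes, this ranking directs both design changes and test emphasis; run after production release, it is merely a record of the luck the organization was relying on.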

Wed. July 17th, 2024