January 25, 1999
With any standards or specification project, eventually the discussion turns to "how will we know if an application conforms to our standard or specification?" Thus begins the discussion of conformance testing. Although this type of testing has been done for a long time, there is still usually some confusion about what is involved.
There are many types of testing, including testing for performance, robustness, behavior, functions and interoperability. Conformance testing may include some of these kinds of tests, but it has one fundamental difference: conformance testing determines whether an implementation meets the requirements of a standard or specification. The requirements or criteria for conformance must be specified in the standard or specification, usually in a conformance clause or conformance statement. Some standards have subsequent standards for the test methodology and the assertions to be tested. If the criteria or requirements for conformance are not specified, there can be no conformance testing.
The general definition for conformance has changed over time and been refined for specific standards. In 1991, ISO/IEC DIS 10641 defined conformance testing as "test to evaluate the adherence or nonadherence of a candidate implementation to a standard." ISO/IEC TR 13233 defined conformance and conformity as "fulfillment by a product, process or service of all relevant specified conformance requirements." In recent years, the term conformity has gained international use and has generally replaced the term conformance in ISO documents.
In 1996 ISO/IEC Guide 2 defined the three major terms used in this field.
"conformity - fulfillment of a product, process or service of specified requirements."
"conformity assessment - any activity concerned with determining directly or indirectly that relevant requirements are fulfilled."
"conformity testing - conformity evaluation by means of testing."
ISO/IEC Guide 2 also mentions that "Typical examples of conformity assessment activities are sampling, testing and inspection; evaluation, verification and assurance of conformity (supplier's declaration, certification); registration, accreditation and approval as well as their combinations."
Conformity assessment is meant to provide the users of conforming products some assurance or confidence that the product behaves as expected, performs functions in a known manner, or has an interface or format that is known. Conformity assessment is NOT a way to judge if one product is better than another. Conformity assessment is a neutral mechanism to judge a product against the criteria of a standard or specification.
2. Conformity Assessment Program
Not all standards or specifications have a conformity assessment program or a testing program. Usually assessment programs are limited to those standards or specifications that are critical for applications to run correctly, for interoperability with other applications, or for security of the systems. The decision to establish a program is based on the risk of nonconformance versus the costs of creating and running a program.
A conformity assessment program usually requires:
- a standard or specification;
- a definition of conformance (a conformance clause or test method standard);
- a test suite or testing tools;
- procedures for doing the testing; and
- someone to do the testing following those procedures.
The first two requirements in any conformity assessment program are to have a standard or specification and something that defines what conformance is. If there is no conformance clause or test method standard, then there is no definition of conformance for that standard or specification.
The next requirement is for some mechanism for doing the testing, a test suite or testing tools. Development of the test suite or testing tools is the costliest part of the conformity assessment program. The costs are dependent on the type of testing that is required (see below).
The other two requirements for a conformity assessment program are the procedures for doing the testing and someone to do the testing following those procedures. The quality of the test suite or testing tools, the detail of the procedures, and the expertise of the tester determine the quality, reliability, and repeatability of the test results. The procedures must be detailed enough to ensure that they can be repeated with no change in test results. They are the documentation of how the testing is done and the directions for the tester to follow. These procedures should also specify what must be done when failures occur. Most testing programs strive to obtain impartial and objective results, i.e., to remove subjectivity as much as possible both in the procedures and in the testing tools.
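One way to picture such a procedure is as an ordered list of documented steps whose verdicts are all recorded, including failures, so a run can be repeated and audited. This is a hypothetical sketch, not a prescribed implementation; the step names are invented for illustration:

```python
def run_procedure(steps):
    """Run documented test steps in order, recording a verdict for each.

    Failures are logged rather than aborting the run, reflecting a
    documented failure-handling rule; the full log makes the run
    repeatable and auditable.
    """
    log = []
    for name, step in steps:
        try:
            step()
            log.append((name, "PASS"))
        except AssertionError as exc:
            log.append((name, f"FAIL: {exc}"))
    return log

def failing_step():
    # Illustrative step that fails its check.
    assert False, "value out of range"

steps = [
    ("field exists", lambda: None),   # passes
    ("value in range", failing_step), # fails, but the run continues
]
for entry in run_procedure(steps):
    print(entry)
# ('field exists', 'PASS')
# ('value in range', 'FAIL: value out of range')
```

Because every step's outcome is written to the log in order, two testers following the same steps can compare logs directly to confirm the results are repeatable.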
3. Types of Testing
A standard or specification may require one or more types of testing. However, the type of testing required has a significant impact on the costs of testing. To illustrate this, IEEE Std 2003-1997 defines three types of testing:
Exhaustive testing - "seeks to verify the behavior of every aspect of an element, including all permutations. For example, exhaustive testing of a given user command would require testing the command with no options, with each option, with each pair of options, and so on up to every permutation of options." Exhaustive testing, or developing tests for all requirements of a standard or specification, can take many staff-years and be prohibitively expensive. In some cases it is impossible to run all of the possible test cases in a reasonable amount of time.
"As an example, there are approximately 37 unique error conditions in POSIX.1. The occurrence of one error can (and often does) affect the proper detection of another error. An exhaustive test of the 37 errors would require not just one test per error but one test per possible permutation of errors. Thus, instead of 37 tests, billions of tests would be needed (2 to the 37th power)." Even in a more simple example, if thirteen fields on a page have three possible inputs per field, the number of possible test cases is 1,594,323. Thus the number of test cases for a specification can grow exponentially very quickly.
Thorough testing - "seeks to verify the behavior of every aspect of an element, but does not include all permutations. For example, to perform thorough testing of a given command, the command shall be tested with no options, then with each option individually. Possible combinations of options may also be tested." A test method or conformance clause may specify the boundaries to be used for thorough testing, or suggest a range of possibilities that could be tested.
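The gap between thorough and exhaustive testing can be sketched for a hypothetical command with three options (the command and option names are illustrative, not from the standard):

```python
from itertools import combinations

def thorough_cases(command, options):
    """Thorough testing per the description above: the command with no
    options, then with each option individually."""
    return [[command]] + [[command, opt] for opt in options]

def exhaustive_cases(command, options):
    """Exhaustive testing: every combination of options, 2**n cases."""
    return [[command, *combo]
            for r in range(len(options) + 1)
            for combo in combinations(options, r)]

opts = ["-a", "-l", "-r"]
print(len(thorough_cases("ls", opts)))    # 4 cases (n + 1)
print(len(exhaustive_cases("ls", opts)))  # 8 cases (2**3)
```

Thorough testing grows linearly with the number of options, while exhaustive testing doubles with each option added, which is why the choice between them dominates testing costs.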
Identification testing - "seeks to verify some distinguishing characteristic of the element in question. It consists of a cursory examination of the element, invoking it with the minimal command syntax and verifying its minimal function." An example might be simply determining whether a field exists or holds any value at all, as opposed to testing all of the acceptable values.
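The field example can be sketched as a cursory existence check beside a deeper value check; the record and field names are hypothetical, invented for illustration:

```python
def identification_check(record, field):
    """Cursory check: the field is present and holds some value."""
    return field in record and record[field] not in (None, "")

def thorough_check(record, field, acceptable):
    """Deeper check: the field's value is one of the acceptable values."""
    return identification_check(record, field) and record[field] in acceptable

record = {"status": "pending"}
print(identification_check(record, "status"))                # True
print(thorough_check(record, "status", {"open", "closed"}))  # False
```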
4. Factors for Success
For any testing program to be successful it must meet the specific goals of the conformity assessment program. Usually a conformity assessment program must be efficient, effective and repeatable.
To minimize costs and the burden on participants, a program must be efficient. Test tools must be optimized to maximize automation and minimize human intervention. Critical areas need to be identified for testing; areas that are not critical and do not require testing also need to be identified, because it is too expensive to "just test everything." Testing procedures and the procedures for processing test results need to be automated where possible.
The testing program must be effective. It must test the critical areas required by the specification or standard to meet the requirements. It must provide the desired level of assurance for its customer base.
To meet international guidelines, test results must be repeatable and reproducible. Repeatability means that different testers, following the same procedures and test methodology, should be able to get the same results on the same platform. Some testing programs also require reproducibility: different testers, following the same procedures and test methodology, should be able to get the same results on different platforms.
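A repeatability check amounts to comparing the verdicts recorded by two runs; this small sketch uses hypothetical test identifiers and verdicts:

```python
def compare_runs(run_a, run_b):
    """Report every test whose verdict differs between two runs.

    Repeatable results require this report to be empty when the same
    procedures are followed on the same platform; reproducible results
    require it to be empty across different platforms.
    """
    tests = sorted(set(run_a) | set(run_b))
    return {t: (run_a.get(t), run_b.get(t))
            for t in tests if run_a.get(t) != run_b.get(t)}

tester_1 = {"t001": "PASS", "t002": "FAIL"}
tester_2 = {"t001": "PASS", "t002": "PASS"}
print(compare_runs(tester_1, tester_2))  # {'t002': ('FAIL', 'PASS')}
```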