
4. An Introduction to Testing

4.1 Overview

This chapter provides a tutorial that addresses the basics of testing. It discusses testing methods, planning, test development, resources, and execution. It is not of sufficient depth or breadth to make the reader an expert, but it provides an introduction to the concepts and terminology that are likely to be used by system vendors throughout the test program.

4.2 Test Methods

The system specification should include a requirements verification matrix that details the test method(s) to be employed to verify each system requirement. This matrix, known as a verification cross reference matrix (VCRM) or traceability matrix, is copied into the system test plan, where three columns are added: test level, test responsibility, and test identification number. The complexity and ultimate cost of the system test program are directly related to the test method(s) specified for verification of each requirement. As the test method becomes more rigorous, performing that testing becomes more time consuming, requires more resources and expertise, and is generally more expensive. However, the risk of accepting a requirement under a less rigorous testing method may be undesirable and may ultimately prove more expensive than a more rigorous test method. It is important to consider carefully the tradeoffs and consequences attendant to the specification of the test methods, since they flow down with the requirements to the hardware and software specifications and finally to the procurement specification. The following paragraphs define the five basic verification methods. From a verification standpoint, inspection is the least rigorous method, followed by certificate of compliance, analysis, and demonstration, with test (formal) as the most rigorous method. A vendor's certificate of compliance may be evidence of a very rigorous development and test program, but that testing is typically not defined, approved, or witnessed by the system's acquiring agency. Hence, there is some risk in accepting that certification; however, that risk is often outweighed by the costs of performing equivalent testing by the acquiring agency. Remember that the vendor's test program costs are embedded in the final component pricing.
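To make the VCRM structure concrete, the following minimal sketch shows how its rows might be captured and checked. The requirement identifiers, methods, and test numbers are hypothetical placeholders, not entries from any actual specification.

```python
# Hypothetical VCRM rows: each requirement carries the verification method from the
# system specification plus the three columns added in the system test plan
# (test level, test responsibility, test identification number).
vcrm = [
    {"req_id": "REQ-001", "method": "Inspection",    "level": 1, "responsibility": "Vendor",           "test_id": "T-1-01"},
    {"req_id": "REQ-002", "method": "Demonstration", "level": 4, "responsibility": "Acquiring Agency", "test_id": "T-4-02"},
    {"req_id": "REQ-003", "method": "Test",          "level": 5, "responsibility": "Acquiring Agency", "test_id": None},
]

def rows_missing_test_ids(matrix):
    """Flag requirements that have no test identification number assigned yet."""
    return [row["req_id"] for row in matrix if not row["test_id"]]

def requirements_by_method(matrix, method):
    """List the requirements to be verified by a given method (e.g., 'Test')."""
    return [row["req_id"] for row in matrix if row["method"] == method]

print(rows_missing_test_ids(vcrm))           # ['REQ-003']
print(requirements_by_method(vcrm, "Test"))  # ['REQ-003']
```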

The following sections will examine what is required for each method.

4.2.1 Inspection

Inspection is verification by physical and visual examination of the item, review of descriptive documentation, and comparison of the appropriate characteristics with the referenced standards to determine compliance with the requirements. Examples include measuring cabinet sizes, matching paint color samples, and examining printed circuit boards for component mounting and construction techniques.

4.2.2 Certificate of Compliance

A Certificate of Compliance is a means of verifying compliance for items that are standard products. Signed certificates from vendors state that the purchased items meet procurement specifications, standards, and other requirements as defined in the purchase order. Records of tests performed to verify specifications are retained by the vendor as evidence that the requirements were met and are made available by the vendor for purchaser review. Examples of "vendor certification" include test results and reports from independent laboratory testing performed to verify compliance with the NEMA TS2-2003 testing standards. Other examples include more rigorous testing and certification that may have been performed by an industry-accepted testing facility for compliance with such standards as 802.11a for wireless devices. In both of these instances, the certificate represents a test process that was completed at considerable expense to the vendor and is generally only required for a new product or significant product changes.

4.2.3 Analysis

Analysis is verification by evaluation or simulation using mathematical representations, charts, graphs, circuit diagrams, calculations, or data reduction. This includes analysis of algorithms independent of computer implementation, analytical conclusions drawn from test data, and extension of test-produced data to untested conditions. It is often used to extrapolate accurately measured past performance to a scaled-up deployment. An example of this verification method is the analysis of internal temperature gradients for a dynamic message sign (DMS): it is unlikely that the whole sign would be subjected to testing within a chamber, so a smaller sample is tested and the results are extrapolated to the final product. In the same vein, determining the temperature rise within a DMS may require analysis based on air flow, fan ratings, vent sizes, etc. Other examples include the review of power supply designs to verify that they comply with junction temperature and voltage limitations contained within the procurement specifications.
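As a concrete illustration of the analysis method, the short sketch below estimates the steady-state temperature rise inside a DMS enclosure from an assumed heat load and ventilation airflow. The power, airflow, and limit values are hypothetical and are not taken from any specification.

```python
# Illustrative analysis sketch: steady-state temperature rise of ventilation air
# removing heat from a DMS enclosure, dT = Q / (rho * V_dot * cp).
RHO_AIR = 1.2      # air density, kg/m^3 (approximate, near sea level)
CP_AIR = 1005.0    # specific heat of air, J/(kg*K)

def temperature_rise_c(heat_load_w: float, airflow_m3_per_s: float) -> float:
    """Return the estimated air temperature rise above ambient, in deg C."""
    return heat_load_w / (RHO_AIR * airflow_m3_per_s * CP_AIR)

if __name__ == "__main__":
    heat_load = 900.0   # W dissipated by sign electronics (assumed value)
    airflow = 0.25      # m^3/s delivered by the ventilation fans (assumed value)
    limit = 15.0        # allowable rise above ambient, deg C (assumed requirement)

    rise = temperature_rise_c(heat_load, airflow)
    print(f"Estimated temperature rise: {rise:.1f} C (limit {limit:.1f} C)")
    print("PASS" if rise <= limit else "FAIL: analysis does not support the requirement")
```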

4.2.4 Demonstration

Demonstration is the functional verification that a specification requirement is met by observing the qualitative results of an operation or exercise performed under specific conditions. This includes content and accuracy of displays, comparison of system outputs with independently derived test cases, and system recovery from induced failure conditions.

4.2.5 Test (Formal)

Formal testing is the verification that a specification requirement has been met by measuring, recording, or evaluating qualitative and quantitative data obtained during controlled exercises under all appropriate conditions using real and/or simulated stimuli. This includes verification of system performance, system functionality, and correct data distribution.

4.3 Test Approach

Several levels of testing are necessary to verify the compliance of all of the system requirements. A "building block" approach (see Section 5.2) to testing allows verification of compliance to requirements at the lowest level, building up to the next higher level, and finally full system compliance with minimal re-testing of lower level requirements once the higher level testing is performed.

After components are tested and accepted at a lower level, they are combined and integrated with other items at the next higher level, where interface compatibility, and the added performance and operational functionality at that level, are verified. At the highest level, system integration and verification testing is conducted on the fully integrated system to verify compliance with those requirements that could not be tested at lower levels and to demonstrate the overall operational readiness of the system.

Contractors supplying system hardware, software, installation, and integration are typically required (by the agency's procurement specification) to develop and execute verification test plans and procedures. These test plans must trace to the functional and performance requirements of the contract specification and are approved by the acquiring agency. The contractors' test plans and test procedures are thus an integral part of the overall system test program. A typical system test program has five levels of verification tests. These levels are defined in table 4-1 along with an example of test responsibilities. Note that the test responsibility (i.e., the development and execution of the test procedures) typically rests with the component developer, installer, or integrator, but test approval and test acceptance are acquiring agency responsibilities. That responsibility, however, particularly at the lower test levels, may be delegated to a consultant due to the specific experience and expertise required. Actual test responsibilities will vary from this example based on the project implementation plan, resulting contracts, and the test team staffing plan.

Table 4-1. Test Levels and Example Test Responsibilities
Test Level | Test Type(1) | Test Responsibility
1 | Software unit/component tests | Software Developer, ITS Consultant/Acquiring Agency
1 | Hardware unit/component tests | Vendors
2 | Software build integration tests | ITS Consultant
2 | Hardware assembly/factory acceptance tests | Vendors, ITS Consultant/Acquiring Agency
3 | Software chains and hardware/software integration tests | ITS Consultant
3 | Control center hardware integration tests | Installation Contractors, ITS Consultant
3 | Control center demarcation interface tests | Service Providers, ITS Consultant
3 | Communication/electronic hardware field equipment and field device integration tests (may also include verification of compliance/conformance to specific standards for the interfaces and protocols, e.g., NTCIP) | Installation Contractors, ITS Consultant, Standards Expert
3 | Communication connectivity tests | Installation Contractors, ITS Consultant
4 | Subsystem integration and element acceptance tests | Acquiring Agency/Operations Agency
5 | System integration and system acceptance tests | Acquiring Agency/Operations Agency
(1) While not shown explicitly in this table, testing also includes design and document reviews and audits of internal processes. The test team is assigned the responsibility for audits of internal processes. These audits/tests are tailored to ensure that project records and standards are being met and followed. For instance, the test team can track software development folders to verify that they contain records of peer reviews and that peer review action items have been resolved.


As part of the system test program (using the example responsibilities as shown in table 4-1), the ITS consultant is tasked to review and approve contractor submitted subsystem test plans, test procedures, and test results for the acquiring agency. The ITS consultant audits this test documentation to assure all lower level requirements verification testing is appropriate and complete, properly executed and witnessed, and formally reported. In addition, the ITS consultant assists in coordinating and scheduling system resources not under the control of the contractors conducting the tests and monitors tests as appropriate for the acquiring agency.

4.4 Test Types

This section discusses each of the test types introduced above and provides some specific guidance and recommendations for the acquiring agency with respect to conducting these tests.

4.4.1 Unit Testing

The hardware component level is the lowest level of hardware testing. Unit tests are typically conducted at the manufacturing plant by the manufacturer. At this level, the hardware design is verified to be consistent with the hardware detailed design document. The manufacturer retains test documents showing compliance with design and manufacturing standards and materials certifications. Because parts counterfeiting is possible today, material certifications are taking on greater importance; agencies need to contractually ensure that they have access to the vendor's material records to verify the source of materials. The procurement specification should require access to these test documents. The acquiring agency may desire to witness some of this testing, particularly if the hardware component is a new design. If so, the procurement specifications should require agency participation at this level of testing.

The computer software component level is the lowest level of software testing. Stand-alone software unit tests are conducted by the software developer following design walk-throughs and code inspections. At this level, the software design is verified to be consistent with the software detailed design document. Unit-level testing is documented in software development folders. Receiving inspections and functional checkout are performed for COTS software to assure that these components are operational and in compliance with their respective specifications. The acquiring agency may desire to witness some of this testing; however, unless it has experience with software development procedures and practices, software unit tests and code inspections will be difficult at best to follow and understand. Test participation at this level is best left to an independent test team, with the agency reviewing the audits of development folders and project records. Agency participation in receiving inspections and functional checkout of COTS software can be useful, particularly if the agency has not seen a demonstration of that software. For example, if Geographical Information System (GIS) software will be used as a primary operator interface to show geographic features, roadway networks, ITS devices, incidents, and congestion information, as well as act as the principal command and control interface, then the agency needs to understand how this software works and whether it meets the agency's operational needs. If the agency does not feel it meets its needs or requirements as documented, then this is the time to select another product. Even if it meets the apparent requirements but is considered unacceptable, it is far better to address the issue sooner rather than later, when the consequences are much more severe. As a side note, this type of incremental review and approval should be part of the testing program so that the agency can understand how the requirements have been translated into operational software.

4.4.2 Installation Testing

Installation testing is performed at the installation site subsequent to receiving inspections and functional testing of the delivered components. Here, checklists are used to assure that any site preparation and modifications, including construction, enclosures, utilities, and supporting resources, have been completed and are available. Specific emphasis on test coordination and scheduling, particularly for the installation of communications infrastructure and roadside components, is essential to achieving successful installation testing and minimizing the associated cost, and should be detailed in the procurement specifications. Installation testing is an excellent point in the test program for the agency to begin involving maintenance and, to a lesser extent, operations personnel as test witnesses, observers, or participants in the test procedures. The lessons learned by the agency staff during this process will be invaluable when the agency takes over day-to-day operation and maintenance responsibilities.

4.4.3 Hardware Integration Testing

Integrated hardware testing is performed on the hardware components that are integrated into the deliverable hardware configuration items. This testing can be performed at the manufacturing facility (factory acceptance tests) or at the installation site, as dictated by the environmental requirements and test conditions stated in the test procedures. The agency needs to have a presence at factory acceptance tests for new hardware or major hardware components such as a DMS.

4.4.4 Software Build Integration Testing

Software build integration testing is performed on the software components that are combined and integrated into the deliverable computer software configuration items. A software build consisting of multiple items is ideally tested in a separate development environment. This is not always possible due to the expense of duplicate hardware platforms and communications infrastructure. Again, as with software unit testing, test participation at this level is best left to an independent test team with the agency reviewing the test reports.

The agency should give serious consideration to establishing a separate test environment at the TMS facility for future upgrades and new software releases. Every effort should be made to ensure that this test system is representative of the production system and includes simulators or direct connections to the field infrastructure to closely simulate actual system conditions and worst case loading. During the life cycle of the system, especially with incremental deployment, such ongoing testing will be critical. Without a test environment of this type, it is likely that new releases will be accompanied by disruptions in system operation, which may not be acceptable.

4.4.5 Hardware Software Integration Testing

Once the hardware and software configuration items have been integrated into functional chains and subsystems, hardware/software integration testing is performed to exercise and test the hardware/software interfaces and verify the operational functionality in accordance with the specification requirements. Integration testing is performed according to the integration test procedures developed for a specific software release. Testing is typically executed on the operational (production) system unless the development environment is sufficiently robust to support the required interface testing. The agency should make an effort to witness at least some of the hardware/software integration testing, especially if it is performed in the operational environment at the agency's facility. This will be the first opportunity to see specific functionality, e.g., the map software in operation using the large screen display.

4.4.6 Subsystem Acceptance Testing

Acceptance testing at the subsystem and system level is conducted by the acquiring agency to contractually accept that element from the developer, vendor, or contractor. As subsystems are accepted, they may move into an operational demonstration mode, be used for daily operations, or be returned to the developer for further integration with other subsystems depending upon the project structure.

Two activities must be completed in order to commence subsystem acceptance testing. All lower level testing, i.e., unit, installation, and integration testing, should be complete. Additionally, any problems identified at these levels should be corrected and re-tested. Alternatively, the agency may wish to make changes to the requirements and procurement specifications to accept the performance or functionality as built and delivered.

Acceptance testing of the installed software release is performed on the operational system to verify that the requirements for this release have been met in accordance with the system test procedures.

Subsystem test results are recorded and reported via formal system test reports. Formal acceptance is subject to the agency's review and approval of the test report(s) and should be so stated in the procurement specification.

4.4.7 System Acceptance Testing

Acceptance testing at the system level should include an end-to-end or operational readiness test of sufficient duration to verify all operational aspects and functionality under actual operating conditions. While it may not be possible to test all aspects of the required system level functionality in a reasonable period of time, the system test plan should specify which of these requirements must be tested and which should be optionally verified given that those operational circumstances occur during the test period. The acquiring and operating agencies must be ready to accept full operational and maintenance responsibilities (even if some aspects of the operations and maintenance are subcontracted to others). This includes having a trained management, operations and maintenance staff in place prior to the start of the system acceptance testing.

System test results are recorded and reported via a formal system test report. Formal acceptance is subject to the agency's review and approval of the test report and should be so stated in the procurement specification.

The agency may wish to grant conditional acceptance for subsystems that have long-term burn-in periods or specific operational performance requirements in order to allow progress or partial payments to be made, but a sufficient holdback amount (commensurate with the risk being accepted by the acquiring agency) should be retained until all contractual requirements have been met. The amount to be withheld and the conditions for its release must be detailed in the procurement specification. The procurement specification should also include details of any monetary damages or penalties that accrue to the acquiring agency for contractor non-compliance with delivery, installation, and performance requirements.

Subsequent to acceptance testing and formal acceptance, the acquiring agency will own the subsystem or system and will be responsible for its operation and maintenance. The procurement contract should specify how hardware and software licensing agreements, extended warranties, operations and maintenance agreements, and spares provisioning are to be handled, including payment provisions, before conditional or final acceptance is granted by the acquiring agency.

4.4.8 Regression Testing

Additional testing is required whenever new components and/or subsystems are incorporated into the system. These tests are necessary to assure that the added components comply with the procurement, installation, and functional specification requirements and perform as required within the integrated system without degrading the existing capability. A series of regression tests, which are a subset of existing tests, should be performed to re-test the affected functionality and system interface compatibility with the new component. The goal of regression testing is to economically validate that a system modification does not adversely impact the remainder of the system.

The regression tests assure that the "new" system continues to meet the system performance requirements and provides the required added capabilities and functionality. The regression tests are selected from the test set already conducted on the existing system for the affected interfaces and functionality with procedural adjustments made to accommodate the new components. It is important to regress to a level of tests and associated test procedures that include the entire hardware and software interface. In most cases, this will require some interface and integration testing at the unit testing level in addition to functional testing at the subsystem and total system levels.
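The following minimal sketch illustrates this kind of traceability-based selection of regression tests. The component names, test identifiers, and the mapping itself are hypothetical examples rather than entries from an actual test set.

```python
# Map each system component/interface to the existing tests that exercise it.
TEST_TRACEABILITY = {
    "signal_timing":       ["ST-04", "ST-07", "RPT-02"],
    "comm_infrastructure": ["COM-01", "COM-03", "FLD-01", "FLD-05", "RPT-01"],
    "dms_control":         ["DMS-02", "DMS-03"],
}

def select_regression_tests(changed_components):
    """Return the set of existing tests to re-run for the changed components."""
    selected = set()
    for component in changed_components:
        selected.update(TEST_TRACEABILITY.get(component, []))
    return sorted(selected)

# A narrow change (e.g., maximum cycle length) touches only the signal timing tests;
# a communications timeout change pulls in the much larger communications test set.
print(select_regression_tests(["signal_timing"]))
print(select_regression_tests(["comm_infrastructure", "signal_timing"]))
```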

The agency should participate in regression testing when a large number of components or a new subsystem is being added or for a major software release that is adding significant new capabilities. In these situations, system operations and performance are likely to be impacted.

Regression tests are also appropriately applied to software modifications. Their purpose is to identify related functionality that might have been adversely affected by the software modification. In complex systems, a software modification can cause unintended consequences in related subsystems.8 The regression tests attempt to identify changes to system functionality and behavior prior to moving the software modification into the production system. Regression testing is also important when there are software "upgrades" to the underlying operating system and COTS products. Again, such upgrades can have far-reaching consequences if processor loading or memory usage is affected.

In regression testing, the keys are economics and the definition of the system boundaries. System changes can impact narrow or broad functionality and must be assessed on a case-by-case basis. For the regression testing to be economical, it must be tailored to the potential impacts of the system modification. For example, a narrow change could result from increasing the maximum cycle length used by a traffic signal controller; timing plan operation would be regression tested, as would reports involving the signal timing. In contrast, increasing a communications timeout value would require extensive testing due to its impact on the communications infrastructure: the timeout change has the potential to disrupt communications with every field device in the system and might not impact system operations until communications channels are fully loaded at some point in the future. Tailoring regression testing to the apparent risks is appropriate for managing the life-cycle costs of the system.

4.5 Test Identification and Requirements Traceability

The System Test Plan identifies and describes the tests to be performed at Levels 4 and 5 (subsystem and system respectively). Lower level tests (i.e., Level 1 - hardware and software unit/component, Level 2 - software build integration and hardware assembly, and Level 3 - hardware software integration tests) are typically developed and executed by the contractor/integrator providing the hardware and software subsystems.

The system specification VCRM (see appendix A) identifies specific tests and test methods to accomplish verification of the functional and operational requirements of the system. Each test covers a specific set of related and/or dependent requirements as well as partial requirements and should be designed to focus the attention of the test conductor, witnesses, observers and the operations and support personnel on a limited group of functions and operations, thereby minimizing the necessary test resources. Some requirements may require verification activities in more than one test. The detailed test descriptions delineate the requirements and partial requirements specifically covered by a particular verification test (or test case).

A requirement is fully verified by the accomplishment of all tests identified for it in the VCRM and is subject to the approval of test results submitted to the acquiring agency.
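As a minimal sketch of that completion check (with hypothetical requirement and test identifiers), a requirement can be treated as verified only when every test traced to it in the VCRM has an approved, passing result:

```python
# Hypothetical traceability: requirement -> tests identified for it in the VCRM.
vcrm_tests = {
    "REQ-101": ["T-4-01", "T-4-03"],   # verified across two tests
    "REQ-102": ["T-5-02"],
}

# Approved test results submitted to the acquiring agency (test ID -> outcome).
approved_results = {"T-4-01": "pass", "T-4-03": "pass", "T-5-02": "fail"}

def requirement_verified(req_id: str) -> bool:
    """True only if the requirement has tests and every one passed with approval."""
    tests = vcrm_tests.get(req_id, [])
    return bool(tests) and all(approved_results.get(t) == "pass" for t in tests)

for req in vcrm_tests:
    print(req, "verified" if requirement_verified(req) else "not yet verified")
```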

4.6 Test Descriptions

Each test description in the system test plan includes the following:

  • A test objective.
  • Test identification.
  • Traceability to the requirements to be verified by the test.
  • Test type.
  • Level (e.g., demonstration at level 4).
  • Test prerequisites and limitations.
  • Identification of special test software or equipment needed to accomplish the test.
  • Hardware/software test configuration.
  • Test responsibilities (e.g., who performs and witnesses the test).
  • Data requirements.
  • Test scenario.

The test descriptions provide the framework and an outline for the more detailed descriptions that are included in the test procedures. Success criteria are provided on a step-by-step basis in the test procedures.
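For illustration, the sketch below captures one such test description as structured data. The field names mirror the bulleted elements above, and every value is a hypothetical placeholder rather than content from an actual test plan.

```python
# Hypothetical test description record for a system test plan entry.
test_description = {
    "test_id": "SYS-ACC-012",
    "objective": "Verify DMS message display on operator command",
    "requirements_verified": ["REQ-DMS-004", "REQ-DMS-007 (partial)"],
    "test_type": "Subsystem acceptance",
    "method_and_level": "Demonstration at level 4",
    "prerequisites_and_limitations": ["DMS installation test complete", "Daytime conditions only"],
    "special_test_items": ["NTCIP protocol analyzer"],
    "test_configuration": "TMC workstation, central DMS software, one field DMS",
    "responsibilities": {"conductor": "ITS Consultant", "witnesses": ["Acquiring Agency"]},
    "data_requirements": ["Pre-stored test messages", "Device/channel assignments"],
    "scenario": "Operator selects and activates a stored message; display is confirmed in the field",
}
```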

4.7 Test Requirements and Resources

The test description identifies specific requirements and resources necessary for conducting the test. These include who will conduct the testing and what their responsibilities are prior to, during, and following a test; what the test configuration is; what test equipment will be needed; what the test data requirements are; and what the testing schedule is.

4.7.1 Personnel and Responsibilities

The following paragraphs describe roles and responsibilities of the key system-level test personnel. Depending upon the size of the test program, several roles may be assigned to the same person (small program) or several team members may perform the same role on a dedicated basis (large program).

4.7.1.1. System Test Director

The agency's system test director approves the system test plan, the test procedures developed to accomplish the tests described by the plan, and the test reports provided to formally document the execution of the tests. Subject to a favorable test readiness review, the test director authorizes the use of system resources necessary to conduct the tests as defined by the test descriptions and procedures and has final approval of the test execution schedule.

4.7.1.2. Test Conductor

The test conductor is probably the most important member of the test team. The agency must be comfortable with and trust the individual selected to have the system knowledge and experience level necessary to represent it and fairly conduct the testing. The test conductor directs and oversees the execution of the system tests described by the system test plan. The test conductor conducts the test readiness review and obtains approval from the test director to schedule the test and commit the necessary resources to execute the test. The test conductor provides test briefings and approves any changes to the test configuration and test procedures. The test conductor assigns test witnesses to monitor the execution of the test procedures, record data, and complete checklists. The test conductor may assign limited duties to test observers, e.g., to affirm that specific events did or did not occur. The test conductor compiles the test data and prepares the test report for submission to the test director for approval.

4.7.1.3. Test Witnesses

Test witnesses observe, verify, and attest to the execution of each step in their assigned test procedure(s) by completing the checklist(s) provided. Test witnesses also record test data (as required) and comments when appropriate. Test witnesses record the actions and observations reported to them by test observers designated by the test conductor to perform a specific test function. The agency should supply some, if not most, of the test witnesses. While not totally impartial, as ideal witnesses should be, agency personnel have a stake in assuring the system performs as intended. Contractor personnel have different motivations: they want to complete the test as quickly as possible while raising the fewest issues.

It is also recommended that there be more than one test witness; often transient behaviors and/or anomalies occur that should be observed but that may be missed by a single observer.

4.7.1.4. Test Observers

Test observers are allowed to observe test activities where their presence does not interfere with the test process. Test observers may also serve a limited role in the test to perform a specific action or to affirm that a specific event did or did not occur. When test observers are requested to perform a specific test activity, they report their actions and observations for the record to a test witness or directly to the test conductor, as instructed prior to the test. The agency may wish to use managers as test observers. This affords them the opportunity to become familiar with the test process and take away firsthand knowledge of the details, issues, and time required to implement a TMS.

4.7.1.5. Test Support Personnel

Test support personnel, i.e., the test executors, perform the test steps as directed by the test conductor by operating the test equipment (if any), all system equipment in the test configuration, and the required user interfaces. Test support personnel may be contractor personnel or authorized acquiring or operating agency personnel who have completed qualification training.9

4.7.2 Configuration, Equipment, and Data Requirements

Each test procedure should carefully specify the test configuration,10 test equipment, and data requirements that are necessary for a successful test. The test configuration (i.e., the specific system equipment and/or software under test) must be consistent with and operationally ready to support the functionality of the requirements being verified by the test procedure. Any special test equipment or test software required by the test procedure must have been pre-qualified and (where applicable) have current certifications or calibrations. Data requirements specified in the test procedure, such as database structures, tables, and specific data items, must be in place and complete. For example, in verifying the capability of a dynamic message sign (DMS) to display a message on command, test messages must have been generated and stored and be accessible to the DMS command and control software.
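A minimal pre-test readiness sketch along these lines is shown below; the equipment names, calibration notes, and data items are hypothetical stand-ins for what a real test procedure would enumerate.

```python
# Hypothetical prerequisites for one test procedure: equipment needing current
# certification/calibration, and data items that must be in place before test start.
required_equipment = {
    "NTCIP protocol analyzer": "calibrated 2024-03",  # calibration date is a placeholder
    "DMS sign simulator": None,                        # None = no current certification on file
}
required_data = {
    "stored DMS test messages": True,
    "device/channel assignments": False,               # not yet loaded
}

def readiness_issues():
    """List items that would prevent the test configuration from being ready."""
    issues = []
    for item, certification in required_equipment.items():
        if certification is None:
            issues.append(f"Test equipment not certified/calibrated: {item}")
    for item, in_place in required_data.items():
        if not in_place:
            issues.append(f"Required data item missing: {item}")
    return issues

problems = readiness_issues()
print("Ready to test" if not problems else "\n".join(problems))
```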

It is important that the test "environment" be reviewed and validated (certified) prior to the start of any testing. This ensures that the test environment can and will measure both anomalies and normal operation, and that failures of the system under test will be apparent.

4.7.3 Schedule

Test scheduling and coordination may be the most important aspect of testing. It involves specific knowledge of what components are ready to be tested, how they are to be tested and in what environment, what resources (time, material, and personnel) are needed for the testing, and what impacts to other operations can be expected. Once a test has been scheduled, a specific set of resources is committed to be available for a specific period of time. The system test plan should provide an overall test schedule indicating which tests should be performed and in what order. It should also specify the calendar time frame and expected test durations. Actual test dates will be set and coordinated by the test director following a test readiness review.

4.8 Test Execution

The acquiring agency must formally approve each verification test and the associated technical test procedures prior to test execution. Test pre-planning for the specific test should commence at least 15 days11 prior to the planned test start date. During this pre-planning period, the test conductor conducts a test readiness review. The readiness review evaluates the test description, current system configuration, test equipment, and training to determine if the required test conditions can be met. This review includes system problem/change requests (SPCR) and other configuration management data that may be applicable. The test conductor makes a determination of which test procedures (if any) must be modified to accommodate differences in the configuration as specified in the test description and the probable test configuration at the time of the test. Test procedures requiring modifications are redlined as appropriate and submitted for approval with the other pertinent test readiness review data. A summary of that readiness review, including any configuration differences, redlined procedures, and a formal test schedule and resources request, is submitted to the acquiring agency for approval. That approval sets the formal test start date and commits system resources and personnel to support the test.

It is recommended that this readiness review be taken seriously by all parties concerned. Testing is expensive for both the agency and the contractor; all too often the contractor "gambles" that it will be ready on a specific date, yet when the consultant and agency personnel arrive, the test environment may be flawed (incapable of performing some of the tests) and the contractor may never have actually attempted to perform the test. This can be a blueprint for disaster and a failed test. Such failures delay the project, inconvenience everyone, and start the testing program on a sour note.

Prior to the execution of each test, the test conductor provides a test briefing (orally and/or in writing, as appropriate) to all test participants, including test witnesses, observers, and operations and support personnel. The briefing reviews the test description, including the requirement(s) to be verified, required test configuration, expected duration, and the test responsibilities of each participant. Test procedure checklists are provided to the test witnesses designated to perform a test recording function. Any test limitations, special circumstances, or test configuration and procedure changes are noted at this time. Unless specifically precluded in the test procedures or at the direction of the test conductor, once initiated, the test should be executed to planned completion, even if some individual steps cannot be completed (or fail). A decision to rerun the entire test, or a partial test to complete those steps skipped, will be made after the test conductor terminates the test. Test completion is determined from a review of the test report.

During the execution of the test, designated test witnesses record data and comments as appropriate, and complete each step of the test procedure(s) on the procedure checklists provided. Other data collected during the test (not recorded on the checklists) is identified in the checklist and marked with the test identification number, date, and time collected. Completed checklists and data collected are submitted to the test conductor for inclusion in the test report.

Any conflicts that occur during the execution of a test should be resolved first by the test conductor and then, if necessary, elevated to the test director. Potential sources of conflict include impacts to ongoing operations or other testing; whether or not to allow a series of test steps to be repeated following a failure, if that failure can be attributed to missing a test step, incorrectly following a test step, or executing test steps out of order; and terminating a test in progress due to the failure of test equipment, significant change in test conditions or availability of test resources, or too many test steps having unsuccessful results. The test director, acting on behalf of the agency, is the final authority for resolving testing conflicts.

The test conductor should convene a test debriefing as soon as possible following the termination of the test. Test witnesses and observers are polled for their comments on test conditions, problems, and general observations. A preliminary test completion status is determined from this debriefing to allow for planning of future testing. The agency should have a presence at the test debriefing to get a heads-up on any problems discovered and potential schedule impacts.

If the testing will span multiple days, it is suggested that a test de-briefing be conducted at the conclusion of each day to review notes and identify potential problems and anomalies.

4.8.1 Test Procedures

Detailed technical test procedures are prepared for each system test (see Appendix B for a sample test procedure). A single test may involve multiple procedures, each of which includes a test procedure checklist. This checklist contains the test identification number, enumerated test steps with the expected outcome or response (success criteria) for each step, space (as appropriate) for comments or recording test data, and a place to initial the completion of the step. The original checklist with initials becomes a part of the formal test report. The agency has the responsibility to review and approve system test and acceptance procedures.
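The brief sketch below shows one way such a checklist could be represented; the step text, expected outcomes, and identifiers are hypothetical.

```python
# Hypothetical test procedure checklist: each step carries the expected outcome
# (success criteria), space for recorded results/comments, and witness initials.
checklist = {
    "test_id": "SYS-ACC-012",
    "steps": [
        {"step": 1,
         "action": "Operator selects stored test message MSG-01",
         "expected": "Message appears in the preview pane",
         "result": "",     # completed during test execution
         "initials": ""},
        {"step": 2,
         "action": "Operator activates the message on the field DMS",
         "expected": "Sign displays MSG-01 within 30 seconds",
         "result": "",
         "initials": ""},
    ],
}
```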

It may be convenient to construct an on-going test report using a three-ring binder to which various supporting documents can be added. Examples include strip chart records, calibration charts, schematics, and witness notes. Each page of the test procedure with the observed results and witness initials is also included in the binder as the test is completed. It is also recommended that the agency take pictures of the test environment and results to facilitate later reconstruction of reports. Digital photos are inexpensive to take and the record could be invaluable for later analysis.

4.8.2 Test Tools

Test software, test equipment, hardware/software simulators, data generators (if any), etc. must be pre-qualified,12 have current certifications or calibrations, and be approved for a specific intended application before use during verification testing. Test procedures must refer to specific test software, if and where applicable, and must include the necessary steps to load, initialize, and operate the test software. To ensure the repeatability of the test conditions, the test software and operational data, including specific "scripts" to be used during verification testing, must be under configuration management control prior to test start.

4.8.3 Problems Occurring During Testing

System problem/change requests (SPCRs) are written for hardware and/or software that malfunctions or fails during the test. Where possible or necessary to continue a test, and at the direction of the test conductor, maintenance support personnel should attempt to restore hardware functionality. At the direction of the test conductor, the software developer or the agency's ITS consultant makes a determination whether or not to attempt a retry of the software function. No software corrections may be made during the test. Data problems (i.e., initial values, thresholds, limits, channel assignments, etc.) may be corrected if necessary to continue the test and meet the test objectives, but only if and where provisions have been made to add, edit, or update these data using a standard user interface or data entry screen. Otherwise, the problem should remain uncorrected for the remainder of the test. Data problems must be noted for CM review. No software changes, data or code, should be made during the test using software debugging tools.13

SPCRs written during the test are listed in the formal test report and copies provided as supplemental reports.

4.8.4 Test Reports

The test conductor is responsible for the preparation and submittal of test reports to the acquiring agency for approval. Test reports should be submitted within 15 days of the test execution. A determination of test completion is made by the acquiring agency from a review of the test report.

Formal test reports are prepared for each functional element or system test. As a minimum, the test report must:

  • Identify the test being reported by test title and unique test identification number.
  • Provide a summary of test activities, including date and time of the test, test witnesses and observers present.
  • Provide a brief discussion of any exceptions or anomalies noted during the test.
  • Include the original copy of the completed test procedure checklists, and data collected and recorded during the test.

SPCRs written during the test and any supporting analyses conducted on test data collected and recorded should be documented separately and referenced in the test report. Copies of the SPCRs and a copy of each supporting analysis should be provided with the test report.

4.9 Summary

This chapter presented a testing tutorial that included many of the testing concepts and the terminology that you are likely to encounter in your testing program. The five basic verification methods (i.e., inspection, certificate of compliance, analysis, demonstration, and formal test) were defined and their applications explained. A multi-level, building block testing approach was introduced that delineated the types of testing that will be conducted and which organization(s) have primary responsibility for which tests at each level. Next, each test type was described and linked to its test level. Then the basic elements of the test procedures, including test identification and requirements traceability, test descriptions, test requirements, and resources, were presented. Finally, the execution of a test procedure, including what is required and who is involved, was discussed along with the content of the final test report.


8 Note that due to the complexity of today's systems, the change may have unintended consequences in apparently unrelated portions of the software or system functionality. This can be caused by such problems as processor loading, event sequencing, and data/table overflows that are not apparent. Therefore, regression testing should be broader when the potential for such problems exists.

9 It is assumed that contractor personnel will have the requisite knowledge and experience to operate the test equipment and system equipment, and follow the test procedures. If agency operations personnel are used, they should have specific operations training on the system functions being tested, otherwise the test steps must be extremely detailed. They also need an understanding of what must be done and the rules of engagement, such as who can touch the equipment, what to record, what to observe, and who to report to.

10 This should typically include a complete wiring diagram and perhaps a physical diagram where external devices or test stimuli are necessary. Such test configurations must be sufficiently documented that they show all normal connections and potential unintended connections.

11 This is under ideal conditions. However, when such time frames must be abbreviated due to project schedules, equipment and personnel availability, etc. it is important that the test director and the agency remain comfortable with the test plan. The 15 days is generally necessary to ensure that everyone understands the test and that all resources are available to ensure that the test can be run without interruption.

12 This must include certification that the test tool or instrumentation can and will capture the specific measurement and that anomalies or failures will be detected or become visible during the test.

13 Note that software debugging tools tend to perturb the operation of the software by altering the timing and machine utilization. As a result, the observed behavior may not be representative of normal system operation. Oftentimes, activating software debugging tools will have the effect of masking or hiding the problem where timing is suspected.
