Office of Operations
21st Century Operations Using 21st Century Technologies

2. Testing: A System Engineering Life Cycle Process

2.1 Overview

This chapter discusses testing within the system life cycle and the system engineering process. The focus is on the information needed to produce a testing program that fits the needs of the acquiring agency and the complexity of the system.

The system engineering process describes the application of technology and management practices to building complex systems of equipment, software, people, and procedures to satisfy specific needs. The process is tailored to the needs being satisfied and the complexity of the envisioned system. In some cases, the needs may be simple, well defined, and satisfied by modifying an existing system. At the other end of the spectrum are cases where the needs are extensive, broad, and diverse, resulting in a new, complex system that may have unanticipated impacts and relationships. The system engineering process provides a process framework to address problems on both ends of the spectrum.

As the quantity of software-intensive systems has exploded over the past 60 years, many excellent references for the system engineering process have been written. Information on system engineering can be obtained from resources like the FHWA's Building Quality Intelligent Transportation Systems Through Systems Engineering and from other resource organizations listed in the last chapter of this guide. This document focuses on the testing process and its role within the systems engineering process.

The reader is advised that not all of the items described here will be applicable to every project, as the size and complexity of the project will determine what is applicable. In addition, unless the acquiring agency has extensive technical expertise and experience in systems engineering, it should anticipate utilizing outside expertise. This should not dissuade the agency from augmenting its staff or developing these skills in-house, but one should not begin a complex TMS project without expert assistance and an in-depth understanding of what is involved in the system engineering life-cycle process.

2.2 The System Life Cycle

The systems engineering process (SEP) is a methodology and tool for managing a system's life cycle, starting with concepts and ending with the system's retirement. It is a highly structured method to facilitate the development, maintenance, refinement, and retirement of dynamic, large-scale systems consisting of both technical components (equipment and software) and non-technical components (people and procedures). Figure 2-1 is the Systems Engineering "V" Model for ITS, which details the various stages that occur within the system's life cycle.

Figure 2-1. Systems Engineering "V" Model

While testing is shown as one stage of the life cycle, it is important to understand that testing is also a continuous process within the life cycle. Testing begins with writing the requirements; each requirement must be written in a manner that allows it to be tested. During the design stages, testing will be a consideration as design trade-offs are evaluated for their ability to satisfy the requirements. New requirements may emerge from the designs as choices are made to satisfy the requirements within the project's constraints. Hardware components, software components, subsystems, and systems will be verified during the implementation and testing stages. Final system-level tests will be performed to accept the system and demonstrate the system's readiness for production service. However, testing activities will not end once the system is in operation; they will continue as the operations and maintenance staff perform corrective, adaptive, and other system maintenance activities.

The following sections will discuss these process activities in more detail with respect to the input information, selected processes, and the results from the testing process while verifying the transportation management system.

2.3 Test Planning

This section provides an overview of test planning starting with the information that goes into developing the test program. It then summarizes testing activities within the system engineering process and describes the products produced by these activities. Emphasis is placed on the system requirements that will serve as the baseline for the development of the overall test plan for the completed system.

2.3.1 Inputs

Testing starts with well-written requirements. From the testing perspective, the system specification document must be written with "testable" requirements. Without testable requirements, the testing budget cannot be used efficiently to ensure that the system performs according to its design. The testing process verifies that each requirement has been met; if a requirement is not testable, there are no grounds for acceptance of the work. Table 2-1 lists various documents that will contain requirements for the system.

Table 2-1. Requirements Sources
Documents that Will Contain Requirements for the System
  • Contract Documents
  • Concept of Operations
  • System Definition
  • System Design Report (SDR)
  • Hardware Requirements Specification (HRS)
  • Software Requirements Specification (SRS)
  • Interface Control Documents (ICD)
  • Product Data Sheets

It is important to realize that the requirements will be repeated in the various documents as higher-level requirements are carried down into more detailed design documents.
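As a concrete illustration of carrying requirements down through documents, the sketch below (not from this guide; the document names and requirement IDs are hypothetical) records which lower-level requirements each high-level requirement claims, and flags any derived requirement that no parent claims:

```python
# Illustrative sketch: tracing a high-level requirement down into
# lower-level documents so the repeated copies can be cross-checked.
# All IDs and requirement text here are hypothetical.

trace = {
    "SYS-001": {  # high-level requirement in the system specification
        "text": "The system shall display detector data within 2 seconds.",
        "derived": ["SRS-014", "ICD-003"],  # children in lower-level docs
    },
}

derived_reqs = {
    "SRS-014": "The map subsystem shall refresh detector icons within 2 s.",
    "ICD-003": "Detector status messages shall be delivered within 1 s.",
}

def orphaned_children(trace, derived_reqs):
    """Return derived requirement IDs that no parent requirement claims."""
    claimed = {child for parent in trace.values() for child in parent["derived"]}
    return sorted(set(derived_reqs) - claimed)

assert orphaned_children(trace, derived_reqs) == []  # every child is claimed
```

A check like this makes the repetition across documents an asset rather than a maintenance hazard: any copy that drifts out of the trace is surfaced immediately.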

Once the requirements have been documented and approved (i.e., accepted by the project stakeholders and the project management), the process of requirements analysis can begin. In this phase, the requirements are analyzed and decomposed by type (e.g., operational, functional, interface, physical, electrical, etc.) and then allocated to hardware or software subsystems for implementation.

Operational requirements typically define and describe how the system operates and what features are visible to system operators and other users. These requirements are usually derived directly from the concept of operations document and are at a high level.

Functional requirements describe and specify the underlying functionality behind the system's operational features, specific capabilities to be provided by designated subsystems, user interactions, equipment types and quantities to be interfaced, and other system characteristics. These are the critical functional requirements. Less critical, lower-level functional requirements will be identified and derived during the hardware and software design phase. Note that it is important that performance requirements for the system be clearly identified where they matter. For example, the screen size, refresh rates, and screen-to-screen navigation times may be critical for a graphical user interface. These must be quantified to be testable; simple statements such as "easy to use" or "refresh quickly" provide little if any measurable guidance for either the designer or the test developer.
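To illustrate the difference, the hypothetical sketch below shows how a quantified refresh-time requirement becomes directly testable; the 2-second budget and the render stand-in are assumptions for illustration, not values from this guide:

```python
# Illustrative sketch: "refresh quickly" cannot be verified, but a
# quantified 2-second budget can. The simulated_render() stand-in is
# hypothetical; a real test would drive the actual GUI.
import time

REFRESH_BUDGET_S = 2.0  # quantified requirement: screen refresh <= 2 s

def measure_refresh(render):
    """Time one screen refresh using a caller-supplied render callable."""
    start = time.perf_counter()
    render()
    return time.perf_counter() - start

def simulated_render():
    time.sleep(0.01)  # stand-in for real screen rendering work

elapsed = measure_refresh(simulated_render)
assert elapsed <= REFRESH_BUDGET_S, f"refresh took {elapsed:.2f} s"
```

The test either passes or fails against the stated number; there is nothing for the designer and the test developer to argue about.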

Interface requirements detail the data exchange between the hardware and software subsystems and with other external systems. Here the data types, formats, message structure, transfer methods, and protocols are defined to the level of detail necessary to begin hardware and software design. Interface requirements between this system and other external systems are captured in interface control documents (ICD), which are agreements between two or more parties that detail the physical, functional, and electrical interface requirements of the data exchanges and the data handling responsibilities of each party to the agreement.

Once again, it is important to include performance requirements (data rates, connect time, retry times, etc.) in the interface requirements document. It is also important to identify the requirements for "anomalies"; i.e., how to handle problems such as lost packets, poor throughput, intermittent connections, and corrupted data.
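As a sketch of how such anomaly-handling requirements might be exercised, the hypothetical example below tests a bounded retry policy against a simulated intermittent connection; the retry limit and the channel's failure pattern are illustrative assumptions:

```python
# Illustrative sketch: verifying an interface anomaly requirement --
# a bounded retry policy for an unreliable link. The flaky channel is
# hypothetical test scaffolding; the retry limit would come from the ICD.

MAX_RETRIES = 3  # e.g., drawn from the ICD's retry requirement

def send_with_retry(send, max_retries=MAX_RETRIES):
    """Attempt a send; retry on failure up to max_retries more times."""
    for attempt in range(1 + max_retries):
        try:
            return send()
        except ConnectionError:
            if attempt == max_retries:
                raise  # retries exhausted; surface the failure

def make_flaky_channel(failures):
    """Channel that raises ConnectionError 'failures' times, then succeeds."""
    state = {"left": failures}
    def send():
        if state["left"] > 0:
            state["left"] -= 1
            raise ConnectionError("simulated lost packet")
        return "ACK"
    return send

assert send_with_retry(make_flaky_channel(2)) == "ACK"  # recovers within budget
```

Testing both sides of the retry budget, recovery within it and failure beyond it, is exactly the kind of boundary coverage the anomaly requirements should demand.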

As the system engineering process moves forward, new requirements will emerge. They may come from the resolution of development issues, or simply from a good idea that arises while personnel discuss the merits of the system's operation. Regardless of how they are identified, the important point is to have a process in place to recognize new requirements formally and roll them into the existing requirements. This process is incorporated into the configuration management plan, which defines the approved process for the development, review, and approval of changes to the system and any new or modified functional requirements to be implemented. The configuration management plan also defines how those changes are documented, tested, and accepted.

It is important to recognize that new requirements will require additions to the test procedures and may affect the test program already developed. These are the natural consequence of change; it is important that such changes be managed and that impacts on the testing program be assessed when considering the change because the cost of the change may affect many other aspects of the project. (Note that change is inevitable; one probably does not want to prevent changes, but a plan to manage change as it occurs is important.)

2.3.2 Test Plan Preparation

Test planning is a key element in the acquisition and deployment of a TMS. The acquiring agency must become involved in this aspect of the test program to ensure that the testing program truly reflects the agency's understanding of the requirements. Where the contractor or vendor is assigned the responsibility and authority to develop the test plan, the test plan and test procedures are often carefully crafted so that few tests fail; the system or device under test is rarely stressed or tested at the boundaries (maximum and minimum conditions) to determine what breaks so that it can be fixed. This does not mean that the agency must develop the test plan itself, but it does mean that the agency must carefully review the test plan and test procedures prior to approval to ensure that meaningful testing is performed. The agency should seek professional assistance in this area if it is not comfortable with the risk of performing this task in-house. After all, the agency wants to be assured that the system will operate under all conditions, not just ideal conditions, and that the testing verifies all requirements (where practical), not just the vendor-selected requirements.

The system specification developed during the requirements definition phase establishes the requirements baseline for the development, implementation, qualification testing, and acceptance of the system. The system test plan defines and documents a test program that assures all of the elements of the system are subjected to a structured review process specifically directed at verifying compliance with the physical, functional, and operational requirements at each level of system development and deployment.

Successful testing at one level is a prerequisite for testing at the next higher level so that the determination of compliance is cumulative and leads to successful demonstration of system compliance. The system test plan covers all levels of testing for both hardware and software and defines the test responsibilities of both the providers and installers of the system.
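The cumulative-compliance rule can be sketched as a simple gate; the level names and results below are hypothetical, not drawn from this guide:

```python
# Illustrative sketch: a test level may begin only when every lower
# level has passed, so compliance accumulates up the "V".

LEVELS = ["unit", "integration", "subsystem", "system", "acceptance"]

results = {"unit": "pass", "integration": "pass", "subsystem": "fail"}

def may_start(level, results):
    """Return True if all levels below 'level' have passed."""
    lower = LEVELS[:LEVELS.index(level)]
    return all(results.get(lvl) == "pass" for lvl in lower)

assert may_start("subsystem", results)   # unit and integration passed
assert not may_start("system", results)  # subsystem has not yet passed
```

The same gate applies to both hardware and software test threads before they converge at system-level testing.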

2.3.3 Testing Activities

Testing activities include both tasks specific to the project's current life-cycle stage and processes that span the entire life cycle. For example, unit testing occurs during the implementation stage, while management of the Verification Cross Reference Matrix (a.k.a. Traceability Matrix) spans many stages of the life cycle and would occur in parallel with unit testing. This section highlights the many testing activities that occur during each system life-cycle stage and those that span stages.

Requirements

The testing staff should become involved during the requirements development phase of the project. The testing staff's primary goal during requirements development is to ensure that each requirement can be tested. The verification cross reference matrix (VCRM) is written during this stage, and it becomes a key project document to be maintained under configuration management along with the requirements documents.

Design

Design reviews are another area requiring a significant participation of the testing staff. As the design progresses, the test team tracks the allocation of requirements to the configuration items presented in the high-level design documents. A preliminary design is generated to define the architecture of the system and provides for a high-level distribution of the requirements to the system components. A detailed design is then prepared to provide a complete description of the hardware and software design by detailing the functions, behavior and interfaces for all hardware components and computer software components within their respective configuration items. Each hardware and computer software component defined is traceable back to the requirements allocated to a specific configuration item from the respective hardware requirements specification or software requirements specification. The detailed design documents describe the hardware and software designs at a level suitable for the start of prototyping and unit testing.

While it is not necessary for the acquiring agency to attend all of these intermediate reviews, they do afford the agency an opportunity to become familiar with the system architecture, components, and interfaces being developed, and they occur early enough that the agency can redirect the design if necessary. Agency participation also sets the stage for understanding the testability of the design and how the test procedures will verify that the implementation will meet the requirements.

A critical design review (CDR) is scheduled when the detailed design documents are completed. The review is typically conducted by the hardware or software development team leader and should include program management, systems engineering, integration and test, and operations staff representatives. Due to its critical nature, this review should be attended by the acquiring agency, and agency attendance should be codified as a contractual requirement. The CDR is the last opportunity to review the hardware and software designs and identify deficiencies before they become errors and omissions that present themselves in subsequent testing phases. It cannot be stressed enough that identifying and resolving the deficiencies at this time is economically vital to the successful financial management of the project.

It is important that the agency employ the resources necessary to feel confident during the CDR and be prepared to "sign off" on the results. The agency must be able to relate the detailed design to the requirements; the agency needs to recognize that the frame of reference of the designer/implementer (often the programmer) is very different, and "assumptions" that may seem obvious to the agency must be concisely documented to avoid omission or misinterpretation by the designer/implementer. This is another instance where early development of the test procedures should be stressed. The test procedures will be based on the requirements and represent the agency's view of the final expected operation.

From the test program perspective, the CDR and approval of the detailed design represent a significant milestone. From this point forward in the project, testing will be performed on actual hardware and software. It is also important to be able to trace all of the requirements to an element of the design; this is an opportunity for both the developer and the agency to ensure that all requirements have been included in the detailed design and that the test procedures are being developed to verify the original requirements.

Implementation and Unit Testing

During the implementation phase of the project, many hardware and software components are built and the unit-testing process begins. The acquiring agency must decide the appropriate level of unit testing to fund (i.e., the costs for participation of agency personnel or experts hired by the agency, travel expenses, equipment, etc.). The agency may leave the unit-testing program to the vendor, may require reviews of the vendor's unit testing documentation, or may decide to participate in the unit testing. These choices are discussed in detail in later chapters but, for now, the emphasis is on the start of the test program.
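As a hypothetical illustration of the level of detail a vendor's unit-testing documentation might contain, the sketch below unit-tests a single software component, including the maximum and minimum boundary conditions; the range_check() component and its limits are invented for illustration:

```python
# Illustrative sketch: a unit test for one software component. The
# component (a detector volume plausibility check) and its limits are
# hypothetical.

def range_check(volume, min_vph=0, max_vph=3000):
    """Flag detector volume readings outside plausible bounds (veh/hour)."""
    return min_vph <= volume <= max_vph

def test_nominal():
    assert range_check(1200)

def test_boundaries():
    # exercise the maximum and minimum conditions, not just nominal values
    assert range_check(0) and range_check(3000)

def test_out_of_range():
    assert not range_check(-1) and not range_check(3001)

for test in (test_nominal, test_boundaries, test_out_of_range):
    test()  # every unit test must pass before integration proceeds
```

Reviewing vendor unit tests at this level of granularity is one of the cheaper ways for the agency to gauge the rigor of the overall test program.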

The steps to implementing the hardware design are relatively straightforward: there may be a prototype development, which is evaluated and tested against the design and construction requirements identified in the procurement specifications. This phase is generally followed by the development of production units, which will be the devices used by the project.

Fabrication of the production units is next. This activity should be monitored or tested to be sure it is consistent with the design and construction standards in the procurement specifications. Once the production units are completed, the agency generally conducts detailed unit testing of the device(s). Embedded firmware (i.e., software that is an integral part of the hardware component and typically contained on a computer chip within the device) that represents a new or modified design should be treated like software and subjected to the same rigorous development and test program as new or modified software. Hardware components are combined and integrated into deliverable hardware configuration items that are defined in the detailed design.

Creation of the software source code also begins with the acceptance of the CDR. This can be an iterative process as the design matures, particularly where performance issues are key elements of the design and where prototyping of graphical user interface (GUI) screens is necessary. Coding, including code modifications to existing baseline source code, is accomplished in accordance with the software development plan and the software design and coding style standards specified in the procurement specifications for the target computer platform(s). Software components are combined and integrated into deliverable computer software configuration items that are defined in the detailed design.

Build is the process of combining code units and data structures into executable components for the purposes of testing their interfaces with each other and with system resources. This is a necessary step in developing an operational version of the software system. A software build typically consists of multiple computer software configuration items and is loaded and tested together on the development computer system.

Subsequent to successful unit testing and integration in the manufacturing environment, factory acceptance testing can begin for those components that are not standard products, are not on a qualified products list, or require a unique test environment that cannot be easily achieved at the ultimate installation site (e.g., dynamic message signs). Components delivered to the site for installation should be subjected to receiving inspections and functional testing as detailed in the procurement specification. Conditional acceptance (and partial payment) can be given with respect to delivered quantities, but final acceptance (and some payment) should be reserved until subsystem operational testing, including any required burn-in periods, has been completed.

It is important to remember that there may be considerable embedded software in most of today's ITS devices; as such, testing both at the factory and on-site should be extensive and should always test the complete product. There have been situations where vendors have delivered complete devices with "diagnostic" software that could verify proper hardware operation, but the final device functionality was not included. Hence, the agency paid for a product that did not meet its specifications because many of the required operations had not been completed. This can occur when a vendor is seeking some cash flow for large-scale devices that may take months to be installed. How the agency deals with this situation should be negotiated and managed with the vendor, but the agency should recognize that it is at risk because the delivered product may not meet the contractual requirements. If the vendor should "disappear" before system completion, the agency may be in a position where it is difficult to have the product "finished" and made operational.

Following successful integration in the development environment, a delivery release version of the software system is generated for installation in the operational (production) environment. Detailed installation procedures should ensure that the new software version can be installed in the operational environment and replace the previous version with a minimum disruption to ongoing operations.

Integration typically refers to bringing together the hardware and software subsystems to achieve full system functionality. The system hardware and computer software configuration items are integrated into fully functional subsystems, which provide the first opportunity to exercise and test hardware/software interfaces and verify operational functionality in accordance with the specifications.

Integration Testing

The system test plan defines the test methodology for both the hardware and software systems comprising the TMS. It describes the testing approach and the levels of testing that are necessary to verify compliance with all the system specification requirements. This testing will be based on the methods of verification that are included in the system specification's requirements verification cross reference matrix. The top-level descriptions of the system test procedures in this plan typically assume that all applicable requirements are implemented prior to start of system level tests. However, most systems are deployed incrementally; hence, successive versions of the system test plan will be necessary, each describing the current deployment's testing requirements. The system test procedures that are developed must be tailored to the specific system configuration planned for each stage of deployment. As a result, the system test procedures that follow from the test plan, the test descriptions, test procedures, and test steps will reflect the verification process for only those requirements and partial requirements actually implemented at the time of the test. For subsequent deployments, the test procedures will need to be modified to incorporate the expanded requirement sets included in those deployments.
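One way to tailor test procedures to a deployment stage is to filter the VCRM down to the requirements actually implemented at that stage. The sketch below is illustrative only; the requirement IDs, verification methods, and stage contents are hypothetical:

```python
# Illustrative sketch: selecting test procedures for an incremental
# deployment by filtering the verification cross reference matrix
# (VCRM) to the requirements implemented at this stage.

vcrm = [
    {"req": "SYS-001", "method": "Demonstration", "procedure": "TP-01"},
    {"req": "SYS-002", "method": "Test",          "procedure": "TP-02"},
    {"req": "SYS-003", "method": "Inspection",    "procedure": "TP-03"},
]

stage_1 = {"SYS-001", "SYS-003"}  # requirements implemented in stage 1

def procedures_for_stage(vcrm, implemented):
    """Return the test procedures applicable to one deployment stage."""
    return [row["procedure"] for row in vcrm if row["req"] in implemented]

assert procedures_for_stage(vcrm, stage_1) == ["TP-01", "TP-03"]
```

For each subsequent deployment, the implemented set grows and the selection is regenerated, so the test procedures expand in step with the requirement sets actually delivered.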

Acceptance testing completes the formal testing process. The acceptance test is the responsibility of the agency and is the last opportunity to make sure that the system's equipment and software meet the agency's performance and operational needs and are in full compliance with all the requirements identified throughout the project. From the vendor's standpoint, satisfactory completion of the acceptance test and approval by the agency means that the vendor has completed its contractual obligation to the agency and is due final payment. Once the acceptance test has been completed, the agency "owns" the system and is responsible for its operation and maintenance. Typically, it also signifies the start of the warranty period and the transfer to the agency of the hardware and software support contracts and licenses maintained by the vendor during the system development and implementation phases.

For a large, complex TMS that will be implemented incrementally over time, acceptance testing will most likely involve a number of vendors and the agency. The agency should consider conducting a number of lower level acceptance tests to accept individual components or subsystems and allow vendors to receive progress payments or partial payment at delivery, following site installation, or following initial subsystem operation. This strategy affords the agency the opportunity to start some operations early and gain operational experience while other elements of the system are being developed or deployed.

Operations

There are two aspects to systems operations that are critical to the long-term success of any complex system: 1) problem reporting and 2) configuration management or change control.

Any anomalous system behavior or suspected problems with system hardware or software functionality should be recorded on a System Problem/Change Request (SPCR) form (see Appendix C for an example) and reported promptly by operations and maintenance personnel. Problems with a high operational impact should be reported to the operations shift supervisor or TMC manager immediately. It is important that as much information as possible about the suspected problem and the conditions at the time be recorded during or soon after the event. Problem investigation may require repeating an existing test procedure, modifying one, or creating a new one to gain enough information to isolate and resolve the problem. Problem resolution will almost always require a change to the current baseline system, possibly affecting operational procedures, maintenance actions, training, hardware and/or software, and/or system requirements. Problem clearance will require testing to verify that the problem has been resolved. If requirements change, so must the test procedures that verify those requirements have been met. Finally, regression testing (see Section 4.4.8) must be performed to ensure that no additional problems have been introduced.
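A minimal sketch of recording an SPCR and enforcing an orderly status workflow might look like the following; the field names and workflow states are hypothetical, and a real SPCR form (such as the example in Appendix C) carries far more detail:

```python
# Illustrative sketch: an SPCR record whose status may only move
# forward through an approved workflow, preserving an audit history.
# States and fields are hypothetical.
from dataclasses import dataclass, field

WORKFLOW = ["reported", "approved", "implemented", "verified", "closed"]

@dataclass
class SPCR:
    spcr_id: str
    description: str
    impact: str                      # e.g., "high operational impact"
    status: str = "reported"
    history: list = field(default_factory=list)

    def advance(self, new_status):
        """Allow only single forward steps through the approved workflow."""
        if WORKFLOW.index(new_status) != WORKFLOW.index(self.status) + 1:
            raise ValueError(f"cannot move {self.status} -> {new_status}")
        self.history.append(self.status)
        self.status = new_status

spcr = SPCR("SPCR-0042", "Map display freezes on refresh", "high")
spcr.advance("approved")
spcr.advance("implemented")
assert spcr.status == "implemented"
assert spcr.history == ["reported", "approved"]
```

Forcing every change through "approved" before "implemented" is the mechanical counterpart of the rule that only approved SPCRs may be acted upon.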

The SPCR provides a control point for managing corrections and changes to the system. Only approved SPCRs should be implemented; otherwise, chaos will soon follow due to loss of change management and configuration control. It is important that problems be resolved in a controlled and orderly manner under the direction of the configuration control board (CCB) to ensure that the appropriate corrective actions are identified and followed so that problem resolution is both timely and cost effective. Because there is almost always a cost and schedule impact to all but the most routine problem resolution and maintenance activities, it is necessary to thoroughly understand and evaluate impacts to both ongoing and future operations.

Strict attention to problem reporting and change control will extend the lifetime and usefulness of the system and preserve its operational functionality as intended.

The operating agency must also be mindful that many of today's systems will experience anomalies during their lifetimes. After all, the basic platforms (computers and operating systems) do not exhibit the stability of systems used a decade ago. How often does one need to reboot a workstation? The agency should work with the software supplier to identify those problems needing immediate remedial action as well as those for which there may be acceptable work-arounds until the problem can be identified and fixed in subsequent releases. It is also worth noting that some anomalies may be irreparable because they are a function of problems in the underlying operating system or other third-party software (e.g., databases, development tools, browsers, run-time libraries), and the ATMS software provider may be at the mercy of those vendors to correct the problem.

As noted above, software changes will necessitate significant regression testing, with the associated costs. Unless the problem is mission critical, the fix should be packaged with managed software releases, at which time more extensive testing can take place.
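A bare-bones regression check of this kind might compare current outputs against a baseline recorded from the previously accepted release. In the illustrative sketch below, both the function under test and the baseline values are invented:

```python
# Illustrative sketch: a regression check run before accepting a
# managed release -- current outputs are compared to a recorded
# baseline. The function and baseline are hypothetical.

def classify_speed(mph):
    """Function under regression test: bin a speed reading."""
    if mph < 30:
        return "congested"
    if mph < 50:
        return "slow"
    return "free-flow"

# Baseline captured from the previously accepted release.
baseline = {10: "congested", 35: "slow", 65: "free-flow"}

def regressions(fn, baseline):
    """Return inputs whose output no longer matches the baseline."""
    return [x for x, expected in baseline.items() if fn(x) != expected]

assert regressions(classify_speed, baseline) == []  # no regressions found
```

A non-empty result does not necessarily mean the release is wrong, only that a behavior changed; each mismatch must be traced to either an approved change or a defect before the release is accepted.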

Operational activities must also be planned for and conducted rigorously for the long-term welfare of the system. System policies and procedures governing backup and data archiving must be designed to accommodate the hardware architecture (e.g., disk drive configuration, use of RAID, clustering, etc.) and the application's tolerance for down-time. The time spent making software backups, archiving system data, and managing the data is not truly appreciated until significant hardware failures or maintenance updates occur. The established policies will affect the time required to properly prepare for maintenance activities as well as to recover from hardware failures. These issues should be considered throughout the system life cycle in order to meet system performance and expense requirements. All computer systems will fail at some time or another; it is critical to establish operational procedures and verify (test) the backup procedures, backup media, and recovery procedures as part of the system acceptance testing.

Maintenance

Maintenance is a system life-cycle process that is also governed by configuration management procedures that are documented in the configuration management plan and the system maintenance plan. Every system maintenance activity will require some level of post-maintenance testing to ensure the operational functionality has been preserved or restored. A detailed system maintenance plan should be developed in concert with the system requirements specification and system test plan since many elements of the system maintenance plan are contractually binding on the various providers of system components and services and, therefore, should be included in the procurement specification.

Hardware maintenance involves repair or replacement of malfunctioning components, usually without affecting the configuration of the system; such activities can be handled outside of the configuration control board using maintenance tickets and work orders. Only those hardware maintenance problems that require a configuration change are recorded on a system problem/change request form and must be worked through the configuration control board (CCB).

Software maintenance involves implementing changes to a controlled software baseline (release version) for the purposes of correcting errors (bug fixes), adapting to environmental changes (both data and interfaces), and implementing enhancements (adding new features and revising or deleting old ones). Once an operational version of the software is placed under configuration control, all changes, whether corrections, deletions, or enhancements, should first be recorded on an SPCR form and submitted to the CCB for approval. Another form of software maintenance involves updates to operating systems and other COTS software; these must be managed in the same manner as a typical bug or new feature because "simple" operating system upgrades can have unexpected consequences. Such updates must be carefully managed and be subjected to regression testing as well.

The system maintenance plan provides the procedures and forms necessary to report and track system maintenance activities. It enables the maintenance staff to perform preventative maintenance actions, to isolate reported malfunctions to the hardware and software component level, and to perform and record component removal, replacement, and/or software fixes and upgrades. It also outlines the procedures and parameters for notification of maintenance personnel as well as their responsibilities. The system maintenance plan also applies to any outside maintenance contractors and vendors or manufacturers.

The maintenance plan should apply to all maintenance activities associated with the operation of the system subsequent to formal acceptance by the acquiring agency. Delivered and/or installed components that have not completed formal acceptance testing are not typically covered by this plan. Where possible, the responsibility for maintenance, including warranty, repair and replacement, and re-installation, rests with the supplier or installation contractor until the acquiring agency has formally accepted the component. Such terms must be clearly spelled out in the procurement specification.

There are a number of routine maintenance issues and activities that should be specifically addressed in the maintenance plan, including:

  • Designating maintenance clearance notifications that specify who is responsible for making the notification and to whom.
  • Implementing hardware and software upgrades, including notifying operations and maintenance personnel of any operations and procedural changes.

It is important that a failure and repair process be established to handle suspect or defective field devices properly. One such approach is to purchase bench testers that connect to a field device and automatically test and verify all aspects of its operation. Bench testers can be used to verify that devices removed from the field are truly defective, and that units returned as "repaired" from the vendor still comply with the original specifications. If this approach is taken, the procurement specifications will need to identify the requirements for the bench testers, which might include module testers (for DMS subassemblies) or full-blown test intersections with calibrated input simulators. If the agency has a significant number of devices, it may want to explore adding environmental testing capability to its shop so that field devices can be subjected to environmental testing either for repairs or as part of an incoming inspection and testing program. If this approach is taken, it will require space, electrical planning, and staff training.

Agencies typically find it useful to maintain a system "test-bed" to attempt to duplicate system anomalies and perform detailed investigations without impacting the operation of the production system. The test-bed is ideal when it is an exact duplicate of the production system; however, financial considerations generally result in the test-bed being a scaled-down installation of processors, communications equipment, and software. The greatest benefit of the test-bed is the ability to repeat tests without compromising the integrity of the production system; this capability is well worth the additional expense. Where the test-bed cannot connect to the production system, it should include simulators to stress test the software and communications infrastructure, minimizing the likelihood of problems when components are installed on the production system.
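The stress-testing role of a test-bed simulator can be sketched as follows: stand-in devices answer polls with an injected fault rate, and the harness tallies failures per device. This is an illustrative sketch only; the device names, fault model, and poll interface are assumptions, not any particular agency's test-bed design.

```python
import random

class SimulatedDevice:
    """Stand-in for a field device (e.g., a DMS controller) on the test-bed."""
    def __init__(self, device_id: str, failure_rate: float = 0.0):
        self.device_id = device_id
        self.failure_rate = failure_rate  # injected probability of a bad response

    def poll(self) -> bool:
        # Return False to simulate a dropped or garbled response.
        return random.random() >= self.failure_rate

def stress_test(devices, cycles: int) -> dict:
    """Poll every simulated device repeatedly and tally failures per device."""
    failures = {d.device_id: 0 for d in devices}
    for _ in range(cycles):
        for d in devices:
            if not d.poll():
                failures[d.device_id] += 1
    return failures

# Repeatable run: a fixed seed lets the same anomaly be reproduced on demand,
# which is the test-bed's main advantage over testing on the production system.
random.seed(1)
fleet = [SimulatedDevice(f"DMS-{n:03d}", failure_rate=0.02) for n in range(50)]
report = stress_test(fleet, cycles=200)
print(sum(report.values()), "failed polls out of", 50 * 200)
```

Because the seed is fixed, the run is repeatable, so a communications anomaly surfaced by the harness can be reproduced exactly while the cause is investigated.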

2.3.4 Products of the Testing Program

The testing program produces many documents to verify that the system requirements are incorporated into the as-built system. These include many "low-level" reports such as individual unit test reports, inventory inspection reports, interface test reports, and sub-system test documents. At a higher level are the verification cross reference matrix and the system acceptance test documents that tie the entire system engineering process to the completion of the contract. The system problem/change request database is another product that should be maintained for the life of the system. It provides a measure of the system's operation and can be an invaluable tool for tracking the resolution of system anomalies.
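At its core, a verification cross reference matrix is a mapping from each requirement to the test procedures that verify it; an entry with no tests flags a coverage gap. A minimal sketch, with hypothetical requirement and test identifiers:

```python
# Each requirement maps to the test procedures that verify it.
# Identifiers are illustrative, not drawn from any real project.
vcrm = {
    "REQ-001": ["UT-07", "SAT-02"],   # verified at unit and system acceptance level
    "REQ-002": ["SAT-02"],
    "REQ-003": [],                    # not yet covered by any test procedure
}

# A requirement with no associated tests cannot be verified and
# therefore has no basis for acceptance.
uncovered = [req for req, tests in vcrm.items() if not tests]
print("Requirements lacking test coverage:", uncovered)
```

Kept in electronic form, such a matrix can be queried in both directions: which tests verify a given requirement, and which requirements a given test exercises.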

Organization of these products will have a significant impact on the success of the overall test program. In many cases these products will be referenced as higher level testing is conducted and during configuration management activities in the maintenance and operation life-cycle stages. It is well worth the effort to maintain these documents electronically so that they can be searched and referenced in an economical manner.

2.4 Summary

Throughout this chapter, the role that testing plays in the acquisition of a TMS and during the life cycle process has been emphasized. At each step, the importance of well-written, testable, unambiguous requirements has been stressed as the foundation for both the procurement and the development of test procedures and test plans for the project. The origins of the requirements have been described, as well as how and where they are documented. The chapter examined how test plans and test procedures are developed from these requirements and used for verification while the system is being developed and implemented, as well as later during normal operation and maintenance. In addition, the importance of problem reporting and configuration management was discussed and identified as critical to the long-term success of the project.

To reiterate a basic theme of this handbook, testing is about verifying the requirements; without testable requirements, there is no basis for a test program. The following chapters examine the practical aspects of system acquisition and hardware, software, and system testing.

1 Of course, this depends on the payment terms of the contract.

2 Ownership in this case refers to the project's changing from an implementation to an operational phase. Contract terms should anticipate this event and explicitly define each party's responsibilities with respect to its occurrence. This is especially true when the contract involves intellectual property, contractor maintenance periods, or "burn-in" periods.

3 A rule of thumb: if there isn't a written report, it didn't happen.
