Office of Operations
21st Century Operations Using 21st Century Technologies

7. Software Testing

7.1 Overview

This chapter addresses the testing of the TMS software that resides on and executes from central computer systems (or servers) and personal workstations within the traffic management center (TMC), as well as at remote TMCs and on user personal computer systems with communication access to the TMS.

There are two major classes of software: operating system software and application software. The following describes each of these and how they are used in a typical TMS.

The operating system software provides:

  • The basic environment for command and inter-process control, computational, communication, and data storage capabilities.
  • The services for device handlers and communication interfaces that support the application software.

The operating system may also include basic third-party software such as relational database management software (RDBMS, such as Oracle), other middleware such as Object Request Brokers (ORBs), backup and cluster utilities, and report generation utilities.

The application software provides the traffic management capabilities for the TMS including:

  • Field device management (traffic surveillance devices, which include vehicle detection devices and CCTV cameras, as well as traffic control and motorist information devices such as traffic signal controllers, dynamic message signs, and highway advisory radio towers and beacons).
  • Centralized services (for incident and congestion management, traffic control and ramp metering coordination, graphical user interface and information display screens, video routing and display control, regional map displays, and relational databases).
  • External user support (for center-to-center communications, information sharing, and device control).

Computer operating system software and some standard product application software (e.g., relational database management) are typically referred to as commercial-off-the-shelf (COTS) software products. COTS products are designed for general use and may serve a number of different and diverse types of applications (e.g., banking, engineering, transportation, etc.). Also, most ITS device vendors have developed standard product software to provide for command and control of their devices. In most cases, this COTS software has undergone rigorous testing for a wide variety of different application environments and represents a large installed base of proven software. COTS software will usually not need to be tested to the same extent as modified or new (custom) application software that is designed for your specific application and requirements. However, COTS products must be uniquely configured for installation and operation on your systems, and to support the other non-COTS (modified and custom) application software that provides the traffic management capabilities of the TMS. Therefore, COTS products must be included in the software test program.

All software proposed for use in your TMS should be subjected to testing before it is accepted and used to support operations. The extent and thoroughness of that testing should be based on the maturity of that software and the risk you are assuming in including it in your operations.

For most agencies, the bulk of software testing, at various levels of completeness, will be done at the software supplier's facility and the agency will not participate. It is suggested that the procurement specifications contain provisions that give the agency some confidence that a good suite of tests is actually performed, witnessed, and properly documented. This may require the software supplier to provide a description of their software quality control process and how the performance of the tests for the agency's project will be documented. The agency should review the supplier's documented quality control process and sample test reports, and be comfortable with the risk associated with accepting them. Where possible, the agency should plan to send personnel to the developer's facility periodically to observe progress and test the current "build" to ensure that the translation of requirements to code meets the intended operation.

The following sections discuss what software should be tested and when that testing should occur. These sections also describe software test scheduling considerations and other considerations for a software test program.

7.2 What Types of Testing Should Be Considered?

The testing for a software product can be broken down into the following general categories:

  • Design verification.
  • Functionality.
  • Performance.
  • Prototype.
  • Standards compliance (ISO, CMM, NTCIP, and others).
  • Environmental.
  • Maintainability.

Each of these will be discussed to gain a better understanding of what is meant and what is required for each relative to software testing.

The following describes the elements of a complete testing program based on the assumption that the software product being offered is a new design or custom product and hence should be subjected to all aspects of requirements verification. After this initial discussion of the most intensive testing program, this guide considers what steps can be eliminated or minimized for standard products and modified products (see Section 5.3.2).

7.2.1 Design Verification

New or custom software will require design verification testing. The agency's procurement specification should include design requirements for developing the software and verifying the design at various points in the development and implementation phases. Typically, the procurement specification will reference or invoke process standards for how the design itself is to be developed, documented, and reviewed, and implementation standards defining how that design will be implemented and tested (i.e., transformed into an operational, requirements-compliant product). While the agency's information technology (IT) staff may be both knowledgeable and experienced in software design and development, it is recommended that the agency hire a qualified and experienced ITS consultant or systems integrator to assist in the development of the software procurement specifications. When the acquiring or operating agency has extensive experience in software development and implementation on the scale required for a successful TMS project, this recommendation can be evaluated with respect to the internal staff's existing workload.

Standards help assure compatibility between systems (both hardware and software) and promote multi-vendor interoperability and ease of integration. However, if the agency's requirements include compliance with standards, then there must be test procedures to verify that compliance (see Section 7.2.6). With this in mind, the following describes where standards apply and how they are selected for design and implementation. Because of their importance to the development of a TMS, the selection of manufacturer extensions to the NTCIP standards is discussed here as well. An example DMS specification is also provided to illustrate how the NTCIP standards are applied.

7.2.1.1. Selecting Standards for Design

The software development plan (SDP) will detail the approach to software requirements analysis, design, and coding techniques for the development of the TMS software. This plan describes the software development process; establishes the design and coding techniques and standards to be used; and details the development environment and configuration management practices to be followed. Because software development is an ongoing process that continues throughout the life cycle of a software system, the SDP will need to be updated periodically to reflect changes made to the development, test, and operational environments and to applicable development procedures and standards. The procurement specification should establish who (the software developer, a system integrator, or the agency) develops the SDP. The procurement specification should also state that the agency will review and approve the SDP prior to the start of software product acquisition (i.e., COTS or ITS standard products) and the development of modified or new software. The software developer (and most system integrators) will have an in-house software development plan that includes software design standards. Instead of starting from scratch, the agency may wish to adopt a developer's or integrator's in-house plan after it has been updated and revised to reflect the specific requirements of the TMS project. The agency-approved SDP should govern all software development for the TMS.

This plan should include, at a minimum, the following software design standards:

  • Coding standards.
  • Graphical user interface standards.
  • Geographical information standards.

7.2.1.2. Selecting Standards for an Implementation

The agency should inform the developer which standards to use in an implementation via the procurement specification; however, the agency (or its consultant) and the developer must be familiar enough with the standards to ensure that the implementation is consistent with the intended purpose of the standard. For example, the Common Object Request Broker Architecture (CORBA) standard typically invoked for object-oriented code development is not intended for field device communications using the "object definition standards." Another example would be a request for a particular function for which none of the standards has a corresponding data element.

Frequently, implementation standards will also include requirements for specific COTS software packages such as operating systems and databases. When the acquiring agency already has internal support for those COTS software packages and desires to minimize its long-term support costs, it should consider requiring that the implementation be based on those existing internally supported software packages.

Another reason for requiring specific COTS software packages is the packages' conformance to other technology standards. For instance, the acquiring agency may specify an open standard such as POSIX and require that the operating system be POSIX compliant.

One word of caution: such selections must explicitly reference specific versions of the COTS packages, and the agency must recognize that such requests may be incompatible with the hardware platform, since COTS products are constantly upgraded along with the hardware. Further, mandating specific COTS products may significantly increase the development costs and establish the need for ongoing software maintenance contracts at a considerable cost to the agency. As a result, each COTS "requirement" in the specifications should trace back to some important department/agency benefit.

7.2.1.3. Selecting Manufacturer Extensions

The agency must also determine whether the features of the selected standard(s) support all of the functional features that are being required for a particular ITS device. The NTCIP standards and framework are of particular importance to the development of a TMS. They are designed to allow for innovations to keep pace with advances in ITS technology; however, these standards do not currently define standardized data elements for every technology or functional feature of every device. In fact, these standards were designed to allow for future flexibility with the addition of custom "objects" and special, non-standard features.

The developer, acting for the agency, must determine if there are special features of the subject device that are not yet standardized by the NTCIP. If such features are present, then the developer will need to determine precisely how these features will be supported without conflicting with the standardized implementations. (It should be noted that the use of manufacturer specific extensions might tie the agency to a single source of software for all similar devices in the system.) Usually, this adaptation is accomplished by simply extending the capabilities of existing features of the standard, or by defining additional data elements or features under a developer-specific or agency-specific node for these specific management information base (MIB) extensions. It is important that the agency be aware of the use of these benign extensions and request that the systems developers or integrators clearly identify these in their implementation.
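A minimal sketch (in Python) of how an agency's test procedures might distinguish standard objects from vendor-specific MIB extensions by OID prefix is shown below. The prefixes, vendor name, and example OIDs are illustrative assumptions only; an actual check would work from the delivered MIB and the agency's procurement specification.

    # Minimal sketch: classify MIB object identifiers (OIDs) as standard objects
    # or vendor-specific extensions by their prefix. The prefixes below are
    # illustrative assumptions, not authoritative NTCIP assignments.

    NTCIP_PREFIX = "1.3.6.1.4.1.1206"          # commonly cited NEMA/NTCIP enterprise node
    VENDOR_PREFIXES = {
        "1.3.6.1.4.1.99999": "ExampleVendor",  # hypothetical vendor enterprise node
    }

    def classify_oid(oid: str) -> str:
        """Return 'standard', 'vendor extension (...)', or 'unknown' for a dotted OID."""
        for prefix, vendor in VENDOR_PREFIXES.items():
            if oid.startswith(prefix):
                return f"vendor extension ({vendor})"
        if oid.startswith(NTCIP_PREFIX):
            return "standard"
        return "unknown"

    if __name__ == "__main__":
        for oid in ("1.3.6.1.4.1.1206.4.2.3.3.1", "1.3.6.1.4.1.99999.1.2.1"):
            print(oid, "->", classify_oid(oid))

A listing of this kind, generated from the delivered MIB, gives the agency a concise record of exactly which objects are benign extensions that must be carried forward into future procurements.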

Another style of extending the standard might be based on replacement of a partially incomplete feature with a complete custom feature; this would be considered an unsupportable or malignant extension, as it defeats the purpose and goals of any open standardization effort: interoperability and interchangeability. An implementation that uses benign extensions is likely to achieve a level of conformity with known exceptions (i.e., where the specific extensions are listed). However, an implementation that includes unsupportable extensions, such as replacement of the standard's features with custom features, will not achieve conformity, as this would mislead customers and negatively impact the ability to achieve interoperable and interchangeable ITS products.

In any case, if specific benign or malignant extensions have been introduced and the agency wants to have the associated functions available in future purchases of the same device type, it is imperative that these extensions are made part of the agency's specifications and documentation deliverables. This requires that the agency obtain redistribution and/or re-use rights to these MIB extensions, even if the original manufacturers, vendors, or integrators developed and implemented them. Additionally, the agency should obtain both electronic and paper copies of the entire MIB, including the manufacturer-specific extensions. Negotiating the rights for re-distribution and/or re-use, along with documenting the requirements for MIB delivery, is much easier to complete up front in the procurement process rather than after the fact. The procurement specifications should include provisions that specifically address this eventuality.

These same concerns hold true for other COTS software products. There are usually base standards (such as CORBA) that are industry recognized. However, some vendors will enhance their product with non-standard extensions. Programmers tend to like this approach as it simplifies their work, but the cost can be very high when it becomes time to upgrade or migrate to newer platforms. The system becomes wedded to a specific vendor, and if the same extensions are not available in the next version or conflict with the standards, then upgrading becomes very costly and time consuming. Where standards are invoked (Web services, CORBA, etc.), it is important that only the basic standard be used and that custom extensions be avoided.

7.2.1.4. Development Resources

There is a wide variety of resources available that relate to the NTCIP standards. The following lists some of the resource materials that have been used in the development process and early implementations, as well as the location of developed materials.

Websites

A wide range of documentation is available on the NTCIP home page on the World Wide Web, located at www.ntcip.org.

The site currently includes such items as:

  • NTCIP guide (9001).
  • NTCIP profile publications.
  • NTCIP data element definitions for a variety of devices.
  • NTCIP case studies.
  • Various white papers written during the development of the initial standards.
  • FHWA-sponsored software packages, for example, NTCIP demonstration, NTCIP Exerciser and NTCIP Field Devices Simulator.

Other web sites of interest are shown in the following table.

These sources provide copies of the various standards, and the TMDD guide describes the process of selecting and configuring the standards for use in procurement specifications. There is currently no testing program tied directly to the NTCIP and related standards. The Testing and Conformity Assessment (TCA) working group developed a testing documentation guide, NTCIP 8007, and a user guide to NTCIP testing, NTCIP 9012. These can be used by public agencies and integrators as guides for developing test procedures for the evaluation of both central software and field devices.

Table 7-1. NTCIP-Related Web Sites
  • NTCIP (www.ntcip.org) – The official web site for NTCIP and related publications and information.
  • DATEX-ASN (www.trevilon.com/library.htm) – The web site for DATEX-ASN documents and information.
  • DATEX-Net (www.datex.org) – The web site of the DATEX-Net standard currently in use in Europe.
  • IANA (www.iana.org/numbers.html) – The Internet Assigned Numbers Authority web site.
  • IEEE (http://standards.ieee.org) – Links to all of the IEEE standards efforts, including ATIS, incident management, data dictionaries, and data registries.
  • ISO (www.iso.ch) – The official ISO home page.
  • ITE (www.ite.org) – The ITE web site; go to the technical area and standards, which include the TMDD and the ATC standards.
  • ITS America (www.itsa.org) – The home page for ITS America.
  • NEMA Standards (www.nema.org/index_nema.cfm/707/) – Site for ordering the NTCIP standards as well as commonly used NEMA standards such as TS4 and TS2.
  • RFC Index (www.nexor.com/public/rfc/index/rfc.html) – A search engine for all of the Internet RFCs.
  • SNMP (www.cmu.edu) – A library of information on SNMP and related topics.
  • TCIP (www.apta.com) – The home page for Transit Communications Interface Profiles.


Sources of Public Domain Software

There are two basic prototype implementations of NTCIP software. Neither of these packages was designed to operate a real system; rather, they were designed to provide tools to the industry to test equipment submitted as being compliant with a specific protocol. Unfortunately, there is no ongoing program to maintain these packages. They are available with documentation for downloading at www.ntcip.org/library/software/. Integrators may find them useful as a reference, but they are not intended as products since their development was halted and has not kept up with the latest developments in the NTCIP standards arena.

NTCIP Exerciser Software, Build 3.3b7a

The NTCIP Exerciser is able to read in a properly formatted management information base (MIB) from a floppy disk and support the exchange of fully conformant NTCIP messages under the direction of the operator. The package supports the creation of simple macros to enable the user to perform a number of operations sequentially and to record the results. The current version supports the simulation of either a management station (funded by the FHWA) or an agent (funded by the Virginia DOT). It currently supports the STMF Application Profile (SNMP only), the Null Transport Profile, and both the PMPP-232 Subnetwork Profile and the PPP Subnetwork Profile. The most recent version of this software is available for free on the NTCIP website. It is designed for Windows NT.

Field Device Simulator (FDS), Beta 2

The FHWA also developed a DOS-based program to emulate a field device that supports the data elements contained in the global object definitions. This program supports the STMF Application Profile (SNMP-only), the Null Transport Profile and the PMPP-232 Subnetwork Profile. This software is available for free on the NTCIP website.

Application of the NTCIP Standards

Appendix D contains an example application of the NTCIP Standards.

7.2.2 Prototyping

A valuable tool in design verification for new and custom software is the development of prototype software. Prototype software allows design concepts to be verified before committing resources to developing the complete code unit that will implement that design. Prototype testing reduces the development risk, particularly for a new design, and increases the chances that the design can be successfully implemented. The procurement specification should require prototype testing for:

  • Communication protocols.
  • Device handlers (drivers).
  • User interface.

Communication protocols and device handlers typically have very detailed data handling and timing requirements. The user interface (displays, GUI screens, and GIS maps) has very demanding common "look and feel" and interactive response requirements. Prototype testing will require that test procedures be developed by the developer and approved by the agency to verify that these requirements are being met. Prototype testing also allows the agency to provide design feedback to fix undesirable aspects of the design before full implementation. It should be noted that prototype testing has the potential to uncover missing, incomplete, or inadequate requirements. Identifying and resolving these deficiencies early may necessitate a revision of the requirements specification; however, it avoids additional or more complex testing later, as well as changes to delivery schedules and project costs, and therefore mitigates their programmatic impacts.
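As an example of how such a timing requirement might be verified against a prototype, the following sketch (Python) times a single poll/response exchange through a device handler. The poll_device function and the two-second limit are hypothetical stand-ins for the prototype's actual interface and the requirement stated in the agency's specification.

    # Minimal sketch of a prototype timing check for a device handler.
    # 'poll_device' and the 2.0-second limit are illustrative assumptions.
    import time

    def poll_device(address: int) -> bytes:
        """Placeholder for the prototype device handler's poll operation."""
        time.sleep(0.1)                      # simulated communication latency
        return b"\x06"                       # simulated ACK response

    def verify_response_time(address: int, limit_s: float = 2.0) -> bool:
        start = time.monotonic()
        response = poll_device(address)
        elapsed = time.monotonic() - start
        print(f"device {address}: {elapsed:.3f} s, response={response!r}")
        return bool(response) and elapsed <= limit_s

    if __name__ == "__main__":
        assert verify_response_time(17), "response time requirement not met"

Running a check of this kind repeatedly, with the timing results recorded, gives both the developer and the agency objective evidence that the prototype meets (or misses) the specified requirement before full implementation proceeds.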

One potentially significant drawback to prototype testing is "requirements creep." That is, once the developer has revealed the design and proposed implementation (at least partially) with a prototype, the agency (or its consultant) may not like what they "see" or may say that it is not what they "expected," even though the implementation meets the current specification requirements. This can occur if there is a misunderstanding of the requirements or the requirements were poorly written (vague or ambiguous). This can also occur when people from a completely different frame of reference (programmers vs. traffic engineers) interpret the requirements. A requirement change will typically result in additional cost to the agency (even if the feature or capability is reduced or eliminated). Most agencies are well aware of this fact and will try to get the developer to accept their interpretation of the subject requirements as written (without contract changes). Many developers will accede to the agency's interpretation, make the necessary design or implementation changes, and proceed with development as long as those changes are relatively minor and don't significantly impact the developer's schedule and costs. Unfortunately, this practice leads to undocumented requirements (or requirements that are documented but not delivered and are therefore unverifiable). Therefore, it is important that the requirements as well as the test procedures be updated based on such changes. Prototyping is encouraged whenever the agency cannot "see" the planned implementation prior to deployment and is particularly important for reports and user interface interactions. One word of caution: it is important that the performance of the prototype be no better than the performance expected from the production system.

The agency must be cognizant of the fact that requirements will change during design and even during development and implementation. These changes should be managed through the configuration management process so they are fully documented, their impacts are assessed, and changes made to the appropriate requirements documents, test plans and procedures, schedules and procurement specifications or purchase orders. These will change the baseline against which the final system is judged.

Since prototyping provides significant value, the project work plan and schedule need to allow both time and budget for possible changes to be introduced into the system. Even when "off the shelf" products are used, an agency may request that minor changes be made to accommodate its specific needs. As long as the changes are documented and evaluated for cost and schedule impact, the process is manageable.

7.2.3 Environmental

Environmental testing verifies that the product operates properly (i.e., meets requirements) in the installed environment. For software, the "environment" is the target processor (platform or server) and, for non-operating-system software products, the installed operating system. This aspect of testing is usually the most extensive and complicated testing required for any product. For large, complex systems such as a TMS, there are two distinct test environments: the development environment and the operational (field or production) environment. Each of these environments and the kinds of product testing that can be accomplished in each is discussed below.

7.2.3.1. Development Environment

The development environment is usually established at the software developer's facility and will utilize host processors, operating systems, and software development tools that the developer has or acquires specifically to support the TMS project. It will also include office space for the development staff and support technicians, and an equipment room to house the processors and computer peripheral equipment (e.g., monitors and keyboards, printers, local area network devices), equipment racks and floor space for representative TMC equipment (e.g., workstations and display devices), and typical field devices (e.g., communications devices, traffic controllers, CCTV cameras). The robustness of the development environment and the extent to which it is representative of the operational environment will dictate what testing can be accomplished.

Development environments are expensive to set up, operate, and maintain. The agency's procurement specifications must clearly define what is expected, when it is expected, and what costs will be borne by the agency. For example, operating systems, relational databases, and software development tools are vendor-licensed products (either one-time or renewable on an annual basis). These product vendors constantly issue updates to fix problems and upgrades to add new features and capabilities to their products. Updates in the form of patches are free as long as the software product license is current. However, upgrades in the form of new releases are provided at extra cost, although they are usually offered at a reduced price to current licensees. Most vendors will only provide maintenance support for the last three software revisions. Developers will often have their own software licenses, but eventually the agency will have to assume these costs for the production system. The decision to accept and install a patch or purchase and install a new revision is not always straightforward, but will have definite downstream ramifications, particularly with respect to testing. Software tested in the pre-patch or pre-upgrade environment will have to be re-tested in the new development environment.

The development environment should be separate and distinct from the operational environment, particularly for a project that will be incrementally deployed. Its configuration (both hardware and software), like that of the operational environment, should be well documented, and all changes to that configuration should be managed through the configuration management process. A separate and distinct development environment accomplishes two very important goals. First, it allows the software to be tested from prototype versions through the latest build release in a controlled environment (i.e., with repeatable simulated inputs and events) without impacting ongoing operations with the production system (e.g., no system crashes that may disrupt traffic flow or affect incident response, and no loss of operational data while the system is down). Second, it prevents polluting the production database with simulated data (e.g., DMS test messages; simulated volume, occupancy, and speeds; congestion and incident events and test response plans; and event logs).

The development environment will also be used to further investigate, resolve, and test solutions for software or hardware/software problems found in the production environment. To maintain configuration control, all problems, whether found in the production system or development system, should be recorded on the system problem/change request (SPCR) forms and processed by the change control board under the configuration management process.

The development environment must be sustained even after final system acceptance, at least at some minimal level, for the life of the TMS in order to maintain the software and develop enhancements. Whether the development environment is left with the developer, a system integrator, or transferred to the agency's facilities will depend on what level of system maintenance and future enhancements the agency is comfortable with and can staff with its own personnel and/or a maintenance contractor. Note that transferring the development environment to the agency will also necessitate the purchase of the appropriate software licenses as the developer is likely to need to retain the original licenses to ensure the ability to support their maintenance obligations.

It is recommended that even if the development environment is maintained at the developer's facility, the agency should consider the purchase of a test environment at the TMC. This is invaluable as a training aid and for the evaluation of future updates by the agency. Such a system must include simulators and/or connections to the production system to support data collection and analysis that matches the production system. The more robust the test system, the more likely the agency will be able to minimize disruptions when new software is installed on the production (operational) system. However, as noted above, the agency will have to purchase software licenses for all COTS products. It is best if this requirement is included as part of the procurement specifications so that license costs are addressed up front and the hardware configuration more closely matches that of the operational environment. Note that as upgrades are required for the operational hardware, the test environment must also be updated so that it always reflects the current production system.

7.2.3.2. Operational Environment

The operational environment is the actual TMC and field environment that the delivered software will be installed on and operated in for the life of the TMS project. Software acceptance testing will be performed in this environment under actual operational conditions using the system's communication infrastructure and installed field devices. This acceptance testing includes:

  • Software build releases (of the latest TMS software version that includes new features and functionality, SPCR fixes, and operating system patches or new versions).
  • Hardware/software integration.
  • Final system acceptance testing.

This may be the first opportunity to fully exercise and verify features and functionality not possible in the development environment. Expect some problems to be uncovered in testing in the operational environment; however, they should not be showstoppers if prior testing in the development environment was detailed and thorough. The last two previously tested and accepted build releases of the software and their associated databases should be retained in case the new version experiences a problem that cannot be tolerated. This allows a previously accepted version to be re-installed and operated until the new version can be fixed. Note that an older version may not have a capability, feature, or functionality that is desirable and working in the new version, so it is a judgment call whether to live with (or work around) the operational problems in the new version or revert to a previous version. The new version can be conditionally accepted by the agency, with the understanding that final acceptance of this version will be withheld until the problem is found, fixed, and re-tested. Again, it is paramount that strict configuration management processes be followed in this environment. Undocumented and unapproved changes to the configuration will compromise testing that has been accomplished as well as testing that has been planned.

Each time a new build release (or a previous release) of the software is installed in the operational environment, major elements of the system's functionality must be shut down or suspended. The software installation takes time (sometimes hours), and some level of regression testing (see Section 4.4.8) should be performed before trying to restart the system to support operations. Acceptance testing of SPCRs, new features and functionality, etc. can usually be delayed until a more convenient time. The immediate need is to complete the regression testing to verify that all of the original functionality is uncompromised. The impacts to ongoing traffic management operations will depend on the duration of the operations interruption and what operational backup or fail-over capabilities have been incorporated into the TMS design. Suffice it to say, this is typically a major disruption to ongoing activities and should be performed when the impacts can be minimized (i.e., at night or on a weekend). It is also likely that the agency will want to "collect" a significant number of "changes" before updating the production system.

7.2.4 Functionality

Functionality testing verifies that the software performs all of the specified operations listed in the requirements. At the software component and integrated software chain levels, this testing can usually be accomplished using test software to simulate inputs and verify outputs; that is, to verify data handling and computational requirements such as the proper storage and retrieval of DMS messages in the database and verification of incident detection algorithms. At higher levels of testing, such as hardware/software integration and system testing, some representation of the system's hardware, communications infrastructure, and field devices will be required to verify operational requirements.
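For example, the DMS message storage and retrieval requirement mentioned above could be exercised at the component level with a simple round-trip test such as the sketch below. The MessageStore class and its in-memory SQLite backing are illustrative assumptions standing in for the actual TMS database interface.

    # Illustrative component-level test: store a DMS message and verify that the
    # retrieved text matches exactly. 'MessageStore' stands in for the actual
    # TMS database interface.
    import sqlite3

    class MessageStore:
        def __init__(self):
            self.db = sqlite3.connect(":memory:")
            self.db.execute("CREATE TABLE dms_messages (id INTEGER PRIMARY KEY, text TEXT)")

        def save(self, msg_id: int, text: str) -> None:
            self.db.execute("INSERT INTO dms_messages VALUES (?, ?)", (msg_id, text))

        def load(self, msg_id: int) -> str:
            row = self.db.execute(
                "SELECT text FROM dms_messages WHERE id = ?", (msg_id,)).fetchone()
            return row[0] if row else ""

    def test_dms_message_round_trip():
        store = MessageStore()
        original = "CRASH AHEAD / LEFT LANE CLOSED / USE CAUTION"
        store.save(101, original)
        assert store.load(101) == original   # retrieved message must match exactly

    if __name__ == "__main__":
        test_dms_message_round_trip()
        print("DMS message round-trip test passed")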

Software simulators of hardware interfaces and field devices are sometimes used if they exist or can be cobbled together without their own extensive development program. These are usually of limited value except where there are long lead times for actual system hardware and field devices. In that instance, software simulators will allow software functional testing to proceed (to raise confidence levels that the design works as intended or to uncover potential problems), but much if not all of this testing will have to be repeated with the actual hardware and field devices.

A note of caution here: software simulators will themselves need to be verified (against design and implementation requirements reviewed and approved by someone, preferably the agency). If any test results or analyses derived from their use are suspect, they could lead to solving problems that don't really exist and missing those that do. It is much better to use actual hardware and field devices when testing software functionality. However, controlling the test conditions, particularly for field devices, can be daunting unless testing is performed in the development (or test) environment.

When dealing with functionality testing involving communications subsystems, it can be difficult and expensive to build and maintain simulators. One means of reducing these costs is to specify that device vendors provide an additional special firmware package that responds to central polls for all addresses on the channel except the address configured for the device. This allows the system to communicate with all channel addresses using only two devices. The central system database can be configured with one database configuration for all devices on the channel. This configuration can be used to test communication protocols, communication performance, and system response to major simultaneous events. Care must be taken with this type of configuration to ensure that the field device has the processing resources to support the communications traffic and that the special firmware stays within the development/test environment. Accidentally deploying this modified firmware in the production environment would result in major operational issues for the system.
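The behavior of such a special firmware package can be illustrated with the minimal sketch below: one responder answers polls for every drop address on the channel except its own configured address, leaving that address for the real, unmodified device. The poll/response framing and address range are illustrative assumptions, not an NTCIP-conformant implementation.

    # Minimal sketch: simulate a multi-drop channel responder that answers polls
    # for every address except its own. Framing and addresses are illustrative.
    from typing import Optional

    OWN_ADDRESS = 5                 # address left for the real (unmodified) device
    CHANNEL_ADDRESSES = range(1, 33)

    def respond_to_poll(address: int) -> Optional[bytes]:
        """Return a canned status response for any address except our own."""
        if address == OWN_ADDRESS:
            return None             # the real device answers this address
        return bytes([address]) + b"STATUS_OK"

    if __name__ == "__main__":
        answered = [a for a in CHANNEL_ADDRESSES if respond_to_poll(a) is not None]
        print(f"simulator answers {len(answered)} of {len(CHANNEL_ADDRESSES)} addresses")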

As with hardware functionality testing, software functionality testing will also be extensive, but it is virtually impossible to completely verify the required functionality under all possible combinations of expected operational conditions. At the system level, the test plans and procedures should address a few of the most likely and potentially demanding operational circumstances to verify both operational and performance functionality. Examples include operations during peak traffic conditions with multiple simultaneous incidents, interoperability between multiple TMCs (if applicable), and operations following a failover recovery at a backup TMC and the subsequent transfer of control back to the primary TMC. This will also be a good time to assess the agency's operational procedures, staffing levels, and operator training. The number and complexity of test scenarios to use in the verification of functional requirements, and the time and budget devoted to this effort, must be weighed against the risk of having a problem go undetected because some combination of possible operational conditions wasn't included in the testing scenarios.

When attempting to determine what testing is important, one might consider some of the following:

  1. Can multiple operators view the same incident tracking form?
  2. Are the operators prevented from modifying it while another operator handling the incident enters updates? In other words, does the system handle operator contention for the same data or resource?
  3. When operating the pan, tilt, and zoom controls for a remote CCTV camera, does the video image from that camera indicate the camera is responding without a significant time lag to the commanded operations?
  4. How do the network loading and the number of concurrent users affect the operation of the system?
  5. When workstations "crash" or when there are network disruptions, do the servers recover when the workstation is restarted or the network is restored? Is it necessary to restart the whole system?
  6. If the system experiences a "problem," are the operators alerted to the situation so they can take corrective action?

These scenarios represent typical operational issues that the system design should accommodate. Requirements should be derived from these scenarios during the design process so that they can be properly incorporated into the system design and test planning.
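Several of these scenarios can be turned directly into automated tests. For example, scenario 2 above (operator contention for the same incident record) could be exercised along the lines of the sketch below; the IncidentForm class and its locking behavior are hypothetical stand-ins for the actual TMS data access layer.

    # Illustrative contention test for scenario 2: a second operator must be
    # refused edit access while the first operator holds the incident record.
    # 'IncidentForm' and its locking behavior are assumptions for illustration.
    import threading

    class IncidentForm:
        def __init__(self):
            self._lock = threading.Lock()
            self.owner = None

        def begin_edit(self, operator: str) -> bool:
            if self._lock.acquire(blocking=False):
                self.owner = operator
                return True
            return False                      # someone else is already editing

        def end_edit(self) -> None:
            self.owner = None
            self._lock.release()

    def test_operator_contention():
        form = IncidentForm()
        assert form.begin_edit("operator_A")          # first operator gets the record
        assert not form.begin_edit("operator_B")      # second operator is refused
        form.end_edit()
        assert form.begin_edit("operator_B")          # available again after release

    if __name__ == "__main__":
        test_operator_contention()
        print("contention test passed")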

7.2.5 Performance

In addition to functional testing, verification of software performance requirements is also required. Performance requirements specify things such as the interactive response time between an operator command input and the corresponding display update, the maximum time interval between regional map display updates, and the minimum number of traffic controllers that the system must be able to effectively control and coordinate. Performance requirements are quantitative (i.e., measurable) and apply under specific environmental conditions. For software performance requirements, the applicable environmental conditions typically relate to the operational environment as opposed to the development environment and apply to the quality and timeliness of the service provided to the end users. Verifying performance requirements will most likely require making accurate measurements and performing some quantitative analysis.

The following are some examples of desirable performance characteristics/requirements that need to be addressed (and verified) to maintain and assure responsiveness to operator actions and provide for near real-time command and data exchanges between the traffic management center and the various system hardware and software components.

Graphical User Interface (GUI) Control Screens — the primary operator interface is the GUI control screen. It is imperative that the operators receive timely if not immediate feedback to mouse button and keyboard entries. Where there is an expected delay of more than two or three seconds between the operator entry and the command response, some mechanism needs to be implemented to let the operator know that the command has been received and is being processed. A simple response showing the button depressed/released, a shading or color change, or even the hourglass wait symbol is sufficient. Without this mechanism, the operator may continue to enter additional unnecessary and probably undesirable inputs.
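Where the feedback requirement is stated quantitatively, it can be verified with a simple timing check. The sketch below is a minimal example assuming a three-second limit and hypothetical test-harness hooks (send_command, wait_for_feedback); it is not tied to any particular GUI toolkit.

    # Minimal sketch: measure the time between an operator command and the first
    # visible feedback, against an assumed 3-second limit. 'send_command' and
    # 'wait_for_feedback' are hypothetical hooks into a GUI test harness.
    import time

    FEEDBACK_LIMIT_S = 3.0

    def send_command(name: str) -> None:
        pass                                  # test-harness stub

    def wait_for_feedback(name: str) -> None:
        time.sleep(0.4)                       # stub: simulated feedback delay

    def verify_feedback_latency(command: str) -> float:
        start = time.monotonic()
        send_command(command)
        wait_for_feedback(command)
        latency = time.monotonic() - start
        assert latency <= FEEDBACK_LIMIT_S, f"{command}: {latency:.2f} s exceeds limit"
        return latency

    if __name__ == "__main__":
        print(f"activate_dms: {verify_feedback_latency('activate_dms'):.2f} s")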

Closed Circuit Television (CCTV) Camera Control — interactive control of the pan/tilt/zoom, iris and focus features and today's Internet Protocol (IP) streaming video will place the highest demands on the communications infrastructure. The video image feedback during camera control operations should be as close to real-time as possible to avoid command overshoot.

Detector Data Acquisition — vehicle detection stations are typically capable of storing data locally and transmitting the data from the field devices in response to a polling command. In order for this data to be effectively utilized by the system's incident and congestion detection algorithms, a polling cycle of approximately 20 to 30 seconds is necessary, although some systems with lesser bandwidth to the field devices may fall back to once per minute.
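A polling cycle requirement of this kind can be checked directly by timing a full scan of the detector stations. The sketch below assumes a 20-second target, 50 stations, and a stub poll_station function standing in for the actual field communications driver.

    # Minimal sketch: poll every detector station once per cycle and confirm the
    # cycle completes within the assumed 20-second target.
    import time

    CYCLE_TARGET_S = 20.0
    STATIONS = range(1, 51)

    def poll_station(station_id: int) -> dict:
        time.sleep(0.01)                      # stub: simulated channel time per station
        return {"station": station_id, "volume": 0, "occupancy": 0.0, "speed": 0.0}

    def run_one_cycle() -> float:
        start = time.monotonic()
        data = [poll_station(s) for s in STATIONS]
        elapsed = time.monotonic() - start
        print(f"polled {len(data)} stations in {elapsed:.2f} s")
        return elapsed

    if __name__ == "__main__":
        assert run_one_cycle() <= CYCLE_TARGET_S, "polling cycle exceeds target"

In an actual performance test, the stub would be replaced by the real polling driver and the measurement repeated over many cycles, with the worst-case cycle time compared against the specified requirement.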

Automated Vehicle Identification and Location Systems — data from these system sensors is time sensitive but is typically stored and time tagged by the field device. The system need only poll these devices as needed to effectively utilize their data before the local storage buffers overflow.

Traffic Signals, Lane Control Signs, Dynamic Message Signs, Highway Advisory Radio, etc. - these system devices are commanded to change timing patterns, messages, etc. as necessary, but not at a once-per-second command rate. Timing plans and messages are downloaded at nominal data rates typically requiring many seconds to complete a download. They are cyclically polled to check control mode and current status. The specifications should state the requirements, and the system design, primarily the communications infrastructure, should ensure that these requirements can be met through the distribution of devices on various communications media.

7.2.6 Standards Compliance

Where the procurement specifications require that the software comply with a specific standard, the test plan and test procedures must include steps to confirm that compliance. Software standards typically refer to processes that must be followed, such as coding for inter-process communications, communication protocols, and documentation. Verification that the process standards are met can be accomplished by inspection (i.e., at design reviews and code walkthroughs), by analysis of data exchanges between software components and between software components and hardware devices, and by demonstration; for example, a requirement that the GUI screens all have the same attributes (e.g., color, shape, style) with respect to pull down lists, pop-up windows, etc. Other process standards refer to the methodology for the software development process itself and relate to quality assurance. Some of these are listed below. Verification of compliance with these standards is accomplished by inspection.

The following are software development, test, and quality assurance standards that should be considered for incorporation (by reference) in software procurement specifications. Requiring certification or compliance with these standards does not ensure a quality product, but does serve notice that the agency expects the required certification and/or rating level to be maintained by the development organization and that required documentation, test and quality assurance standards be met. Where the integrator or developer does not hold the "certification," they should be required to provide documentation of their software development processes, testing processes, and configuration management procedures. What is important is that the developers have such procedures and follow their internal procedures rather than any specific certification.

This material is included here because it falls to the agency to conduct "tests" or inspections to verify that these requirements are being met.

ISO 9001:2000

The International Organization for Standardization (ISO) 9001:2000 quality standard addresses quality systems that are assessed by outside auditors. It applies to software development organizations (as well as many other kinds of production and manufacturing organizations) and covers documentation, design, development, production, testing, installation, servicing, and other processes. A third-party auditor assesses an organization and awards an ISO 9001:2000 certification (good for three years, after which a complete reassessment is required) indicating that the organization follows documented processes.

Note that this is an expensive process, and it is not sufficient that the "firm" hold this certification; it is important that the specific group developing the TMS software be certified or at least have established the appropriate quality control procedures.

CMMI

The Capability Maturity Model Integration (CMMI), developed by the Software Engineering Institute at Carnegie Mellon University, is a process improvement model that determines the effectiveness of delivering quality software. The model has five levels of process maturity, defined as follows:

Level 1 – Characterized by chaos, periodic panics, and heroic efforts by individuals to successfully complete projects. Few if any processes are in place; successes may not be repeatable.

Level 2 – Software project tracking, requirements management, realistic planning, and configuration management processes are in place; successful practices can be repeated.

Level 3 – Standard software development and maintenance processes are integrated throughout an organization. A Software Engineering Process Group is in place to oversee software processes, and training programs are used to ensure understanding and compliance.

Level 4 – Metrics are used to track productivity, processes, and products. Project performance is predictable, and quality is consistently high.

Level 5 – The focus is on continuous process improvement. The impact of new processes and technologies can be predicted and effectively implemented when required.

Organizations can receive CMMI ratings (equivalent to one of the five levels) by undergoing assessments by qualified auditors. (A minimum CMMI rating of Level 2 is recommended if compliance with this standard is required.) Again, this is an expensive process, and what is important is that the Quality Assurance procedures are in place and are followed by the specific software development team that will be working on the TMS software.

IEEE/ANSI Standards

The Institute of Electrical and Electronics Engineers (IEEE) in association with the American National Standards Institute (ANSI) creates software related standards such as the IEEE Standard for Software Test Documentation (IEEE/ANSI Standard 829), the IEEE Standard of Software Unit Testing (IEEE/ANSI Standard 1008), the IEEE Standard for Quality Assurance Plans (IEEE/ANSI Standard 730), and others.

7.2.7 Software Maintenance and Maintainability

Software maintenance involves implementing changes to a controlled software baseline (release version) for the purposes of correcting errors (i.e., bug fixes), adapting to the system's changing environment (e.g., external interfaces), and implementing enhancements (i.e., adding new features and revising or deleting old ones). Once an operational version of the software is placed under configuration control, all changes, whether corrections, deletions, or enhancements, should first be recorded on an SPCR form and submitted to the configuration control board (CCB) for approval.

There are three categories of software to be maintained: commercial-off-the-shelf (COTS) products (e.g., operating systems, databases, communications middleware, development tools, etc.), ITS standard product software, and your system's unique software. Each requires a different maintenance and acceptance test approach.

COTS software is typically maintained by the manufacturer or developer under a written maintenance agreement or product use license and is warranted to the original purchaser. During the term of the agreement and any renewal periods, the manufacturer will advise of known and reported problems and the necessary corrective actions, which may include patches, partial updates, or new releases. The COTS user (i.e., the agency) is responsible for implementing the corrective actions unless they are specifically covered by the maintenance agreement. To receive manufacturer support and reduced prices for upgrades and new releases, it is very important to execute software maintenance agreements prior to expiration of the initial warranty period and to renew extensions of the agreements before they lapse.

Maintenance of operational software that is acquired from other TMS or ITS libraries and TMS consortiums is the responsibility of the consortium members and specifically the developer of the components and/or modifications and enhancements to them. Software updates are typically made available upon request by consortium members, who must then deal with any compatibility and release level issues, and the integration and installation of the updates on their respective systems and environments. Because the acquiring agency will be responsible for integration, installation, and maintenance of this software, all available documentation, including requirement specifications, design, code listings, installation and test procedures, test results, and user's guides should be requested and reviewed before attempting to include software from these sources in your system. Missing, incomplete, or inadequate documentation will have to be generated and/or brought up to your system's standards in order for that software to be acceptance tested, brought under configuration control, and maintained. There may be restrictions on the use or further distribution of this software or licensing agreements for its use. These potential issues should also be carefully considered with respect to how they may affect your intended usage.

System unique software is all of the operational software developed or explicitly modified for use in your system. By agreement with ITS libraries and TMS consortiums, system unique software is usually made available to ITS libraries and for use by TMS consortium members at their own risk.

Typically the system software development team (comprising agency and software developer or integrator personnel) performs software maintenance for the system unique software at the direction of the configuration control board, and in accordance with the configuration management plan and the system maintenance plan.

In order for maintenance activities to be carried out effectively and efficiently, it is important that the procurement specification include some software maintainability requirements. Software maintainability requirements are typically included in the software development plan. They are verified by reviewing compliance with applicable design and implementation standards, documentation requirements, and the procedures in place for acceptance testing and configuration management. The ability to maintain the software requires extensive knowledge of the software architecture as well as the build environment. Build processes transform the source code into distributable files and package them for distribution and installation on the system's various computing platforms.

Specific maintainability requirements that should be included in individual COTS software product specifications include high-quality documentation (user guides, operations and maintenance manuals), the ability to readily configure the product for the intended operational platform and TMS application, and technical support for product installation, initial operations, and problem analysis, including help desk telephone numbers. For example, for geographical information system (GIS) map display software, the procurement specification should require the vendor to assist the agency and the software developer or system integrator in implementing the base map. This should include defining map zoom characteristics and map display layers showing various static features, dynamic or animated features, and refreshing and distributing the GIS map display throughout the TMS operational environment.

Because the base map and regional features change over time, the procurement specification should include provisions for periodically updating the base map. COTS vendors will typically cap the technical support by limiting the number of support hours (both off-site and on-site, including labor, travel, and per diem costs for their technicians) and/or provide a quoted rate for that support. However, if technical support is covered in the procurement specification, some acceptance criteria for that support should be specified so that the product can be accepted or rejected as unsuitable for the intended purpose.

For non-COTS software, i.e., modified or new system unique software, the procurement specification should include provisions for help desk and on-call maintenance. Here the criticality of the product to ongoing operations will dictate the specific provisions necessary. A shorter response time will be necessary for the basic TMS operations and control features and on-call (emergency) support will be needed for major problems that cannot be resolved over the telephone using the help desk support. Discussions with other users of this or similar products from this software developer or system integrator can aid in establishing the initial provisions for help desk and on-call support to be included in the procurement specification, and these provisions should be adjusted in maintenance contract renewals based on actual operational experience. Again, some acceptance criteria for that support should be specified and in this case should include measurable qualifiers such as availability and adequacy of the expertise for help desk calls (24/7, initial response and call back) and specific response times (e.g., 2 hours for 8-5 non-holiday weekdays, 4 hours for non-holiday weekend days and weekday nights, and 8 hours for holidays) for on-call software support personnel to show up on-site ready to diagnose and resolve problems.

Note that what is important from this section is that the procuring agency needs to understand (and include) the specification requirements and conduct reviews and evaluations of the vendor's conformance to these requirements.

7.3 When Should Testing Occur?

In the previous section a discussion of each of the general categories of software testing was presented. In this section the chronology of software testing is discussed. Depending on the maturity of the product, not all of the test phases described below will be required. Each of the following discussions will provide some guidance on the testing required based on the maturity of the product.

7.3.1 Acceptance of Previous Tests

There will be few if any opportunities to consider acceptance of previous software test results that would be directly applicable to your TMS project. Even COTS software will have to be subjected to unit testing prior to acceptance for integration with other system-unique software. For a stand-alone, standard product application installed by the vendor, such as a DMS subsystem (running on its own platform with direct communication interfaces to the field devices that will not be integrated with other system software), the procurement specification should require a complete suite of both hardware and software functional and operational tests at the subsystem level. This testing should be conducted on-site following installation to verify all specification requirements. In this case, the vendor would develop the subsystem test plan and all subsystem test procedures, and the agency would approve them prior to conducting the acceptance testing. Since DMS testing for your TMS project would only be performed at the DMS subsystem level, the agency is effectively accepting the lower level software testing, i.e., unit, software build integration, and hardware/software integration testing performed by the vendor prior to shipping to the installation site.

For a mature stand-alone standard product that does not require custom software to meet the agency's specification requirements, agency acceptance testing at the subsystem level should be acceptable. If the agency plans to maintain the software in this subsystem, it should require full software documentation, including requirements, design, source code, implementation details, and test procedures in the procurement specification. Vendors will not want to provide proprietary documentation or source code for their standard products, so some provision for access to these materials may be necessary to develop bug fixes and enhancements and should be included in the procurement specification when applicable. One possible solution is to escrow a copy with an independent third party. Previous software test results for ITS products acquired from an ITS library or other TMS projects should not be accepted (even if those products can be used "as is" without modification). These software products should be subjected to the same testing required for COTS products.

7.3.2 Unit Testing

This is the lowest level of testing for computer software components delivered for software build integration testing. For modified or new components, the software developer conducts stand-alone software unit tests following design walk-throughs and code inspections. At this level, the software design is verified to be consistent with the software detailed design document. Unit testing is typically documented in software development folders. Functional testing may be limited due to the fidelity of the test environment (usually constrained by what is available in the development environment) and the ability to provide the required inputs and respond to outputs. Test software is often utilized to simulate these, particularly where it is necessary to verify specific data handling or interface requirements and algorithms. Receiving inspections and functional checkout are performed for COTS software to assure that these components are operational and in compliance with their respective specifications.
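To illustrate the character of unit testing, the following is a minimal sketch (in Python) of a stand-alone unit test for a hypothetical DMS message-formatting component. The component name, line length, and pass criteria are assumptions for illustration only and do not represent any particular vendor's design.

# Minimal sketch of a stand-alone unit test, assuming a hypothetical
# format_dms_message() component; the real component names and rules
# would come from the detailed design document.
import unittest


def format_dms_message(text: str, max_chars_per_line: int = 18) -> list[str]:
    """Hypothetical unit under test: split a message into sign lines."""
    words, lines, current = text.split(), [], ""
    for word in words:
        candidate = (current + " " + word).strip()
        if len(candidate) <= max_chars_per_line:
            current = candidate
        else:
            lines.append(current)
            current = word
    if current:
        lines.append(current)
    return lines


class FormatDmsMessageTest(unittest.TestCase):
    def test_short_message_fits_on_one_line(self):
        self.assertEqual(format_dms_message("CRASH AHEAD"), ["CRASH AHEAD"])

    def test_long_message_wraps_without_truncating_words(self):
        lines = format_dms_message("CRASH AHEAD USE ALTERNATE ROUTE")
        self.assertTrue(all(len(line) <= 18 for line in lines))
        self.assertEqual(" ".join(lines), "CRASH AHEAD USE ALTERNATE ROUTE")


if __name__ == "__main__":
    unittest.main()

Tests of this kind, together with the design walk-through and code inspection records, would typically be filed in the software development folder for the component.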

7.3.3 Software Build Integration Testing

Software build integration testing is performed on the software components that pass unit testing and are suitable for being combined and integrated into the deliverable computer software configuration items. Additional functional testing is usually possible at this level, especially for inter-process communication and response requirements. A software build typically consists of multiple items and is ideally tested in the development environment as opposed to the operational environment. This is not always possible due to the expense of duplicate hardware platforms and communications infrastructure. A software build that has passed this level of testing is called a build release. There may be multiple build releases that compose a new software version.
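As an illustration of how build integration testing differs from unit testing, the sketch below (again using hypothetical component names) exercises two components of the same build through the interface between them rather than in isolation. A real build integration test would use the developer's actual inter-process mechanisms and response requirements.

# Illustrative build-integration check: two hypothetical components from the
# same build (an incident detector and a DMS message generator) are exercised
# together through their shared event queue rather than in isolation.
import queue
import unittest


def incident_detector(detector_speeds: dict[str, float], events: queue.Queue) -> None:
    """Hypothetical component A: publish an event for any slow detector."""
    for station, speed_mph in detector_speeds.items():
        if speed_mph < 30.0:
            events.put({"station": station, "speed": speed_mph})


def message_generator(events: queue.Queue) -> list[str]:
    """Hypothetical component B: turn queued events into sign messages."""
    messages = []
    while not events.empty():
        event = events.get()
        messages.append(f"CONGESTION AT {event['station']} - REDUCE SPEED")
    return messages


class BuildIntegrationTest(unittest.TestCase):
    def test_event_flows_from_detector_to_message_generator(self):
        events: queue.Queue = queue.Queue()
        incident_detector({"STA-101": 22.5, "STA-102": 61.0}, events)
        messages = message_generator(events)
        self.assertEqual(messages, ["CONGESTION AT STA-101 - REDUCE SPEED"])


if __name__ == "__main__":
    unittest.main()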

7.3.4 Hardware / Software Integration Testing

Hardware/software integration testing is performed on hardware and software configuration items that have passed hardware integration tests and software build integration tests, respectively, and that have subsequently been integrated into functional chains and subsystems. Hardware and software integration testing is performed to exercise and test the hardware and software interfaces and verify the operational functionality in accordance with the requirements contained in the specifications. Integration testing is performed according to the integration test procedures developed for a specific software (build or version) release and hardware configuration. Testing is typically executed on the operational (production) system unless the development environment is sufficiently robust to support the required interface testing.
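The following sketch suggests the flavor of a hardware/software interface check: a status poll is sent to a device controller and the reply and response time are verified against an assumed requirement. The device address, message format, and 2-second limit are placeholders, and a local stub stands in for the field device so the sketch is self-contained; on site, the real device and its actual protocol would be used.

# Sketch of a hardware/software interface check under assumed names: poll a
# device controller over TCP and verify it answers a status request within a
# required response time. A local stub stands in for the field device.
import socket
import threading
import time

DEVICE_ADDRESS = ("127.0.0.1", 9750)   # assumption: stand-in for a field device
RESPONSE_TIME_LIMIT_S = 2.0            # assumption: from the procurement spec


def stub_device(server: socket.socket) -> None:
    """Minimal stand-in device: answer one status poll, then exit."""
    conn, _ = server.accept()
    with conn:
        if conn.recv(64) == b"STATUS?":
            conn.sendall(b"STATUS OK")


def poll_device(address, timeout: float) -> tuple[bytes, float]:
    """Send a status poll and return (reply, elapsed seconds)."""
    start = time.monotonic()
    with socket.create_connection(address, timeout=timeout) as sock:
        sock.sendall(b"STATUS?")
        reply = sock.recv(64)
    return reply, time.monotonic() - start


if __name__ == "__main__":
    server = socket.socket()
    server.bind(DEVICE_ADDRESS)
    server.listen(1)
    threading.Thread(target=stub_device, args=(server,), daemon=True).start()

    reply, elapsed = poll_device(DEVICE_ADDRESS, RESPONSE_TIME_LIMIT_S)
    assert reply == b"STATUS OK", f"unexpected reply: {reply!r}"
    assert elapsed <= RESPONSE_TIME_LIMIT_S, f"response too slow: {elapsed:.2f}s"
    print(f"PASS: status reply received in {elapsed:.3f}s")
    server.close()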

7.4 Software Test Phases

In general, the software test program can be broken into three phases as described below.

  1. Design Reviews - There are two major design reviews: (1) the preliminary design review conducted after completion and submittal of the high-level design documents and (2) the detailed design (or critical) review conducted after submission of the detailed design documents.
  2. Development Testing - For software, this includes prototype testing, unit testing, and software build integration testing. This testing is normally conducted at the software developer's facility.
  3. Site Testing - This includes hardware/software integration testing, subsystem testing, and system testing. Some integration testing can be conducted in a development environment that has been augmented to include representative system hardware elements (an integration facility) but must be completed at the final installation site (TMC) with communications connectivity to the field devices.

The following sections will further detail these phases and what to expect in each.

7.4.1 Design Reviews

For new or custom software, the procurement specification should require the software developer to provide both high-level and detailed software design documents. Following the submission of the high-level software design document, a preliminary design review is held to allow the agency to approve the design approach, basic architectural concepts, interfaces, and allocation of specification requirements to the configuration items that will become the contract deliverables. Agency approval of the high-level design document is required to proceed with the detailed software design.

The detailed design review follows the submission of the detailed design document, which completes the software design by detailing it to the computer software component level and defining the behavior and interfaces of all computer software components within their respective configuration items. Each computer software component defined is traceable back to the requirements allocated to a configuration item from the software requirement or procurement specifications. Approval of the detailed design document by the agency provides a go-ahead to the developer for coding.

Design reviews can be held at the agency's facility or the developer's. In either case, there will be travel and labor expenses associated with the reviews. A system integrator, if involved, should be required to review all design documents submitted, provide written comments to the agency prior to the review, attend both design reviews, and participate in the design approval process. The agency should integrate its written comments with those of the system integrator (if applicable) and formally provide them to the software developer at least 2 weeks prior to the preliminary design review and 1 month prior to the detailed design review. The software developer should be required to prepare a written response to each of the agency's formally submitted comments at least 3 business days before each review and to respond to comments recorded at the review within 2 weeks of the review. It is recommended that the detailed design review be held at the agency's facility to allow greater participation by agency technical and managerial staff. This will be the last chance to catch design oversights and shortcomings prior to committing to design implementation.

7.4.2 Development Testing

As defined above, development testing includes prototype testing, unit testing, and software build integration testing for single or multiple configuration items. This testing is normally done in the development environment established within the software developer's facilities. The agency or its system integration contractor will want to witness some of this testing, but probably not all of it. The agency may wish to review some of the software development folders (which contain development test procedures and test results) in lieu of directly participating in the development testing. The software developer should be required to provide test reports for completed tests and periodic test status reports detailing what testing has been completed and the status of testing to be completed and proposed schedules. Successful completion of development testing allows those components to be delivered for site testing.

It is important that the agency or its consultant review these test procedures to ensure that they adequately represent the intended system operation and are robust enough to actually stress the software, exercising both normal and corrupted data as well as both expected and erroneous operator actions.
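For example, a test procedure that exercises corrupted input might look like the following sketch, in which a hypothetical detector-record parser is expected to reject malformed records cleanly rather than crash. The record format, field names, and range limits are assumptions for illustration.

# Hedged sketch of "corrupted input" checks of the kind the agency should look
# for in development test procedures; parse_detector_record() is a hypothetical
# stand-in for whatever input-handling component the developer delivers.
import unittest


def parse_detector_record(raw: str) -> dict:
    """Hypothetical component: parse 'station,volume,occupancy' text records."""
    parts = raw.strip().split(",")
    if len(parts) != 3:
        raise ValueError("record must have exactly 3 fields")
    station, volume, occupancy = parts
    volume_i, occupancy_f = int(volume), float(occupancy)
    if volume_i < 0 or not 0.0 <= occupancy_f <= 100.0:
        raise ValueError("field out of range")
    return {"station": station, "volume": volume_i, "occupancy": occupancy_f}


class CorruptedInputTest(unittest.TestCase):
    def test_valid_record_is_accepted(self):
        record = parse_detector_record("STA-101,42,12.5")
        self.assertEqual(record["volume"], 42)

    def test_corrupted_records_are_rejected_not_crashed(self):
        for bad in ["", "STA-101,42", "STA-101,-5,12.5", "STA-101,42,250", "STA-101,forty,12.5"]:
            with self.assertRaises(ValueError, msg=bad):
                parse_detector_record(bad)


if __name__ == "__main__":
    unittest.main()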

7.4.3 Site Testing

Site testing is testing that is performed at the final installation or operational site and includes hardware/software integration testing, subsystem testing, and system testing. For software, this typically involves installing a software build release on the operational platforms at the TMC(s), on servers at field locations such as communication hubs, and in field devices such as traffic controllers (as firmware embedded on computer processor or memory chips), and conducting testing to exercise and test the hardware/software interfaces and verify the operational functionality in accordance with the requirements.

System acceptance is typically accomplished in stages - hardware/software integration acceptance tests, subsystem acceptance tests, and finally system acceptance tests. Following system acceptance, regression testing is performed for each new software release or the addition of new hardware components to assure that prior system performance and functionality have not been adversely affected by the new or modified code.
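A regression check can be as simple as re-running the accepted system-level checks and comparing the results against a baseline recorded at acceptance, as in the following sketch. The check names, baseline values, and run_system_checks() hook are hypothetical stand-ins for the agency's actual regression test procedures.

# Illustrative regression check, under assumed names: results captured at system
# acceptance (the baseline) are re-verified after a new software release, so any
# change in previously accepted behavior is flagged.

BASELINE = {  # assumption: recorded when the prior release passed acceptance
    "dms_poll_success_rate": 1.00,
    "detector_report_interval_s": 20,
    "map_refresh_time_s": 2.0,
}


def run_system_checks() -> dict:
    """Hypothetical hook that re-executes the accepted system-level checks."""
    return {
        "dms_poll_success_rate": 1.00,
        "detector_report_interval_s": 20,
        "map_refresh_time_s": 2.4,   # a regression introduced by the new release
    }


def regression_report(baseline: dict, current: dict) -> list[str]:
    """List every previously accepted result that the new release changed."""
    return [
        f"{name}: baseline {baseline[name]} -> current {current[name]}"
        for name in baseline
        if current.get(name) != baseline[name]
    ]


if __name__ == "__main__":
    failures = regression_report(BASELINE, run_system_checks())
    print("PASS" if not failures else "REGRESSIONS:\n" + "\n".join(failures))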

7.5 Other Considerations for a Software Test Program

The preceding sections described a complete software test program for the three general categories of software: COTS (including ITS standard products), modified standard products, and new or custom software. For most TMS projects, a considerable amount of new or custom software will be required to meet the agency's total set of requirements. Even if standard ITS software can be acquired from others, new test procedures will be required to accept this software and integrate it with software that will be specifically designed, developed, and implemented for your TMS project. In either case, the test program needs to be developed in concert with the development of the procurement specifications. The agency should consider the following recommendations when developing the test program.

7.5.1 Who Develops the Test Procedures

The software developer should develop the test procedures for unit and software build integration testing, with the agency retaining the right to approve them. If a system integrator is involved in the TMS project, the system integrator should be required to develop an overall system test plan, test schedule, and all hardware/software integration, subsystem, and system-level test procedures. The agency has responsibility for final approval of those plans and therefore must carefully evaluate them to ensure that all of its operational requirements are covered by the test procedures. The system integrator should also be required to develop regression test procedures (see section 4.4.8) and update them as system deployment progresses. Following system acceptance, the maintenance contractor should be required to update the regression test procedures as appropriate for system maintenance, enhancements, and expansion.

7.5.2 Cost of Testing Program

Because much of the software for the TMS will be new or custom, the software test program will be extensive and expensive. Depending on the robustness of the software development environment and its proximity to the TMC site, some costs associated with agency or integration contractor testing can be mitigated. To keep costs down, the agency should consider requiring (in the procurement specifications) that all COTS and ITS standard product software (including documentation) be sent directly to the software development facility rather than having the agency receive it and then trans-ship it to the developer or integration contractor. The agency should also consider having hardware vendors "drop ship"35 one or two factory acceptance tested products to the development site to allow for hardware/software integration testing. This is particularly useful, and cost effective, when elements of the final system can be utilized at least temporarily in a development environment for development and integration testing. Items that are no longer needed for the development environment can be shipped to the installation site or placed in the spares inventory. However, as previously discussed (see section 7.2.3.1), the development environment must be sustained even after final system acceptance for the entire life of the TMS.

7.5.3 Test Schedule

The procurement specifications should outline how the software test program fits into the overall project schedule. The integration contractor should be required to develop a detailed development and test schedule with the TMS test plan. This schedule must allow sufficient time for the agency to review and approve preliminary and detailed software designs and plans for setting up and operating the software development environment as well as reviewing and approving test procedures and witnessing acceptance tests. Because much of the software will be a new or custom design, it may take a year or more to develop. However, early deliveries of representative computer platforms and other system hardware elements (e.g., communication equipment and field devices that have passed factory acceptance testing) to the software development site can improve the overall development and testing aspects of the project.

Test planning and schedules must allow for test procedure review, correction and subsequent approval, occasional re-testing after test failures, and rescheduling for unforeseen events such as delays in hardware shipments, weather-related delays, and the unavailability of key test personnel. The procurement specifications must include specific provisions to address the inevitable test failures, re-testing, and consequences with respect to schedule and cost.

7.6 Summary

This chapter has considered the testing requirements for TMS software from design reviews through hardware/software integration testing. It has offered some guidance as to what types of testing should be considered and when, who should develop test procedures, and testing in both the development and operational environments. The need for maintaining a software development environment even after final system acceptance was also stressed.

As with the hardware test program, the software test program is also dependent on the procurement specifications. The procurement specifications must establish the requirements for the contract deliverables and the testing program, specify the consequence of test failure, and identify the schedule and cost impacts to the project.

The majority of the discussion has assumed that the TMS software is new or custom. There are now a number of "standard" ATMS software packages available from a variety of vendors that are likely to be able to provide most if not all of the functionality necessary for a robust TMS. Such software typically includes DMS control (including travel time calculations), traffic monitoring, arterial control, CCTV control, HAR control, incident tracking and management, web interfaces, and center-to-center capabilities. Agencies are encouraged to review the products generally available to determine if and how they might meet their needs.

The selection of such "standard" products does not eliminate the need for extensive software testing, regardless of the track record for the product. The agency needs to work with the supplier to ensure that the system functionality is well defined (requirements) and that the system can be tested to show how it meets those requirements; examples include data collection accuracy, data error handling, map displays, algorithm accuracy, and screen performance. It is also important that an acceptance test procedure be developed, possibly by the vendor, that will serve as the basis for acceptance of the system. Again, this is the agency's opportunity to verify the full and complete operation of the system and to verify that it can handle the full load and expansion requirements; such a test should include the maximum number of workstations, intersections, CCTV devices, DMS, etc. It is likely that simulators will be required to support this type of extensive testing.
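The following sketch illustrates the idea of simulator-driven load testing: traffic is generated for the maximum configured number of devices and the polling cycle time is checked against a requirement. The device counts, cycle limit, and simulated failure rate are assumptions; a real simulator would exercise the vendor's actual device protocols and the workstation load as well.

# A minimal load-simulation sketch, assuming hypothetical device counts; the
# idea is to generate traffic for the maximum configured number of devices and
# confirm the system keeps up with the required polling cycle.
import random
import time

MAX_DMS = 150                 # assumption: expansion requirement from the spec
MAX_DETECTOR_STATIONS = 800   # assumption
POLL_CYCLE_LIMIT_S = 60.0     # assumption: required full polling cycle time


def simulate_device_poll(device_id: str) -> dict:
    """Stand-in for one device poll; a real simulator would speak the actual
    device protocols (e.g., NTCIP) rather than return a canned result."""
    return {"id": device_id, "ok": random.random() > 0.001}


def run_full_load_cycle() -> tuple[int, float]:
    """Poll every simulated device once and report failures and elapsed time."""
    start = time.monotonic()
    device_ids = [f"DMS-{i}" for i in range(MAX_DMS)] + \
                 [f"DET-{i}" for i in range(MAX_DETECTOR_STATIONS)]
    failures = sum(1 for d in device_ids if not simulate_device_poll(d)["ok"])
    return failures, time.monotonic() - start


if __name__ == "__main__":
    failures, elapsed = run_full_load_cycle()
    print(f"{MAX_DMS + MAX_DETECTOR_STATIONS} devices polled in {elapsed:.2f}s, "
          f"{failures} failures")
    assert elapsed <= POLL_CYCLE_LIMIT_S, "polling cycle exceeded the required time"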

One final comment: most of today's underlying COTS products such as Windows, Oracle, and others have software flaws (bugs, if you will). Some of these may adversely affect the stability of the TMS application software. Hence, it is important that system upgrades be handled in a cautious manner, as the TMS software will have achieved a track record, most likely on older versions of the COTS products. The rate of update needs to be controlled by the developer to ensure that a previously stable product does not become unstable due to some unforeseen consequence of an operating system upgrade. Controlling such updates is one of the responsibilities of the CCB discussed earlier. The task of the CCB is to assess the likely impact on the operational system when COTS updates are suggested. The CCB can then examine the pros and cons of the upgrade and develop a cautious procedure – perhaps starting with the "test" environment first.


33 Formerly known as Capability Maturity Model (CMM).

34 This technique has been used to "hold" the source code for use by the agency in the event the supplier does not survive or terminates support for the product. What is very important is that the source code must be current – i.e., maintained at the same level as the operational system – and it must include a complete development platform including all libraries, compilers, linkers, loaders, and "build" environments – essentially everything necessary to convert the source code into the executable that is running on the production system.

35 This is delivery to an intermediate shipping address for a limited number of products that would normally be sent to the installation site. Note: where this is done, there must be a methodology for conducting receiving inspections and limited functional testing at the intermediate address (i.e., the development facility) for these items to be accepted. Otherwise, conditional acceptance (for partial payment purposes) may be allowed at the intermediate location with final acceptance at the installation site.
