Office of Operations
21st Century Operations Using 21st Century Technologies

Improving Transportation Systems Management and Operations – Capability Maturity Model Workshop White Paper – Performance Measurement

3. State of the Practice for the Performance Measurement Dimension

3.1 The Performance Measurement Dimension

Performance Measurement is the means of determining the effectiveness of organizational activity using tools such as measures definition, data acquisition, and measure utilization. It both establishes the framework for conducting performance measurement and applies those tools. Performance Measurement is fundamental to all other capability dimensions in that it identifies how well an organization is delivering operations services and identifies areas that need improvement. Performance measurement for operations encompasses several aspects of mobility, including congestion level and travel time reliability. The capability-level criteria used in the self-assessments for this dimension are shown in Table 3.1.

Table 3.1 lists the criteria an agency must meet to achieve each of the four capability levels for the performance measurement dimension.

Table 3.1 Performance Measurement Criteria for Level Achievement
Capability Level 1: Some output-level performance is measured and reported by some jurisdictions.
Capability Level 2: Output-level performance measures are used directly for after-action debriefings and improvements; data easily available and “dashboarded.”
Capability Level 3: Outcome-level measures identified (networks, modes, impacts) and routinely utilized for objective-based program improvements.
Capability Level 4: Output and outcome performance measures reported internally for utilization and externally for accountability and program justification.

Among the 23 workshops, the average self-assessed capability for Performance Measurement is 1.92, with nine sites at Level 1, 11 at Level 2, and three at Level 3. Figure 3.1 shows the Performance Measurement assessments relative to the other dimensions.

Figure 3.1 Graph. Performance Measurement Compared to Other Dimensions of Capability

Figure 3.1 is a graph that highlights the performance measurement dimension line.

(Source: Cambridge Systematics, Inc. and Parsons Brinckerhoff.)

The discussion of the state of the practice regarding the Performance Measurement Dimension below is divided into key elements, based on the approach used in the AASHTO Guide to Systems Operations and Management:

  • Measures definition;
  • Data acquisition; and
  • Measures utilization.

The material that follows discusses important observations regarding the current state of play in each key element.

3.2 Measures Definition

  • Policy visibility of performance. Most states/regions are conscious of the impending requirements of MAP‑21, and performance measures are much discussed in professional circles. All locations were at least in the stage of developing operations performance measures and most had started to compile them. With a few exceptions, the measures were limited to output measures, with the vast majority being related to TIM, probably because of the availability of data from incident management logs and the focus on TIM programs and strategies that is emerging across the country. The TIM output measures were relatively consistent in their definitions as they mostly relate to the “incident timeline,” but subtle differences in defining when a specific TIM activity starts and stops were present (e.g., what signifies the end of an incident: all lanes open, all emergency vehicles absent, or return to normal traffic?). Several agencies cited the need for guidance and standardization in performance measure development.
  • Performance measure definition. Lack of performance measure definitions for weather, work zones, and signalized arterials was frequently mentioned as a problem. The most easily accessible performance measures identified in the workshops relate to freeways, probably due to data availability, a longer history of freeway measurement nationwide, and the fact that State DOTs (a key target audience for the workshops) are largely focused on freeway operations. Even with consistent definitions, obtaining data for performance measures in these areas will be a challenge. Defining performance measures for programs where multiple agencies are involved – such as incident management – is sometimes problematic. DOTs and public safety agencies may hold themselves to different standards for the stages of incident management, and the stages themselves (i.e., how the incident “timeline” is defined) may be defined differently. Ownership of measures also can be a challenge in reaching consensus on measure definitions, especially when multiple agencies are involved and one agency defines measures for processes it does not necessarily “own.” This can be a special problem for State DOTs that depend on law enforcement CAD data.
  • Input, output, and outcome measures. The agencies that defined outcome measures reported having reviewed the literature and observed what other agencies were doing, but a single reference for guidance was not used. As with output measures, the need for guidance and standardization of outcome measures was cited by several agencies. A few agencies noted a disconnect between operations units and planning units in terms of performance measures, i.e., different measures are used. At least two agencies identified the need to track assets (“input” performance measures in the literature) in addition to outputs and outcomes. They view the hard assets as critical to providing operations services and need better data to support maintenance and replacement budgeting decisions.
  • Resources for Performance Measurement. Obtaining funding for Performance Measurement is a challenge for some agencies. In some cases, upper management is not convinced of the need for it, and the funding must come from existing budgets. One promising trend observed in several State DOTs (and discussed in others) involved active use of an existing agencywide Performance Measurement office/unit or an intention to establish such a unit in response to the Performance Measurement requirements of MAP‑21. Responsibility for various measures and measure reporting was often allocated to multiple units within an agency, and a unit not connected with TSM&O is often responsible for reporting performance agencywide.
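The “incident timeline” ambiguity noted above can be made concrete: the same incident log yields different clearance times depending on which event is treated as the incident’s end. A minimal sketch in Python, with hypothetical event names and timestamps (not a standard log schema):

```python
from datetime import datetime

# Hypothetical incident log; field names are illustrative, not a standard schema.
incident = {
    "detected":            datetime(2024, 5, 1, 7, 15),
    "all_lanes_open":      datetime(2024, 5, 1, 8, 5),
    "responders_departed": datetime(2024, 5, 1, 8, 20),
    "traffic_normal":      datetime(2024, 5, 1, 8, 50),
}

def clearance_minutes(log, end_event):
    """Clearance time in minutes from detection to the chosen end event."""
    return (log[end_event] - log["detected"]).total_seconds() / 60

# The same incident produces three different "clearance times"
# depending on the agreed definition of the incident's end.
for end in ("all_lanes_open", "responders_departed", "traffic_normal"):
    print(end, clearance_minutes(incident, end))
```

Agreeing up front on which event closes the timeline is therefore a prerequisite for comparing clearance times across agencies.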

3.3 Data Acquisition

  • Existing data availability. The availability of data for incident management activities varies among agencies. Some TSM&O units collect and “own” TIM data; in other agencies, TSM&O units depend on emergency responder CAD systems for TIM data. Freeway detector data also are widely available, but not all agencies use them to develop congestion statistics (outcome measures). Work zone data are difficult to obtain. Work zones are usually overseen by other units within the agency (e.g., construction, capital projects) that might not be connected to other operations activities, even during implementation, and that therefore follow their own data processes. Another issue is that the nature of a work zone is constantly changing (e.g., the number of lanes closed), and it is difficult to learn when these changes occur so that they can be correlated with changes in travel-time performance. Even if a documentation process exists, contractors might not report the changes that would allow accurate tracking.
  • Outsourcing. Performance data (e.g., volumes and speeds) from agency-owned field equipment are widely available and can be used to develop outcome measures. Because maintaining and replacing field equipment is expensive, some agencies are investigating private vendor travel time data for use in both operations strategies and performance measurement. Private vendor vehicle probe data are becoming more widely available, and several agencies cited MAP‑21 as a driving force behind travel-time/speed data acquisition. Many agencies said they were looking into probe data not only to meet MAP‑21 requirements but also to fill gaps where detectors do not exist. Several agencies have existing contracts with traffic information providers, while others are investigating such contracts.
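Probe-based travel time data of the kind described above are typically summarized with reliability measures such as the buffer index and planning time index (the 95th-percentile travel time relative to the mean and to free flow, respectively). A minimal sketch, assuming hypothetical corridor travel times and free-flow value:

```python
import statistics

# Hypothetical probe travel times (minutes) for one corridor and departure period.
travel_times = [12.0, 12.5, 13.0, 12.2, 14.0, 18.5, 12.8, 13.5, 22.0, 12.4]
free_flow = 11.0  # assumed free-flow travel time for the corridor (minutes)

def percentile(data, p):
    """Interpolated p-th percentile of a list of values."""
    s = sorted(data)
    k = (len(s) - 1) * p / 100
    f = int(k)
    c = min(f + 1, len(s) - 1)
    return s[f] + (s[c] - s[f]) * (k - f)

mean_tt = statistics.mean(travel_times)
tt95 = percentile(travel_times, 95)

# Buffer index: extra time a traveler should budget relative to the mean trip.
buffer_index = (tt95 - mean_tt) / mean_tt
# Planning time index: total time to budget relative to free-flow conditions.
planning_time_index = tt95 / free_flow
```

Because these measures need only a time series of travel times, they can be computed the same way from detector-derived or vendor probe data, which is one reason probe data are attractive for filling detector gaps.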

3.4 Measures Utilization

  • Internal utilization. Incident management and snow and ice control are the two areas where performance data are used for operational management. The high public visibility of road clearance conditions and operations has led many snow belt States to track and report clearance in real time. Regarding incident management, while many States collect basic incident data (number, type, location) and several report clearance times, including externally, only a few make routine use of the data to modify incident management programs. Some agencies conduct data-supported after-action reviews of incidents, drilling down into what caused problems and what worked well in managing the incident. One agency noted there is reluctance to conduct these reviews for fear that blame will be assigned. Traveler information program performance (e.g., web site hits and VMS messages) also was noted by several areas: usage statistics and trends were monitored and in some cases influenced operational decisions on system enhancements or upgrades. The development of outcome measures is impeded by limitations on the availability and integration of multisource data. While there are some good examples (RITIS from the University of Maryland or RADS from Arizona), many areas struggle to efficiently acquire, integrate, and use multisource data for performance measures.
    Agencies are struggling to determine how to use performance measures in the decisionmaking process. A common theme across all workshops was that performance measures are not substantially integrated into decisionmaking, and that this is likely the most difficult barrier to overcome. Sometimes minor changes in practice are made based on performance measure information – such as increasing service patrols or identifying congested locations for ramp meters – but performance measures are not used at the program level to determine funding levels or emphasis areas. Along the same lines, no agency has used performance measures to develop a composite picture of congestion – such as the “congestion pie” – that could help guide program and project investment levels holistically. When agencies move beyond TIM performance measures, the requisite data management and analysis tools become more complex to develop and maintain.
  • External reporting. Production of periodic performance reports was the most common use of performance measures, although not all agencies produced them. A few states included TSM&O-related activity measures – largely output data – on external (web site) dashboards. Because of data availability and the ease of summarizing them, incident characteristics were by far the most frequent subject of performance reports. Travel time (congestion) reports based on measured data were much rarer, but many states are in the process of developing them. In a few cases, the operations agencies developed congestion reports based on field data from detectors (limited to freeways). In other cases, other units or agencies developed the congestion report, and the resulting products were not integrated with operations. There was general agreement that private vendor data offer great potential for future congestion reports, as they are not limited to freeways with roadway detectors. Agencies that have produced performance reports are frustrated that the public does not seem to understand them or appreciate how they relate to travelers’ experiences, highlighting the need for more effective ways of communicating performance measures. One possible reason is that performance reporting currently is done from the facility perspective, while travelers experience the system through complete trips. Customer surveys on the delivery of operations services are rare, and when conducted they are one-time studies rather than periodic efforts, or operations services are included as a small part of a broader agency customer relations survey. At least one agency collects customer feedback from service patrol assists.
  • Management accountability. Accountability for TSM&O program performance is in the early stages. Several states have incident clearance targets but conduct reviews only when the target (often 90 minutes) is exceeded. No instances were described in the workshops where DOT units were subject to performance reviews in this regard. A few states report using Performance Measurement on specific major projects such as corridor improvements. These instances can present opportunities, such as expanding a successful work zone performance measurement and reporting initiative (e.g., travel times, safety), in part by leveraging the initiative’s demonstrated success to secure the required resources and technical support.
  • Comprehensive performance management program. No agency has achieved a fully integrated Performance Measurement system that links inputs, outputs, outcomes, and targets into a formal TSM&O performance management process. Agency staff are aware of the importance of outcome measures to making the business case for TSM&O to decision makers and the public, but they have made very limited progress in considering the data and analytics related to outcome measures such as travel time, reliability, and safety. Part of the reason is that these outcomes also are affected by other programs, such as capacity expansion, demand management, alternative mode use, and safety countermeasures. It is clear that outcomes must be managed on a cross-jurisdictional basis, but this has not occurred.
  • Outsourcing of outcome measures. Private sector probe data are seen by many states/regions as a way of obtaining useful performance analyses. The lack of progress in this area appears to have inhibited staff from making the business case for TSM&O benefits on either a stand-alone or alternative-investment basis. Several states are in the early stages of identifying outcome measures and acquiring probe data to support them. DOTs with extensive toll operations are capitalizing on tags as probes. A number of states and regions recognize the need to focus on Performance Measurement for arterial operations, although data availability is an obstacle.
  • Use of performance measures in business case materials. Only a few agencies have prepared a TSM&O strategic plan that identifies TSM&O goals and objectives and develops performance measures to track progress toward them. Few agencies had guiding documents (e.g., an operations data business plan or Performance Measurement plan) for the Performance Measurement activities in place; most were developed with minimal advance planning. Several agencies cited a need for guidance on conducting before/after evaluations of operations projects; such guidance ideally would include how to package results to highlight the benefits of operations to management and the public. Demonstrating the benefits of operations was a motivation for several agencies to undertake Performance Measurement in the first place. There is a desire to use consistent measures across all agency functions.