
Lessons Learned: Monitoring Highway Congestion and Reliability Using Archived Traffic Detector Data


3.0 Lessons Learned

Lesson 1: Don't wait for a "silver bullet."

All public agencies must make decisions based on limited or less-than-ideal information, and performance monitoring and the decisions that stem from it should be viewed in this context. Some analysts believe that a wide gap remains between a multi-modal, system-wide performance measurement system and the data available to support it. Some agencies have taken a "wait-and-see" attitude toward using archived data from traffic management centers; others hope that probe vehicle data from cell phones or vehicle monitoring systems will close the data gap for performance monitoring; still others rely only on their own data and do not trust data collected by another agency. Yet numerous practitioners around the country have been using available data resources to make informed decisions about system performance.

The lesson learned is that transportation agencies should not wait idly for a "silver bullet" dataset or collection technique. Change in transportation is more often evolutionary than revolutionary, and agencies may find that what seemed like an ideal data source has problems of its own. Of course, agencies must become comfortable with available data resources and their features and limitations. In a limited number of instances, available data may be of such poor quality that it should not be considered for performance monitoring; data that poor is usually obvious to even the casual observer.

The best practice appears to be using available data resources within an analysis framework that can eventually capture the benefits of improved or ideal data. An example of this practice comes from the Florida Department of Transportation (DOT). In its mobility performance measures program, the Florida DOT has designated the reliability of highway travel as a key mobility measure for the State highway system.2 Ideally, a travel reliability measure would be formulated from a continuous (i.e., 24 hours a day, 365 days per year) data collection program over all highways. However, like most states, Florida does not have such a continuous data collection program, even in its major cities. Instead, the DOT plans to collect data for its reliability measure through a combination of archived data and additional floating car data collection.
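
As a rough illustration of how a reliability measure can be computed by pooling whatever travel time observations are available, consider the sketch below. It is a minimal example only, not Florida DOT's actual method; the on-time threshold and all data are hypothetical.

```python
# Minimal sketch of a travel reliability measure from pooled observations.
# Not Florida DOT's actual method; the 1.25 "acceptable travel time"
# multiplier and all data below are hypothetical.

def percent_on_time(travel_times, free_flow_time, threshold=1.25):
    """Percent of trips completed within threshold x free-flow travel time."""
    acceptable = free_flow_time * threshold
    on_time = sum(1 for t in travel_times if t <= acceptable)
    return 100.0 * on_time / len(travel_times)

# Pool archived detector-based travel times with floating car runs (minutes).
archived_runs = [12.1, 13.4, 15.0, 12.8, 18.2]
floating_car_runs = [12.5, 16.9]
all_runs = archived_runs + floating_car_runs
print(f"{percent_on_time(all_runs, free_flow_time=12.0):.0f}% of trips on time")
```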

Another advantage of embarking on a performance monitoring program even without the ideal data set is that agencies grow accustomed to reporting and using measures in their day-to-day management activities and decision-making. These functions are ultimately what performance measurement should achieve. By starting now, agencies learn how best to use performance measures for their own purposes.

Lesson 2: Travel time modeling and estimation techniques will always be necessary.

Many performance monitoring programs rely on speed- or travel time-based performance measures. As such, link travel time data form the basis for performance monitoring as well as for numerous other advanced transportation applications (such as traveler information and dynamic routing). Because link travel time data are not readily available or cheaply collected for most highway links, many performance monitoring programs have relied on speed and travel time modeling and estimation techniques.

Some analysts have suggested or implied that if one cannot directly measure link travel times, then travel time-based performance measures are not feasible.3 Other analysts predict a future in which link travel times will be ubiquitous and travel time modeling or estimation will be unnecessary. However, the inherent nature of a performance-based planning process requires that travel time-based performance measures be estimated for future planning scenarios. The lesson learned is that travel time modeling and estimation techniques will always be necessary (even with widespread availability of collected link travel times), particularly in a performance-based planning process. One of the challenges will be to ensure that estimation techniques produce travel time estimates roughly compatible with those from direct measurement.
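
As an illustration, one common estimation approach infers a route travel time from detector spot speeds. The sketch below is a minimal example under the assumption that each detector's speed holds over an assigned segment length; the segment lengths, speeds, and probe measurement are hypothetical. It also shows the kind of comparison against direct measurement that the compatibility challenge implies.

```python
# Minimal sketch: estimate a route travel time from detector spot speeds,
# assuming each detector's speed holds over an assigned segment length.
# Segment lengths (miles), speeds (mph), and the probe run are hypothetical.

def estimated_travel_time(segments):
    """Sum per-segment travel times in minutes; segments = [(miles, mph), ...]."""
    return sum(length / speed * 60.0 for length, speed in segments)

segments = [(0.8, 55.0), (1.2, 32.0), (0.5, 18.0), (1.0, 48.0)]
estimate = estimated_travel_time(segments)

# Where a directly measured travel time exists (e.g., a probe vehicle run),
# tracking the error lets an agency check the estimates for bias.
measured = 7.5  # minutes, hypothetical probe vehicle run
print(f"estimated {estimate:.1f} min, error vs. measured {estimate - measured:+.1f} min")
```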

Lesson 3: Visualize the data; pictures are cool!

The audience for transportation performance information can include a wide range of transportation practitioners, agency mid- and upper-level managers, elected officials, business leaders, and the media. The lesson learned is that simple charts and graphics are more easily interpreted by this diverse audience than complex data tables and lengthy text descriptions. Data collectors and analysts may be adept at interpreting complex technical data because that is their primary job function; however, other non-technical audiences may only be able to devote 30 to 60 seconds to understanding key report elements.

Several practitioners have mentioned the "spouse test," in which they ask their spouse (who has a non-technical background) to review and interpret graphics or charts that illustrate transportation performance. Another rule of thumb comes from Mark Hallenbeck of the Washington State Transportation Center, who has remarked that every research project or activity should be summarized with a single page of text and a picture or graphic. Other practitioners describe themselves as disciples of Edward Tufte, who has written several award-winning books on displaying technical information in meaningful ways.4

The Washington State DOT is an agency that firmly believes in the power of graphical illustrations in performance monitoring. The DOT publishes a quarterly agency performance report full of charts that demonstrate progress on various goals and programs.5 Figure 5, for example, contains a graphical illustration of improvements from ramp metering in Seattle.

 

Figure 5. Graphic Illustrating Improvements from Ramp Metering in Seattle
This combination bar-line chart shows improvements due to ramp metering in Seattle. The line series shows that flow rates improved by 170 vehicles per hour per lane from 1999 to 2001, while the frequency-of-congestion bars show that level of service F occurred one day per week less often.

Source: Hallenbeck, M., Data Collection, Archiving and Performance Measures: Why Should Freeway Operations Care?, http://www.nawgits.com/icdn/data_for_freeway_ops.html, accessed Oct. 22, 2004.

Lesson 4: Whatever affects traffic should be part of performance monitoring.

In the past, many public agencies have struggled to collect credible data about transportation system performance. Traffic management centers are beginning to help fill the data gap for performance monitoring; however, many agencies still lack the resources to collect data beyond the speeds or travel times that directly describe system performance. A few agencies, however, have recognized that numerous activities and events (some beyond an agency's control) affect system performance. Thus, despite their best efforts and significant resource expenditures, some agencies may see a decline in measured system performance.

The lesson learned is that, to be effective, performance monitoring must also gather information on activities and events that can affect system performance, such as inclement weather and traffic incidents.

By tracking these influential factors, performance monitoring programs can better pinpoint why performance changes at certain times and which solutions are most appropriate. The data can also be used to demonstrate the benefits of operations strategies.
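
As a simple illustration of the idea, the sketch below pairs a daily congestion measure with weather and incident records so that unusually bad days can be traced to a likely contributing factor. All records, field names, and thresholds here are hypothetical.

```python
# Minimal sketch: pair a daily performance measure with explanatory factors
# (weather, incidents). All records, field names, and thresholds here are
# hypothetical.

daily_tti = {"2004-07-12": 1.10, "2004-07-13": 1.35, "2004-07-14": 1.12}
bad_weather_days = {"2004-07-13"}              # days with rain, snow, or fog
incident_counts = {"2004-07-13": 9, "2004-07-14": 2}

for day, tti in sorted(daily_tti.items()):
    factors = []
    if day in bad_weather_days:
        factors.append("bad weather")
    if incident_counts.get(day, 0) > 5:
        factors.append(f"{incident_counts[day]} incidents")
    note = ", ".join(factors) if factors else "no unusual factors"
    print(f"{day}: Travel Time Index {tti:.2f} ({note})")
```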

As an example, the Urban Congestion Reporting (UCR) Program, performed by Mitretek for the FHWA, reports several mobility and reliability performance measures on a monthly basis. The performance reports include contributing factors such as weather and incidents (see Figure 6).

Performance reports in FHWA's Intelligent Transportation Infrastructure Program (ITIP) provide another example of reporting transportation performance in the context of possible explanatory measures. In addition to several mobility and reliability performance measures, the reports include explanatory measures such as vehicle-miles traveled (VMT), bad weather days, and incident counts, along with data quality measures (see Table 4).

 

Figure 6. Monthly Summary from FHWA's Urban Congestion Report
This figure shows national and city-level summaries of traffic congestion and travel reliability. In addition to the traffic conditions for each city, the display also shows contributing factors such as weather and incidents.

Source: Mitretek Systems, Urban Congestion Reporting Program.

 
Table 4. Excerpt from ITIP Performance Report for Providence, Rhode Island
PROVIDENCE - JULY 2004: MONTHLY FREEWAY PERFORMANCE REPORT
Current Performance and Trends

Measures | Current Month (Jul 2004) | 3-Month Average (May-Jul 2004) | Short-Term Trend | Same 3 Months Last Year (May-Jul 2003) | Long-Term Trend

Performance Measures
Travel Time Index | 1.11 | 1.12 | -1% | n.a. | n.a.
Buffer Index | 27% | 28% | -1% | n.a. | n.a.
% Congested Travel | 49% | 48% | +1% | n.a. | n.a.
Total Delay (veh-hours) per 1,000 VMT | 1.48 | 1.53 | -3% | n.a. | n.a.

Explanatory Measures
Peak Period VMT (000) | 48,324 | 48,471 | 0% | n.a. | n.a.
Monthly VMT (000) | 173,037 | 169,241 | +2% | n.a. | n.a.
Bad Weather Days | 3 | 2 | +50% | n.a. | n.a.
Total Incidents | 194 | 215 | -10% | n.a. | n.a.

Data Quality Measures
% Complete | 93% | 95% | -2% | n.a. | n.a.
% Valid | 95% | 95% | 0% | n.a. | n.a.
% of VMT Covered | 53% | 52% | +1% | n.a. | n.a.
% of Freeway Miles | 34% | 34% | 0% | n.a. | n.a.

Source: Mobility Technologies, Inc., Intelligent Transportation Infrastructure Program
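
The mobility and reliability measures in Table 4 follow standard definitions: the Travel Time Index is the mean travel time divided by the free-flow travel time, and the Buffer Index is the difference between the 95th percentile and mean travel times as a percentage of the mean. The sketch below computes both from archived travel time observations; the data are hypothetical, and this is not ITIP's production code.

```python
# Minimal sketch of two measures reported in Table 4. The travel time
# observations (minutes) are hypothetical; this is not ITIP's production code.
import statistics

def travel_time_index(times, free_flow):
    """Mean travel time divided by free-flow travel time."""
    return statistics.mean(times) / free_flow

def buffer_index(times):
    """(95th percentile - mean travel time) / mean, as a percent."""
    p95 = statistics.quantiles(times, n=20)[-1]  # last cut point = 95th pct.
    mean = statistics.mean(times)
    return 100.0 * (p95 - mean) / mean

times = [10.5, 11.0, 10.8, 12.4, 11.2, 15.9, 10.9, 11.5, 13.2, 11.1]
print(f"Travel Time Index: {travel_time_index(times, free_flow=10.0):.2f}")
print(f"Buffer Index: {buffer_index(times):.0f}%")
```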

Lesson 5: Use can improve quality.

A vested interest in data collection is one of the best motivators for quality data. Poor data quality can sometimes result when data collectors are physically or institutionally distant from the data users. A common example of this situation is State agencies collecting data to meet Federal reporting requirements. Another example is a division within a State DOT charged with collecting data of primary interest to another division or department. A vested interest occurs when the data collectors are also data users or are directly affected by decisions made with the data they collect. To the extent that this vested interest can be created, data quality will improve.

There are numerous instances in which a data user has dealt with poor quality data collected by another agency or another division within the same agency. The first response is typically to notify the data collectors of the problems, since in many cases the collectors are unaware of quality problems in data they do not use themselves. If the quality problems persist, the next response is typically to encourage or require the collectors to meet certain data quality criteria. This response may yield some improvement, but some agencies may "game" the system or post-process data to meet the quality checks without actually improving the data collection process.
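
The kinds of quality criteria involved resemble the record-level screening sketched below, which feeds measures such as the "% complete" and "% valid" rows reported in Table 4. The thresholds and records here are hypothetical and would need calibration to an agency's actual detectors.

```python
# Minimal sketch of record-level screening behind measures such as
# "% complete" and "% valid." The thresholds and records are hypothetical
# and would need calibration to an agency's actual detectors.

def is_valid(record):
    """Basic range and consistency checks on one 5-minute detector record."""
    volume, occupancy, speed = record
    if not 0 <= volume <= 250:        # vehicles per 5 minutes per lane
        return False
    if not 0 <= occupancy <= 100:     # percent of time detector is occupied
        return False
    if not 0 <= speed <= 100:         # miles per hour
        return False
    if volume == 0 and speed > 0:     # a speed with no vehicles is suspect
        return False
    return True

expected_records = 288                # 5-minute records in one day
records = [(120, 8.5, 61.0), (0, 0.0, 0.0), (30, 150.0, 55.0)]  # hypothetical
pct_complete = 100.0 * len(records) / expected_records
pct_valid = 100.0 * sum(is_valid(r) for r in records) / len(records)
print(f"{pct_complete:.0f}% complete, {pct_valid:.0f}% valid")
```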

The lesson learned is that, in these instances, the agency or workgroup collecting data should be encouraged to use the data to improve its own agency functions or decision-making. An example of this practice comes from the Highway Performance Monitoring System (HPMS), in which the State DOTs report various highway and travel data to the Federal Highway Administration. Originally developed in 1978, the HPMS and its reporting requirements were seen by some State DOTs as simply another requirement that did not produce usable data for their own agencies. As a result, the quality of HPMS data suffered in some states in its early years. In the 1990s, many State DOTs began to integrate HPMS data collection into their own data programs and to supplement their own analyses with data collected for HPMS. The net result has been more scrutiny of the HPMS data by State DOTs, with fewer concerns about data quality.

Another example comes from the use of archived traffic detector data. In some cities, users of archived data lamented its poor quality for their particular applications. A typical response was to notify the traffic operations center of the poor data quality, which in some instances may have led to improvements. However, many traffic operations centers have become more interested in improving detector data quality because they want to use the archived data for additional functions within their own workgroup (such as performance monitoring, ramp metering, and posting estimated travel times on dynamic message signs). As more traffic operations centers use archived data for these new and more sophisticated applications, greater attention will be paid to data quality.

Lesson 6: Support for operations can be built with quality archives.

Public agencies typically struggle with their budgets. As a result, many transportation operations divisions must justify their expenditures on operations and management activities or risk having their budgets cut or diverted to other programs. The lesson learned here is that data collected and archived while managing the transportation system can be readily reformulated to demonstrate the benefits of operations and management activities. However, the reuse of operations data for analytical purposes requires at least two things: 1) the foresight to develop information systems that support real-time traffic management activities as well as historical analyses; and 2) a commitment to collect and maintain quality data that can be used to demonstrate the benefits of operations. Such processes have long been in place in DOTs in the form of asset management systems, such as those for pavement and bridge management. Operations must compete internally for resources with these programs, which are usually better equipped with "hard" information about system conditions and expected benefits. Archived operations data can help to "level the playing field."

Anecdotal evidence from several states points to cases where the value of operations activities was questioned by State legislators or upper-level managers within a State DOT.6,7 For example, an operations manager from the Minnesota DOT highlighted the importance of accurate data in maintaining political support for the agency's incident response team. Members of the State legislature questioned the value of the freeway service patrol, suggesting that the State could not afford to change commuters' tires and give away free gas. With its archived data, however, the Minnesota DOT was able to show that the freeway service patrol was the initial detection source for about 20 percent of incidents that blocked State highways, and its quick access to this data allowed it to respond to the legislators. In fact, the DOT is now looking for innovative ways to expand the freeway service patrol with its new-found support, which the DOT manager attributed to the agency's ability to access and analyze quality archived data.

Lesson 7: The devil is in the details.

Over the past three years, the Mobility Monitoring Program team has gathered archived traffic detector data from more than twenty agencies and, in doing so, has encountered a wide variety of data management and archiving practices. Some of these practices could be described as sloppy, producing misleading or inaccurate data for performance monitoring applications. Others are simply not documented well enough, leading to confusion or uncertainty during data analysis.

The lesson learned is that the devil is in the details; that is, seemingly minor data management practices can have significant consequences when archived data are used for performance monitoring.
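
One hypothetical illustration of such a detail, in the spirit of the practices the program encountered rather than a specific agency's data: if an archive stores failed-detector intervals as zero speed rather than as missing values, simple averages silently understate speeds.

```python
# Hypothetical illustration: failed-detector intervals stored as 0 instead
# of missing (None) silently bias a simple average speed.

speeds_zero_filled = [62.0, 58.0, 0.0, 60.0, 0.0, 61.0]    # failures stored as 0
speeds_with_nulls = [62.0, 58.0, None, 60.0, None, 61.0]   # failures stored as missing

naive_avg = sum(speeds_zero_filled) / len(speeds_zero_filled)
valid = [s for s in speeds_with_nulls if s is not None]
correct_avg = sum(valid) / len(valid)

print(f"naive average:   {naive_avg:.1f} mph")    # 40.2 mph -- misleading
print(f"correct average: {correct_avg:.1f} mph")  # 60.2 mph
```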

Lesson 8: Find and fix the barriers that hinder performance monitoring.

Some of the barriers to the development of archived data systems are similar to those experienced in further developing transportation operations and management functions: lack of funding, limited professional capacity to manage large databases, and uncertainty about data quality.

It will be vitally important to identify and remove these and other barriers to performance monitoring.

Lack of funding will always be a potential barrier in public agencies. Generally, though, if a product or idea makes intuitive sense and provides clear benefits, funding becomes less of an issue. In some cases, funding for data archiving systems has not been readily available because no agency was seen as a true beneficiary or data-hungry user, or because the archived data were of such low quality that no application existed for them. In a few instances, funding for data archiving systems became a non-issue once the benefits and applications of the archived data were clearly defined and desired by several agencies.

The professional capacity to manage large databases is not universal in the traffic management industry, although a few states and regions have managed to custom-build powerful and user-friendly data warehouses. The problem is most prevalent in smaller areas, where the skill sets and staff time to build data warehouses are not available and limited commercial off-the-shelf products force agencies to pay for custom database solutions. There are numerous lessons to be learned about data warehousing from other industries (such as banking and retail sales), as well as from the few good examples of data warehouses already built for traffic detector data.

Uncertainty about data quality has stalled or hindered efforts to use archived data in several cities. In some instances, the data quality may be acceptable, but unfamiliarity with the data and how they are collected and managed causes uncertainty. In other cases, uncertainty about data quality is rooted in bad experiences with trying to salvage poor quality data for certain planning applications. FHWA has initiated several data quality activities that attempt to address this barrier to using archived data systems. For example, FHWA has defined several standardized measures of traffic data quality and is producing guidelines for implementing data quality assessment procedures.8 Two other white papers on traffic data quality were also prepared for regional workshops to discuss data quality issues.9

Lesson 9: Performance monitoring may be a "killer app" for archived data.

Traffic managers are, by the very nature of their work, most interested in real-time data on current traffic conditions and events. The "champions" for formalizing data archiving in the National ITS Architecture in the mid-1990s were mostly planners, researchers, and other data-hungry analysts. In some cases, traffic managers supported minimalist data archives but were seldom data users; in even fewer instances were traffic managers champions for developing data archives. Yet traffic managers may be in the best position to champion and implement data archiving systems: they collect the data, they maintain the equipment, and they are most familiar with data collection devices and protocols. The only thing missing has been a tangible benefit or application to motivate traffic managers to assume responsibility for developing and maintaining data archives.

Current trends and anecdotal evidence indicate that more traffic managers have taken an interest in developing and maintaining data archives. At least two applications appear to provide tangible benefits to traffic managers: performance monitoring and the use of archived data to support real-time functions such as posting estimated travel times on dynamic message signs.

Of these two applications, performance monitoring appears to be the more compelling and the more likely to strengthen traffic managers' interest in developing data archiving systems. Short-term traffic forecasting procedures that use historical traffic patterns in their algorithms are another application that could push the need for better data and more functional archives; however, the eventual adoption of such methods by operators is not yet known.
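
As an illustration of how an archive supports such forecasting, the sketch below blends the archive's historical average for a given weekday and time-of-day interval with the latest observation. This is a deliberately simple sketch, not a production forecasting algorithm; the records and blending weight are hypothetical.

```python
# Minimal sketch: forecast the next interval's volume from the archive's
# history for the same weekday and time-of-day interval, blended with the
# latest observation. The records and blending weight are hypothetical.
from collections import defaultdict

history = defaultdict(list)  # (weekday, interval) -> archived lane volumes

# Populate from the archive (hypothetical records: weekday, interval, volume).
for weekday, interval, volume in [(1, 96, 410), (1, 96, 395), (1, 96, 430)]:
    history[(weekday, interval)].append(volume)

def forecast(weekday, interval, current_volume, weight=0.5):
    """Blend the historical average with the latest observation."""
    past = history[(weekday, interval)]
    historical_avg = sum(past) / len(past)
    return weight * historical_avg + (1 - weight) * current_volume

print(f"forecast: {forecast(1, 96, current_volume=450):.0f} vehicles")
```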

Lesson 10: Local knowledge contributes to national interpretation.

The FHWA and US DOT are responsible for reporting their performance in meeting agency goals, one of which relates to mobility and highway congestion. The data for this performance reporting (as for most US DOT data programs) come from State and local agencies. Thus, FHWA relies on these agencies to provide data in a standard format (such as the Highway Performance Monitoring System). Alternative data sources currently being explored for national performance monitoring include archived traffic detector data and traveler information data, which State and local agencies do not typically provide in a standard format. The end result, standard format or not, is that in many cases the gathered data are not sufficient by themselves to explain and interpret trends in system performance. Local knowledge from State or local agencies is often required to interpret trends or to better understand changes or relationships in system performance.

The lesson learned is that capturing local knowledge is desirable for interpreting system performance at a national level. State and local agencies are likely to be more familiar with the highways in their jurisdictions and with significant activities or events that have affected system performance. Some State and local agencies may be monitoring performance using other methods or techniques whose results could confirm or contradict national congestion monitoring results. Because of their experience with local issues, State and local agency staff can also serve as a "reality check" for data collected in national congestion monitoring. However, capturing local knowledge is currently, at best, an informal process involving sporadic communication with State and local agencies. There appears to be a need to formalize a process (perhaps in the form of a Delphi group) that solicits the knowledge and experience (as well as "event" databases) of State and local agencies in national congestion monitoring.

TTI researchers have been gathering this local knowledge for many years through several of their national congestion studies. In the media-friendly Urban Mobility Study, which currently reports congestion statistics for 75 cities using Highway Performance Monitoring System (HPMS) data, TTI researchers regularly contact State DOTs and metropolitan planning organizations to better understand or interpret reported trends in road mileage and travel statistics. This contact has been essential for smoothing year-to-year fluctuations caused by reporting differences or inconsistencies. Similarly, in the Mobility Monitoring Program, which reports congestion statistics for numerous cities using archived data, TTI researchers informally solicit feedback on city-specific reports that contain route-by-route congestion and reliability statistics. In most cases, State or local agencies have confirmed the overall trends reported; however, some occasionally dispute the credibility of the archived traffic data compared with their local congestion studies or experience. Many of these agencies are not currently using archived data for local performance monitoring; if they were, local knowledge could help improve both data quality and the use of the data for national purposes. Therefore, a lesson learned is that local use of archived data for performance monitoring will benefit national efforts and should be promoted.


2 Florida DOT Mobility Performance Measures Program, http://www.dot.state.fl.us/planning/statistics/mobilitymeasures/default.htm, accessed June 1, 2004.

3 For performance monitoring, the key consideration for any method that estimates travel times (rather than measuring them directly) is that the method be internally consistent and not show bias. Therefore, even if the estimated travel times do not match observed ones, monitoring their change will still result in useful trend information. Further, if the direction and size of the error is known, adjustments can be made.

4 For more information, see http://www.edwardtufte.com/tufte/.

5 Washington State DOT. Measures, Markers, and Mileposts. Accessed at http://www.wsdot.wa.gov/accountability/default.htm, June 2, 2004.

6 Several of the anecdotes were discussed at a USDOT/ITS America Data Quality Workshop in April 2004, summary available at http://www.nawgits.com/icdn/dq_workshop.html, accessed June 2, 2004.

7 Hallenbeck, M., Data Collection, Archiving and Performance Measures: Why Should Freeway Operations Care?, http://www.nawgits.com/icdn/data_for_freeway_ops.html, accessed Oct. 22, 2004.

8 Defining and Measuring Traffic Data Quality, EDL # 13767, available at http://www.its.dot.gov/itsweb/welcome.htm. Guidelines for traffic data quality programs will be available from FHWA in Fall 2004.

9 Advances in Traffic Data Collection and Management, EDL # 13766, and State of the Practice for Traffic Data Quality, EDL # 13768, both available at http://www.its.dot.gov/itsweb/welcome.htm.

