Office of Operations
21st Century Operations Using 21st Century Technologies

Applying Archived Operations Data in Transportation Planning: A Primer

5. PLANNING OPPORTUNITIES FOR ARCHIVED OPERATIONS DATA - BASIC TO INNOVATIVE

This chapter showcases five important planning activities that are enabled by archived operations data:

  • Identifying and confirming the problem.
  • Developing and reporting mobility performance measures.
  • Using archived operations data for input to and calibration of reliability prediction tools.
  • Performing before and after studies to assess project and program impacts.
  • Identifying causes of congestion.

The sections below introduce each planning activity by identifying the need for archived operations data for this activity, the types of relevant data, analytical procedures that transform the archived data into useful information, and the impact that using archived operations data for that planning activity can achieve. Following the introduction to each planning activity, one or more case studies are provided to demonstrate, in detail, how archived operations data can be used and what agency outcomes were achieved. Each case study highlights the use of visualization and its influence on the planning activity.

Problem Identification and Confirmation

Need

A long line of stopped cars in advance of a freeway exit.
Figure 18. Photo. Congestion on freeway.
Source: Thinkstock/tomwachs.

All agencies have limited dollars available to improve mobility, safety, and aging infrastructure; therefore, most agencies attempt to identify the "worst" problems in their jurisdiction (i.e., problems that have the greatest impact, and most likely, the greatest potential for a high return on investment (ROI) should countermeasures be put into place). An agency needs an efficient way to identify, quantify, and rank safety and mobility issues, and to determine the cause of these issues. Because of the dynamic nature of transportation, it is important that an agency be able to perform these types of analysis efficiently and cost-effectively many times throughout a year to respond to external political influences and/or to public concern. The results of these analyses can then drive the long-range transportation plan (LRTP), influence funding decisions, or simply be used to respond to public concerns over why a particular project was chosen over another.

Relevant Data

Historically, continuously collected permanent count station data and police accident reports would have been influential in this type of analysis. In some cases, temporary count station data may have been used, or density data may have been collected by aerial photography surveys. Both the count station and police accident data, while collected in real-time, will often have many months of lag between when the data is collected and when it becomes available to analysts.

Aerial photography surveys may be collected only for a few hours once every few years, and only at select locations where problems have been anecdotally noted. Temporary count stations are deployed only once every few years on select roadways, and they rarely collect data for more than 1 week. Because these data sources are not collected continuously, agencies must rely on modeling and forecasting techniques, which may not account for seasonal variations, incidents, or other dynamic conditions. Archived operations data, however, can help to reduce or even eliminate some of the challenges associated with the above data sources. Real-time flow data can be collected from sensors, toll tag readers, license plate readers, or probe data from third-party vendors (e.g., HERE, INRIX, or TomTom). Advanced transportation management system (ATMS) platforms also collect real-time event data and lane closure information from construction events, disabled vehicles, accidents, and special events.

Analytical Procedures

Operations data from probes or other high-density sensors can be used to identify congested areas or bottlenecks. When this data is integrated with ATMS event data, it becomes possible to determine whether the congestion is recurring or non-recurring.
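As a simple illustration of this kind of screening, the Python sketch below flags congested probe records and checks them against an event log. The file names, column names, and the 60-percent-of-reference-speed threshold are illustrative assumptions, not any agency's actual procedure.

    import pandas as pd

    # Hypothetical inputs: 1-minute probe speeds and an ATMS event log.
    probe = pd.read_csv("probe_speeds.csv", parse_dates=["timestamp"])
    events = pd.read_csv("atms_events.csv", parse_dates=["start_time", "end_time"])

    # Treat a record as congested when speed drops below 60 percent of the reference speed.
    probe["congested"] = probe["speed_mph"] < 0.6 * probe["reference_speed_mph"]

    def overlaps_event(row):
        """True if an ATMS event on the same segment spans this timestamp."""
        seg = events[events["segment_id"] == row["segment_id"]]
        return bool(((seg["start_time"] <= row["timestamp"]) &
                     (seg["end_time"] >= row["timestamp"])).any())

    congested = probe[probe["congested"]].copy()
    congested["non_recurring"] = congested.apply(overlaps_event, axis=1)

    # Share of congested minutes associated with an event, by segment.
    print(congested.groupby("segment_id")["non_recurring"].mean())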

Flow data from probes and dense sensor networks can sometimes be challenging to work with because of the sheer volume of data, complex linear referencing systems that might not match up with agency centerline road databases, and limited agency technical and resource capacity. Agencies with strong, integrated information technology (IT) departments, dedicated data analysts, sufficient data storage hardware, and access to statistical software, such as SAS (Statistical Analysis System), can handle this type of analysis given time and proper funding.

However, many agencies subcontract their analysis to consultants, universities, or transportation departments, or they purchase off-the-shelf congestion analytics software and data integration services tailored for transportation planners.

Impact

Regardless of the method used to analyze operations data, agencies often find that operations data allows them to identify problems based on more timely information, which instills greater confidence in the resulting decisions. Also, when probe-based data is used for the analysis, the data has greater coverage than traditional loop detectors, so there is more confidence that problem identification analysis covers "all roads" instead of just instrumented roads. Lastly, agencies that purchase off-the-shelf congestion analytics software find that the speed of analysis increases so significantly that resources are freed up and the agency is able to respond to requests much faster.

Case Study: Problem Identification and Confirmation at the New Jersey Department of Transportation

Overview

The New Jersey Department of Transportation (NJDOT) uses a combination of archived operations data and data processing and visualization tools to identify performance issues on the transportation system and develop easy-to-understand performance measures with visualization that speak to senior leadership, elected officials, and the public, in alignment with Fixing America's Surface Transportation (FAST) Act and organizational goals. NJDOT is leveraging a mixture of archived operations probe data, volume data, weather data, and event data to help identify congestion issues on the State's roadway network in part to create problem statements for consideration in its project delivery process.

The archive of these data sets has been made available to NJDOT planners through a web-based visual analytics tool known as the Vehicle Probe Project (VPP) suite. This suite gives planners the ability, with minimal effort, to automatically detect and rank the worst bottleneck locations in a county or entire state, determine if the congestion is recurring or non-recurring, determine causality, measure the economic impact, and produce graphics that can be inserted into analysis documents and presentation materials to be used with decision makers. The VPP suite is an outcome of a project directed by the I-95 Corridor Coalition that began in 2008 with the primary goal of providing Coalition members with the ability to acquire reliable travel time and speed data for their roadways without the need for sensors and other hardware. The VPP has evolved since that time and now also provides tools to support the use of the data and integrates several vehicle probe data sources.22

Data

NJDOT is using four archived data sets: probe, volume, event, and weather. The probe data is derived from a private-sector data provider and is archived at 1-minute intervals from a real-time feed that is originally used for travel time and queue estimation. The probe data is represented as the average speed across the length of a road segment for each 1-minute period.

The volume data used by the agency is derived from Highway Performance Monitoring System (HPMS) data that is also provided back to the agency through a third-party data provider.

The event data derives from NJDOT's OpenReach system, which is their ATMS and 511 platform. This data is entered by operations personnel into the ATMS platform in real time. Types of information recorded include the type of event (e.g., construction, accident, special event, disabled vehicle, police activity), location, and lane closure information.

The weather data is a mixture of both weather alerts and radar that derives from freely available National Weather Service data feeds.

All of the above-mentioned data sources are fed into the Regional Integrated Transportation Information System (RITIS) archive and the VPP suite of analytics tools housed outside of the agency. RITIS also began as a project under the I-95 Corridor Coalition and is an automated data fusion and dissemination system with three main components, including: (1) real-time data feeds, (2) real-time situational awareness tools, and (3) archived data analysis tools.

Agency Process

NJDOT identifies and analyzes congestion issues on the system and develops communication materials using the following basic process with the VPP suite of analytics tools:

  1. Rank the worst congested (Figure 19) locations within their state by:
    1. Road class.
    2. Time of day.
    3. Day of week.
    4. Date range.
    5. Geography (Statewide, by county, by zip code, by corridor, or by user-defined regions).
  2. Determine if the congestion is recurring or non-recurring (Figure 20).
  3. If non-recurring, identify causality through event data analysis (Figure 21), including:
    1. Weather events.
    2. Accidents.
    3. Construction.
  4. For each congested location or corridor, study the economic impact of that congestion through user delay cost analysis (Figures 22 and 23).
  5. Develop compelling reports and graphics that are presented to high-level decision makers to communicate the impact of the problem in a meaningful way (Figures 24 - 27).
Screen capture of the Bottleneck Ranking Tool from the Probe Data Analytics Suite developed by the University of Maryland Center for Advanced Transportation Technology Laboratory. There are three tabs open in the main window displaying different types of information.
Figure 19. Screenshot. Ranked bottleneck locations for all of New Jersey during March 2014.
Source: Bottleneck Ranking Tool from the Probe Data Analytics Suite. University of Maryland CATT Laboratory.

The bottleneck ranking tool allows an agency to "rank" the worst congested locations in any geographic region (multi-state, state, county(ies), city, etc.) based on archived data (e.g., probe-based speed data, weather data, and ATMS incident logs). The tool informs users how significant each congested location is, when the congestion occurs, and how the congestion may have been affected by incidents and events.

The graph in Figure 20 is one of the ways NJDOT views information on bottlenecks using the VPP suite. This graph shows the Monday through Friday pattern of congestion in the mornings (6:30AM through 10:30AM) with little to no congestion on the weekends. The spiral graphs are useful in identifying temporal patterns in the data. The repeated patterns that temporal data exhibits over longer time spans are often more illuminating than the linearity of the data and can be used in tandem with event and weather data to determine if the bottleneck location is recurring or non-recurring. The spiral graph renders data along a temporal axis that spirals outward at regular, user-defined intervals (e.g., days, weeks, or months). Bottleneck events are rendered as bands along the axis, creating visual clusters among data that contribute to patterns. The particular bottleneck shown in Figure 20 is rendered on a daily cycle—each revolution in the graph is a single day where the inner-most revolution is March 1, and the outermost revolution is March 31. Bottlenecks at this location are more prevalent in the morning between the hours of 6:30AM and 10AM, with a few less-severe bottlenecks occurring in the evening between 5PM and 6PM. The color denotes the maximum length of the bottleneck. The 5-day work week pattern also is shown in this bottleneck.

Screen capture of the Bottleneck Ranking Tool from the Probe Data Analytics Suite developed by the University of Maryland CATT Laboratory. The chart represents one of the ways NJDOT views information on bottlenecks using the VPP suite.
Figure 20. Chart. Time spiral showing when a bottleneck occurred and length during that time.
Source: Bottleneck Ranking Tool from the Probe Data Analytics Suite. University of Maryland CATT Laboratory.

The graphic in Figure 21 shows another way to identify congestion issues based on archived data. The graphic depicts a 20-mile section of I-95 in the State of Maryland; the right side represents northbound traffic while the left side represents southbound traffic. Green shading denotes uncongested conditions, while red and orange shadings represent more congested conditions. Daily occurrences of incidents and events are overlaid. Figure 21 shows a large number of events that contributed to abnormal (non-recurring) congestion in the northbound direction of travel during the morning.

Screenshot of the Bottleneck Ranking Tool from the Probe Data Analytics Suite. The tab shown in the screenshot is labeled "Congestion Scan - Using INRIX data."
Figure 21. Graph. Congestion scan graphic for I-95.
Source: Bottleneck Ranking Tool from the Probe Data Analytics Suite. University of Maryland CATT Laboratory.

Figure 22 displays the user delay costs for I-295 in New Jersey that contribute to an economic impact study of congestion. Each cell represents 1 hour of the day on a particular day of the week. The cells are color coded to represent hourly user delay in thousands of dollars. Figure 23 provides a closer look at a summary report of delay costs for a Friday at 5pm.


Screen capture shows the User Delay Cost Table from the Probe Data Analytics Suite.
Figure 22. Screenshot. User delay costs for a 17-mile stretch of I-295 in New Jersey (both directions of travel).
Source: User Delay Cost Table from the Probe Data Analytics Suite. University of Maryland CATT Laboratory.

Partial screenshot of the User Delay Cost Table from the Probe Data Analytics Suite. It provides a closer look at a summary report of delay costs for a Friday at 5pm.
Figure 23. Screenshot. View of I-295 showing detailed statistics in a mouse tooltip for a particular hour of the day.
Source: Probe Data Analytics Suite. University of Maryland CATT Laboratory.

The following figures are examples of reports and fact sheets that NJDOT developed based on archived operations data to communicate to decision makers and the public about problems on the system and the need for improvements.

Screenshots of graphics taken from the Vehicle Probe Project (VPP) Suite Bottleneck Dashboard and the Probe Data Analytics Suite. The particular presentation shown here is labeled "US Route 1/County 571 Signalized Intersection - Bottleneck Analysis." NJDOT gives such presentations to high-level decision makers to communicate the impact of the problem in a meaningful way.
Figure 24. Screenshot. Project confirmation presentation graphic to confirm a "high-need" signalized intersection in New Jersey.
Source: New Jersey Department of Transportation.

Screenshot depicts the Bottleneck & Congestion Scan Analysis for I-280 (from MP 14.7 to 15.9) Harrison Town, Hudson County using the VPP Suite. The figure includes screenshots taken from the Congestion Scan tool. With the help of the data analysis, an Assessment summary is provided.
Figure 25. Screenshot. Example of problem identification and project confirmation at New Jersey Department of Transportation.
Source: New Jersey Department of Transportation.

Composite illustration depicts different types of data taken from the VPP Suite
Figure 26. Screenshot. Example analysis from New Jersey Department of Transportation using the Vehicle Probe Project Suite, 511 New Jersey cameras, and the New Jersey Consortium for Middle Schools.
Source: New Jersey Department of Transportation

The evaluation in Figure 26 of the McClellan Street Interchange is part of the Port Authority of New York and New Jersey's Southern Access Roadway Project. This location is not considered a high priority for the NJDOT from a congestion relief perspective.

Tool

The VPP suite allows the user to generate a congestion impact factor, which is a simple multiplication of the average duration, average maximum length, and the number of occurrences. The impact factor helps the user filter out high-occurrence, but short-duration and short-length, bottlenecks that are of lower significance (e.g., those that might result from normal stoppages at traffic signals on arterials).
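A minimal sketch of the impact factor arithmetic, with invented bottleneck records purely for illustration, is shown below in Python.

    # Congestion impact factor = average duration x average maximum length x occurrences.
    bottlenecks = [
        # (location, avg_duration_hr, avg_max_length_mi, occurrences) - invented values
        ("Freeway bottleneck A", 1.5, 2.3, 42),
        ("Signalized arterial B", 0.3, 0.4, 95),  # frequent but short, lower significance
    ]

    for name, duration, length, count in bottlenecks:
        impact = duration * length * count
        print(f"{name}: impact factor = {impact:.1f}")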

Once NJDOT identifies a candidate location for congestion reduction, it can conduct an economic analysis within the VPP suite. NJDOT can select a study region and use existing statistics on the regional value of time for both commercial and passenger vehicles. The suite produces a series of interactive tables that show hourly, daily, and total:

  • User delay cost:
    • Total.
    • Per vehicle.
    • Per person.
  • Hours of delay:
    • Person-hours.
    • Vehicle-hours.
    • Per vehicle.
  • Volume:
    • Total.
    • Passenger.
    • Commercial.
  • Data validity metrics.
A concept graphic containing several visual elements, including a map of the Interstate 80 corridor showing the locations where bottlenecks commonly occur, a map showing locations where different types of traffic events take place, an itemized table of intersection locations with information relevant to traffic congestion, and a screenshot of a map shows the average speed across different locations on a highway.
Figure 27. Screenshot. Concept graphics explaining where high accident locations contribute to bottlenecks.
Source: New Jersey Department of Transportation.

After conducting their analysis within the VPP suite, NJDOT planners integrate some of the supporting graphics into slide shows, pamphlets, and posters that are used to help communicate with decision makers and the public. The embedded graphics are supported by explanatory text, call-out boxes, and other materials that combine to display all the information about the nature of the identified problem. Figures 24-27 are examples of some of these problem identification slides and pamphlets. It should be noted that many of these examples are taken from larger slide decks and other materials that, when combined, provide further background and backup materials to help explain the nature of a problem, and in some cases, recommended countermeasures.

Calculations Behind the Visualizations

The calculations behind all of the performance measures within the VPP suite are posted online.23 It is important to note that, as knowledge improves and technical capabilities mature, these calculations will frequently be updated and revamped within the suite to reflect the state of the art. One of the many advantages of using a web-based analytics tool is that the enhancements made to these calculations propagate instantaneously to every user, thus NJDOT and the relevant metropolitan planning organizations (MPOs) benefit from the new knowledge, and analysts are not burdened with having to re-implement complex functions multiple times within an organization. Another advantage is that these calculations are instantaneously and effectively standardized across multiple agencies and multiple departments, which makes it much easier to compare performance and ensure consistent and reliable analysis from state to state.

Impact and Lessons Learned

NJDOT has, within only a few short years, significantly increased the capabilities of their own planning staff, their consultant partners, researchers, and MPOs through the adoption of these archived data analytics tools. They are saving time and money while providing much-needed insights into mobility and the problem of congestion.

"Archived operations data may be the most important advance in data for planning purposes."

John Allen, Formerly of the NJDOT Planning Office

With the advancements of these archived data tools, NJDOT can now answer difficult questions that were either previously unanswerable, or could have taken up to 1 year to analyze and report. As a result of this tool, NJDOT is able to operate more efficiently, respond to media and public inquiries rapidly, use a data-driven approach to problem solving, and focus more time and effort in identifying solutions to complex transportation problems, rather than chasing down other data.

The improved analysis tools provide NJDOT with a stronger basis for the recommendations, and enable staff to bring a fresh perspective to previous studies, as appropriate. The VPP suite also is being used to evaluate projects in concept development to help decide whether projects should continue to advance through NJDOT's Project Delivery Process. This type of evaluation is used primarily for projects that were initiated outside of the normal problem statement or management systems screening processes and can be particularly helpful in ranking the priority of projects throughout a region so that political influences do not overly impact a project's likelihood of moving forward.

The greatest lesson learned from NJDOT's experience with archived operations data is that data alone will not solve its problems. The agency needs tools that empower a wide range of planners and analysts by removing the stress and burden of managing massive data sets and by relying on external processing power, making them more efficient and proactive analysts.

Development and Reporting of Mobility Performance Measures

Need

Performance measurement for mobility concerns is a key aspect of planning activities. It has been defined as a major step in the congestion management process (CMP) and is the basis for conducting sound performance management. In addition to fulfilling these requirements, undertaking performance measurement improves the quality of planning in several ways:

  • Provides accountability for investment decisions. Performance measures assist transportation professionals in gauging what benefit was gained for the cost. As such, they enable the investment process to be transparent.
  • Demonstrates that planners understand where the problems are. Instead of relying on partial information from models, planners now can identify exactly where problems exist. Therefore, performance measurement can be an effective tool for public relations; it is the first step in developing solutions to problems.
  • Helps to guide investment decisions and to understand the impacts of those decisions on travelers. As with the private sector, performance measures help agencies and firms improve their products and labor by continuously monitoring impacts of policies and investments.

Performance measures should not be undertaken in isolation. Rather, they should be part of a process that relates to the strategic vision an agency has. That vision is embodied in goals and objectives and in performance measures that are used to monitor progress toward meeting them (Figure 28). Performance measures also should permeate all aspects of the planning process and planning products (Figure 29).24 A high degree of commonality in measures should exist across planning stages, recognizing some measures will be unique to specific stages. To the degree possible, however, consistency in performance measures across the different stages should be maintained. Documentation of past performance, through the production of performance reports, is the focus of this section.

A matrix in which goals, objectives, performance measures, and performance targets are presented in the context of safety and mobility.
Figure 28. Diagram. Performance measures provide a quantifiable means of implementing goals and objectives from the transportation planning process.
Source: Transportation Research Board, National Cooperative Highway Research Program.


Diagram illustrates activities and applications that take place during planning along the entire time horizon, including past performance assessments, which are based on trend analysis; real-time applications, which are based on current versus historical performance; short-range plans, which incorporate evaluation of project alternatives; mid-range plans, which incorporate preliminary design and TIP estimates; and, finally, long-range plans, which are based on the system-wide performance of alternative plans.
Figure 29. Diagram. Performance measures should be carried across planning applications throughout the entire time horizon.
Source: Transportation Research Board, National Cooperative Highway Research Program.

Archived data enables performance measures to track trends in system performance and to identify deficiencies and needs. Performance measures can be classified into different types:

  • Outcome measures relate to how well the firm or agency is meeting its mission and stated goals. Outcome measures are sometimes called "efficiency" measures. In the private sector, outcome measures relate to the "bottom-line" (i.e., the financial viability of the firm: profit and revenue). For transportation agencies, outcomes are more related to the nature and extent of the services provided to transportation users.
  • Output measures relate to the levels of effort expended, or the scale or scope of actions. Output measures are often called "activity measures" because they relate to specific activities undertaken by agencies.
  • Input measures relate to the physical quantities of items used to undertake activities.

Figure 30 shows an example of a "program logic model" adapted from the general performance measurement literature for incident management. The model breaks down the three above categories (outcome, output, and input measures) into additional categories. "External influences" relate to the forces that create changes in demand that are outside the control of transportation agencies (i.e., changes in economic activity and demographic and migration trends). The main point of Figure 30 is that a comprehensive performance management program includes performance measurement at all levels and that a clear linkage exists between successively higher levels. That is, a change in a level leads to a change in the higher levels.

Performance measures for both congestion and reliability should be based on measurements of travel time. Travel times are easily understood by practitioners and the public, and are applicable to both the user and facility perspectives of performance.

Reliability is a key aspect of mobility that should be measured. There are two widely held ways that reliability can be defined. Each is valid and leads to a set of reliability performance measures that capture the nature of travel time reliability. Reliability can be defined as:

  • The variability in travel times that occurs on a facility or for a trip over the course of time.
  • The number of times (trips) that either "fail" or "succeed" in accordance with a predetermined performance standard or schedule.

In both cases, reliability (more appropriately, unreliability) is caused by the interaction of the factors that influence travel times: fluctuations in demand (which may be due to daily or seasonal variation, or by special events), traffic control devices, traffic incidents, inclement weather, work zones, and physical capacity (based on prevailing geometrics and traffic patterns). These factors produce travel times that are different from day-to-day for the same trip.

Diagram shows an example of a "program logic model" adapted from the general performance measurement literature for incident management. The model breaks down the three categories (outcome, output, and input measures) into additional categories.
Figure 30. Diagram. Program logic model adapted for incident management.
Source: Federal Highway Administration, Operations Performance Measurement Workshop Presentation.

From a measurement perspective, travel time reliability is quantified from the distribution of travel times, for a given facility/trip and time period (e.g., weekday peak period), that occurs over a significant span of time; 1 year is generally long enough to capture nearly all of the variability caused by disruptions. Figure 31 shows an actual travel time distribution derived from roadway detector data, and how it can be used to define reliability metrics.25 The shape of the distribution in Figure 31 is typical of what is found on congested freeways; it is skewed toward higher travel times. The skew is reflective of the impacts of disruptions, such as incidents, weather, work zones, and high demand on traffic flow. Therefore, most of the useful metrics for reliability are focused on the right half of the distribution; this is the region of interest for reliability. Note that a number of metrics are expressed relative to the free-flow travel time, travel time under low traffic-flow conditions, which becomes the benchmark for any reliability analysis. The degree of unreliability then becomes a relative comparison to the free-flow travel time.26 Table 7 shows a suite of reliability metrics. All of these are based on the travel time distribution.27

Chart depicts travel time distribution derived from roadway detector data and shows how this data can be used to define reliability metrics. The shape of the distribution curve is skewed toward higher travel times and is reflective of the impacts of disruptions, such as incidents, weather, work zones, and high demand on traffic flow.
Figure 31. Graph. Travel time distribution is the basis for defining reliability metrics.28
Source: Transportation Research Board, Strategic Highway Research Program 2, Report S2-L08-RW-1.

Table 7. Recommended set of reliability performance measures from Strategic Highway Research Program 2 Project L08.29
Core Measures
Planning time index (PTI): 95th percentile travel time index (TTI), computed as the 95th percentile travel time divided by the free-flow travel time.
80th percentile TTI: 80th percentile travel time divided by the free-flow travel time.
Semi-standard deviation: The standard deviation of travel time pegged to the free-flow travel time rather than the mean travel time.
Failure/on-time measures: Percent of trips with space mean speed less than 50 miles per hour (mph), 45 mph, and 30 mph.
Supplemental Measures
Standard deviation: Usual statistical definition.
Misery index (modified): The average of the highest 5 percent of travel times divided by the free-flow travel time.
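The following Python sketch shows how the core and supplemental measures in Table 7 could be computed from an archived set of travel times for one segment. The input file, the 15th-percentile stand-in for the free-flow travel time, and the segment length are assumptions; a production calculation would also weight observations by volume, as discussed under Analytical Procedures below.

    import numpy as np

    travel_times = np.loadtxt("segment_travel_times.txt")     # seconds, one value per period
    free_flow_time = np.percentile(travel_times, 15)          # placeholder free-flow estimate

    pti = np.percentile(travel_times, 95) / free_flow_time            # planning time index
    tti_80 = np.percentile(travel_times, 80) / free_flow_time         # 80th percentile TTI
    semi_std = np.sqrt(np.mean((travel_times - free_flow_time) ** 2)) # pegged to free flow
    worst_5pct = np.sort(travel_times)[-max(1, len(travel_times) // 20):]
    misery = np.mean(worst_5pct) / free_flow_time                     # modified misery index

    segment_length_mi = 2.0                                   # assumed segment length
    speeds = segment_length_mi / (travel_times / 3600.0)      # space mean speeds, mph
    pct_slower_than_45 = 100 * np.mean(speeds < 45)           # failure/on-time measure

    print(pti, tti_80, semi_std, misery, pct_slower_than_45)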

Relevant Data

Prior to the appearance of archived operations data, the production of congestion performance reports by planning agencies was rare. Where such reports existed, they were based on results from travel demand forecasting (TDF) models or small-sample "floating car" travel time runs. The measurement of reliability was nonexistent.

The data to accomplish performance reporting starts with field measurements of travel times or speeds from intelligent transportation system (ITS) roadway detectors or vehicle probes. At a basic level, this data can be used to produce mobility performance measures. However, more sophisticated and in-depth measures can be constructed with the integration of other data:

  • Traffic volumes (volumes are paired with speed measurements when ITS detectors are used but there are no companion volume measurements from the current generation of vehicle probe data).
  • Incident data from incident management logs (includes the nature, severity, duration, and blockage characteristics of incidents).
  • Work zone data (similar to incident data).
  • Weather data.

Analytical Procedures

Data preparation. Routine quality control (QC) procedures need to be applied to the data. These are documented elsewhere.30 Travel time data is extremely voluminous and requires special software to be processed. Incident data is much more manageable and can be analyzed with spreadsheets.

Calculation methods. Detailed guidance on processing the raw travel time data into measurements, including QC, may be found in the literature.31 In their simplest form, the steps are as follows:

  1. Compute travel time for individual links (if not already provided). Compute these by the time periods of interest (e.g., PM peak period).
  2. Compute volume-weighted moments of the travel time distribution for each link (e.g., mean and various percentiles). Each observation in the distribution should be an individual travel time record. This assumes that volumes are available at the same temporal and spatial level as the travel time measurements.32 If volumes are not available, then a straight average is used. Note that an "observation" could in fact be an average value, for example, the average travel time for a 5-minute period. In such cases, some variability has been lost due to the aggregation; if travel times from individual vehicles were used, the resulting reliability metrics would indicate higher variability.
  3. Compute additional metrics, such as the travel time indices, at the link level.
  4. Compute performance measures at aggregated spatial levels, such as facility and area wide, by taking the volume-weighted average of the metrics previously computed.

If results are desired at the facility level, a more statistically sound procedure is to compute the travel time over the entire facility by summing up the individual link travel times from Step 1 and using the aggregated travel times as the basis for the travel time distribution. In this case, the weights become the sum of the volumes for the links that comprise the facility. In both cases, the results achieved by volume-weighting will be different than not weighting. The difference will be more pronounced if both low-volume and high-volume periods are used in the calculation.
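A condensed Python sketch of Steps 1 through 4, including the facility-level aggregation described above, might look like the following. The input file and column names are assumptions, and the volume-weighted percentile is a simple interpolation rather than any agency's official method.

    import numpy as np
    import pandas as pd

    # Assumed columns: link_id, timestamp, travel_time_s, volume (paired 5-minute records).
    df = pd.read_csv("link_records.csv", parse_dates=["timestamp"])

    def weighted_percentile(values, weights, q):
        order = np.argsort(values)
        values = np.asarray(values)[order]
        cum = np.cumsum(np.asarray(weights)[order])
        return np.interp(q / 100.0, cum / cum[-1], values)

    # Link level: volume-weighted mean and 95th percentile travel time.
    link_stats = df.groupby("link_id").apply(lambda g: pd.Series({
        "mean_tt_s": np.average(g["travel_time_s"], weights=g["volume"]),
        "tt95_s": weighted_percentile(g["travel_time_s"], g["volume"], 95)}))

    # Facility level: sum link travel times within each period, then build the
    # distribution of facility travel times and weight by total facility volume.
    facility_tt = df.groupby("timestamp")["travel_time_s"].sum()
    facility_vol = df.groupby("timestamp")["volume"].sum()
    facility_tt95 = weighted_percentile(facility_tt, facility_vol, 95)

    print(link_stats.head())
    print("Facility 95th percentile travel time (s):", round(facility_tt95, 1))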

The calculation of certain performance measures that are defined by exposure requires special treatment. For example, delay depends on the number of vehicles or persons exposed to the delay time. The most accurate way of computing these measures is from paired volume-travel time measurements, i.e., measurements that were collected at the same point in time and space. The current generation of privately supplied vehicle probe data does not contain volume estimates, only speed or travel time. Computing exposure-based measures from this data requires using volume data from another source, namely, average traffic counts. Users should be aware that mixing average traffic counts with vehicle probe data produces only crude estimates for exposure-based measures.
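The sketch below illustrates the crude pairing described above: probe travel times matched to average hourly counts to produce vehicle-hours of delay and a user delay cost. The counts file, value-of-time figures, truck share, and 5-minute record interval are all assumptions for illustration.

    import pandas as pd

    probe = pd.read_csv("corridor_travel_times.csv", parse_dates=["timestamp"])  # travel_time_s
    counts = pd.read_csv("hourly_counts.csv")          # columns: hour, avg_volume

    free_flow_s = probe["travel_time_s"].quantile(0.05)
    probe["hour"] = probe["timestamp"].dt.hour
    merged = probe.merge(counts, on="hour")

    records_per_hour = 12                               # assumes 5-minute probe records
    merged["veh_hr_delay"] = ((merged["travel_time_s"] - free_flow_s).clip(lower=0)
                              / 3600.0 * merged["avg_volume"] / records_per_hour)

    VOT_PASSENGER, VOT_TRUCK, TRUCK_SHARE = 17.0, 52.0, 0.08    # $/hr and share, assumed
    vot = (1 - TRUCK_SHARE) * VOT_PASSENGER + TRUCK_SHARE * VOT_TRUCK
    print("Total user delay cost ($):", round(merged["veh_hr_delay"].sum() * vot))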

The analysis of incident, work zone, and weather characteristics data is much more straightforward; simple summaries usually suffice. Integrating incident and work zone data with travel time data is more problematic. First, the data must be matched temporally and spatially. The more difficult task is to assign congestion to a specific incident, work zone, or weather event. On facilities where recurring congestion rarely occurs, it is safe to assume that any congestion that occurs is due to an event. Where recurring congestion is routine, the assignment becomes much more difficult because some congestion would have occurred in the absence of the event.

Display of results. A variety of methods are used to display performance measures, including the display of trends:

  • Figure 32 shows one example of how to combine travel time and event data into a single graphic for a corridor. This analysis shows the dramatically negative effect that disruptions can have on travel times when they occur.33
  • Figure 33 shows how the Metropolitan Washington Council of Governments (MWCOG) depicts travel time reliability trends over time. This information is part of MWCOG's program to communicate problems to stakeholders and to be accountable for investment decisions. The data source for this analysis is vehicle probe data.34
  • Figure 34 shows how the Georgia Department of Transportation (GDOT) displays trends in incident service times. This information is used to monitor the effectiveness of new incident management policies and practices. The data was collected by GDOT as part of their incident management system.
Graph shows one example of how to combine travel time and event data into a single graphic for a corridor. Frequency is plotted on the vertical axis while Travel Time is plotted on the horizontal axis. The trends are compared for conditions including no rain, rain, crash effects, and incident effects. This analysis shows the dramatically negative effect that disruptions can have on travel times when they occur.
Figure 32. Graph. Mean travel times under rain, crash, or non-crash traffic incident conditions for I-5 southbound, North Seattle Corridor, Tuesdays through Thursdays, 2006.35
Source: Transportation Research Board, Strategic Highway Research Program 2, Report S2-L03-RR-1.

A series of four graphs display how the Metropolitan Washington Council of Governments depicts travel time reliability trends over time. Each graph provides Planning Time Index values across the 12 months of a year for the years 2010, 2012, 2013, 2014, and 2015. The four graphs show the planning time index on all roads, on the interstate system, on the non-interstate National Highway System, and on transit-significant roads.
Figure 33. Graph. Reliability trends reported by the Metropolitan Washington Council of Governments.
Source: Pu, Wenjing, Presentation at 5th International Transportation Systems Performance Measurement and Data Conference, June 1-2, 2015, Denver, Colorado.


Two horizontal bar charts show how the Georgia Department of Transportation displays trends in incident service times. In each chart, the vertical axis shows months starting from November 2013 and ending at October 2014. The horizontal axis displays time in the units of Hours: Minutes. Each bar is color-coded into the 3 parts: response time (blue), roadway clearance time (red), and incident clearance time (Green).
Figure 34. Charts. Incident timeline trends reported by the Georgia Department of Transportation for the Atlanta region.
Source: Georgia Department of Transportation, H.E.R.O. Monthly Statistics Report. Available at: http://www.dot.ga.gov/DS/Travel/HEROs.
  • Figure 35 shows how the Performance Measurement System (PeMS) displays congestion-by-source information for a corridor. PeMS is a data archival and diagnostic tool that is used by many agencies to identify the source of problems and to match countermeasures to them. The travel time data source for this particular analysis was ITS detectors.35
Screenshot shows how the Performance Measurement System (PeMS) displays congestion-by-source information for a corridor. On the top left of the image is a Google Maps snapshot of Caltrans District 12: Orange County, SR-91, which is the area under study. To the right is an area labeled Congestion: Overview that depicts heat maps for each day of the week from Sunday through Saturday for an 18-day period.
Figure 35. Screenshot. Heat map of congestion on a corridor using the Performance Measurement System.
Source: California Department of Transportation, Performance Measurement System.

Impact

In addition to demonstrating accountability, providing transparency, and promoting effective public relations, developing performance measures is the starting point for several other agency functions. These include:

  • Target setting - Understanding the current state of the system is crucial to understanding what is achievable in the future. In addition, the same data used to measure current performance can be used as input to forecasting models.
  • Project and program evaluations - Evaluating implemented projects is revealing in several ways. A library of impact factors can be developed for future project planning. Benefits can be highlighted as success stories, and lessons can be learned from evaluations where expected benefits did not materialize.

Case Study: Tracking Performance Measures at Delaware Valley Regional Planning Commission

Overview

The Delaware Valley Regional Planning Commission (DVRPC) is the metropolitan planning organization for the greater Philadelphia region. The region includes five counties in Pennsylvania and four in New Jersey. The most congested freeway miles in the region are along major routes into and out of the City of Philadelphia. DVRPC uses archived operations data to inform a variety of planning efforts, including problem identification and confirmation, calibrating regional and corridor models, conducting before and after studies, and developing and reporting transportation performance measures for the regional congestion management process (CMP) and long-range transportation plan (LRTP).

Data

DVRPC primarily uses the VPP data purchased by the I-95 Corridor Coalition for developing performance measures based on archived operations data. It is currently exploring other vehicle probe-based data sources, including the National Performance Management Research Data Set (NPMRDS), particularly for tracking and reporting progress toward the system performance measures. However, given DVRPC's concerns with the quality of the NPMRDS data, DVRPC is working with its partners at the Pennsylvania DOT and NJDOT to use the VPP data instead. Incident data collected by agency traffic management systems also is analyzed.

Agency Process

DVRPC uses archived operations data to track the performance of the regional transportation network, identify and prioritize congested locations, and justify funding the appropriate projects. In the MPO's 2015 CMP, DVRPC used two measures based on archived operations data: travel time index (TTI) and planning time index (PTI). DVRPC selected these measures because it anticipated that they were likely to become part of the FAST Act reporting requirements at the time of the CMP update. DVRPC used VPP data to analyze the TTI and PTI for all limited-access highway and arterial road segments with data coverage at the time of the update. It then used the results of TTI and PTI analysis, along with other performance measures, to identify congested corridors throughout the region. These congested corridors were used as a basis for selecting corridors for investment in ITS infrastructure through the Transportation Operations Master Plan.36 The designation of CMP corridor was also used as an evaluation criterion for projects to be programmed in the LRTP and TIP. In addition, DVRPC has developed several outreach documents based on archived operations data to communicate its approach to mitigating congestion with the general public and elected officials (See Figures 36 and 37 for an example).

Screen capture of the front and back covers of an informational leaflet designed to inform decision-makers; planners, engineers, and agency partners; and the public at large about what they can do to help reduce congestion.
Figure 36. Image. Delaware Valley Regional Planning Commission's outreach document is based on archived data.
Source: Delaware Valley Regional Planning Commission. Available at: http://www.dvrpc.org/reports/NL13011.pdf.

Screen capture of the inside pages of an informational leaflet designed to inform decision-makers; planners, engineers, and agency partners; and the public at large about what they can do to help reduce congestion.
Figure 37. Image. Inside view of Delaware Valley Regional Planning Commission's outreach document.
Source: Delaware Valley Regional Planning Commission. Available at: http://www.dvrpc.org/reports/NL13011.pdf.

In addition to developing and reporting performance measures, DVRPC participated in an evacuation planning effort for the City of Philadelphia. The project involved extracting a microsimulation model of the city center from DVRPC's travel demand model, and then calibrating the model with the VPP's archived travel time data on key arterial, collector, and local roadways in the city. The model was used to analyze how to best evacuate buildings, which routes all modes would likely take, which intersections would cause major bottlenecks, and other factors that may impact efficient evacuation of the city during a major event. DVRPC also was using archived operations data to spot-check calibration of the regional model and has begun an effort to use it more extensively in calibration.

Impact and Lessons Learned

Overall, access to archived operations data has been a significant benefit to planning at DVRPC, especially because the DVRPC staff have access to a user-friendly interface through the VPP Suite. The VPP Suite allows for both quick analysis at the corridor level and more complex regional analyses. The archived operations data enhances the agency's knowledge of key travel attributes, including peak hour travel times and reliability, but DVRPC has had challenges. One challenge has been working with the Traffic Message Channel network on which the archived speed and travel time data is reported and translating data from the Traffic Message Channel network to the base GIS linework for analysis with other data sets.

DVRPC manages eight traffic incident management task forces (IMTFs) throughout the region. Incident data is reported at quarterly meetings of these corridor-based IMTFs. Incident durations drawn from TMC data help the task forces identify which incidents should be discussed in more detail in post-incident debriefs. Occasionally, the VPP data is used to show the congestion impacts of incidents.

DVRPC, a bi-state MPO, has several hurdles to overcome to report incident data on a regional basis. DVRPC would need to reconcile non-standardized geospatial referencing of the data, expend significant resources fusing data from two States, and clean up inconsistencies. The reporting of incident data would need to be standardized, as well as geo-referenced, across all corridors and agencies to support a complete regional picture. For example, incident duration, roadway clearance times, and other data may be entered differently depending on the agency, and NJDOT and Pennsylvania DOT (DVRPC's two primary resources for incident data) do not necessarily capture this data in the same way. DVRPC has considered cross-checking the incident data against police crash databases, although this data lags by 6 months or more and does not contain municipal crash reports. The integration of police computer-aided dispatch (CAD) data with agency traffic incident data would help create regional incident data.

DVRPC also faces challenges with inconsistent data reporting methods and formats with other types of archived operations data because it covers a bi-state area. Agencies in the region have various capabilities to collect, analyze, and share archived data, as well. The transit agencies in the region are an example of this discrepancy, which makes it difficult to perform an analysis for the entire region.

Another barrier to using archived operations data is the sheer amount of data that needs to be processed. The probe data for the DVRPC region amasses nearly 3.4 billion records per year. Working with this data is extremely difficult without powerful tools and hardware.

Use of Archived Operations Data for Input and Calibration of Reliability Prediction Tools

Need

Including travel time reliability as an aspect of mobility is becoming an important consideration for transportation agencies. Reliability is valued by users as an additional factor in how they experience the transportation system. Transportation improvements affect not only the average congestion experienced by users, but reliability as well. Therefore, adding reliability to the benefit stream emphasizes the positive impact of improvements.

Archived operations data enables the direct measurement of reliability, and many agencies have added reliability as a key performance measure. However, the ability to forecast reliability has lagged behind the ability to measure it. Being able to predict the reliability impacts of improvements, in addition to traditional notions of performance, provides a more complete perspective for planners. It also provides continuity in tracking the performance of programs because the tracking includes measures of reliability.

Fortunately, the Strategic Highway Research Program 2 (SHRP 2) produced several tools for predicting travel time reliability.37, 38 These tools require several inputs that can be developed from archived operations data and are used here to provide specific examples of how archived data can be used in conjunction with advanced modeling procedures. Several other modeling platforms—as well as those likely to emerge in the near future—will require similar levels of inputs.

Relevant Data

Data for input and calibration of reliability prediction tools include travel time (or speed) data, preferably continuously collected; volume data; and incident characteristics data.

Analytical Procedures

Procedures to predict travel time reliability are just now emerging in the profession (as of this writing). There are two basic approaches to predicting reliability: (1) statistical methods that develop predictive equations for reliability as a function of either congestion source factors (e.g., demand, capacity, incident characteristics, and weather conditions) or model-produced values for the average or typical condition, and (2) direct assessment of reliability by creating a travel time distribution that reflects varying conditions from day-to-day.

Impact

Reliability prediction tools are needed by analysts as a way to capture the impacts of operations strategies that are primarily aimed at improving travel time reliability. Many agencies have identified the need to improve reliability, yet have not had the means to predict it as a benefit of transportation improvements. Reliability tools provide a means for evaluating operations projects side-by-side with capacity expansion projects. When applied to planning activities, the tools will help transportation agencies better identify and implement strategies to reduce the variability and uncertainty of travel times for commuters and other travelers as well as the freight industry.

Case Study: Developing Data for Reliability Prediction Tools

Overview

A hypothetical MPO is embarking on an update to its LRTP. It has previously developed a scoring process for ranking potential projects in the plan, and it wants to add travel time reliability as one of the scoring factors. However, the MPO's TDF model only produces average speed estimates, and these estimates are artifacts of the traffic assignment process that are not reflective of actual speeds.

Data

The MPO has ITS detector data (volumes and speeds) on regional freeways available. It also has the NPMRDS travel time data for National Highway System (NHS) roadways in the region. Incident data on freeways is available via the State-run traffic management center.

Agency Process

The TDF model produces outputs by individual network link in a readable file. The time periods are a 2-hour AM peak and a 3-hour PM peak. Outputs include assigned traffic volumes, speeds, and delay for each of the periods.

Specific activities that can be undertaken with archived operations data to improve reliability prediction models include:

  • Customize a volume-delay function for estimating recurring delay. Archived ITS detector data (volumes and speeds) can be used to develop a freeway function. This would involve removing observations that were influenced by incidents, weather, and work zones, then computing the volume to capacity (v/c) ratio for each speed measurement. Alternately, the volume-delay function could be developed using the speeds that include all sources of congestion.
  • Customize the reliability prediction relationship. Archived ITS detector or vehicle probe data is used for this purpose. For a set of study links, the mean travel time index is computed over the course of a year along with the reliability metrics of interest. Curve fitting is then performed to derive the relationships (a sketch of this curve fitting follows the list).
  • Develop relationships between LRTP project types and the factors considered by the recurring and incident delay relationships, especially for operations strategies. This is done by searching the literature, not by using archived data, but it is a critical step. (Archived data may have been used by the researchers to develop these relationships, however.) The factors that can be influenced are free-flow speed, volume (demand), capacity, incident duration, and incident frequency.
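The sketch below illustrates the first two bullets: fitting a Bureau of Public Roads (BPR)-style volume-delay curve and a simple planning time index relationship from archived link summaries. The file name, column names, and functional forms are assumptions, not the SHRP 2 formulations.

    import numpy as np
    import pandas as pd
    from scipy.optimize import curve_fit

    # Assumed columns: vc_ratio, speed_mph, ffs_mph, mean_tti, pti (one row per link-period).
    links = pd.read_csv("link_year_summary.csv")

    # Volume-delay: travel time ratio = 1 + alpha * (v/c)^beta (BPR form).
    tt_ratio = links["ffs_mph"] / links["speed_mph"]
    bpr = lambda vc, alpha, beta: 1.0 + alpha * np.power(vc, beta)
    (alpha, beta), _ = curve_fit(bpr, links["vc_ratio"], tt_ratio, p0=[0.15, 4.0])

    # Reliability relationship: predict PTI from the mean travel time index.
    rel = lambda tti, a, b: 1.0 + a * np.power(np.maximum(tti - 1.0, 0.0), b)
    (a, b), _ = curve_fit(rel, links["mean_tti"], links["pti"], p0=[1.0, 1.0])

    print(f"Volume-delay: alpha={alpha:.3f}, beta={beta:.2f}")
    print(f"Reliability:  PTI = 1 + {a:.2f} * (TTI - 1)^{b:.2f}")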

The staff determined that operations strategies would best be handled by creating "bundles" of investment scenarios. These were combinations of strategies at different deployment intensity levels. Benefits and costs for multiple investment scenarios (bundles) were developed for further study. Then, the resulting system performance forecasts from the TDF model and the postprocessor were used as part of a public engagement program for the LRTP, asking citizens to choose their preferred level of investment and corresponding performance outcome.

For example, travel time reliability was a performance measure in the operations bundles of projects. Investing in advanced traffic management systems does not produce a quantifiable benefit when using the TDF models alone, because those models only consider demand, capacity, and recurring delay. Focusing instead on reliability produced a quantifiable benefit using the postprocessor. Those benefits were then paired with cost estimates for the ATMS and other operational improvements in the project scoring process. This approach also allowed the public and elected officials to make informed investment decisions when finalizing the project list.

Developing Capacity Values (Freeways)

Use of localized capacity values is extremely significant in enabling forecasting models to replicate field conditions. Data from ITS detectors that produce simultaneous measurements of volume and speed can be used in two ways to determine the capacity of roadway segments, especially bottlenecks. First, classic speed-flow plots can be constructed. For this exercise, we assume that the volume and speed measurements have been archived in 5-minute time intervals (a worked sketch of the steps follows the list):

  • Aggregate the 5-minute measurements to 15-minute intervals. This is done for consistency with the Highway Capacity Manual's (HCM's) capacity definition. Take only 15-minute intervals with all three 5-minute intervals present.
  • For each directional location, sum the volumes and compute the volume-weighted harmonic average speed.
  • Convert 15-minute volumes to volume per hour per lane by first dividing by the number of lanes at the location and then multiplying by four.
  • Plot the results.

Figures 38 through 43 show the results obtained by applying the above procedure to data from I-4 in Orlando, Florida. Several observations can be made based on these plots:

  • The general shape of the plots conforms to those shown in the HCM and produced by other researchers. There is a small amount of "noise" in the data, as shown by points scattered away from the general trend lines. These could be errors in the measurements or indicative of transitional flow, which can vary in its traffic flow parameters.
  • The flow values are in vehicles per hour per lane, not passenger cars per hour per lane (pcphpl), as specified in the HCM. That is, trucks are embedded in the flow measurements, and an average value for percent of trucks must be used to expand vehicles to passenger cars.
  • The large number of observations grouped around a horizontal line between 60 and 70 mph is indicative of the free-flow speed, although the next section will identify a more direct method for computing it.
  • Capacity values are indicated by the right end "nose" of the curve. Capacity varies significantly throughout this highway section. A more formal analysis appears in Table 8. Selecting the maximum value is not recommended as the capacity value due to possible inaccuracies in the measurements, as can be seen by the spread between the 99.5 percentile and the maximum value. Therefore, a capacity value between the 99.5 percentile and the maximum value should be selected. For illustration, we have selected the midpoint.

Table 8. Upper end of speed-flow distribution, I-4, Orlando, Florida.

Location | 95th percentile | 99th percentile | 99.5 percentile | Maximum | Selected Capacity
(All values in vehicles per hour per lane.)
I-4 at EE Williamson_EB | 1,687 | 1,889 | 1,944 | 2,101 | 2,023
I-4 at EE Williamson_WB | 1,704 | 1,905 | 1,959 | 2,119 | 2,039
I-4 at Kaley Ave_WB | 1,445 | 1,554 | 1,594 | 1,752 | 1,673
I-4 at Michigan Ave_EB | 1,515 | 1,619 | 1,652 | 1,836 | 1,744
I-4 at Wymore_EB | 1,999 | 2,116 | 2,144 | 2,247 | 2,195
I-4 at Wymore_WB | 1,892 | 2,051 | 2,101 | 2,315 | 2,208
I-4 East of Wymore_EB | 1,991 | 2,099 | 2,124 | 2,231 | 2,177
I-4 East of Wymore_WB | 1,915 | 2,071 | 2,117 | 2,344 | 2,231
I-4 West of SR 408_EB | 1,500 | 1,633 | 1,677 | 1,820 | 1,749

Scatter graph depicts a Speed-Flow diagram that generally conforms to those shown in the Highway Capacity Manual and produced by other researchers.  The highest volume is approximately 1800 VPHPL while the highest speed is just less than 80 mph.
Figure 38. Graph. Plot of speed and vehicles per hour per lane on I-4 at Kaley Avenue, westbound.

Scatter graph depicts a Speed-Flow diagram. The highest volume is approximately 1900 VPHPL while the highest speed is around 64 mph.
Figure 39. Graph. Plot of speed and vehicles per hour per lane on I-4 at Michigan Avenue, westbound.

Scatter graph depicts a Speed-Flow diagram. The highest volume is close to 2300 VPHPL while the highest speed is approximately 70 mph.
Figure 40. Graph. Plot of speed and vehicles per hour per lane on I-4 at Wymore, eastbound.

Scatter graph depicts a Speed-Flow diagram. The highest volume is around 2300 VPHPL while the highest speed is close to 70 mph.
Figure 41. Graph. Plot of speed and vehicles per hour per lane on I-4 at Wymore, westbound.

Scatter graph depicts a Speed-Flow diagram. The highest volume is about 2300 VPHPL while the highest speed is approximately 80 mph.
Figure 42. Graph. Plot of speed and vehicles per hour per lane on I-4 east of Wymore, eastbound.

Scatter graph depicts a Speed-Flow diagram. The highest volume is a little less than 2400 VPHPL while the highest speed is roughly 70 mph.
Figure 43. Graph. Plot of speed and vehicles per hour per lane on I-4 east of Wymore, westbound.

A second approach to determining capacity is heuristic: adjust (calibrate) the capacity value in the model to match local traffic conditions. Here, either speed data from ITS detectors or vehicle probes can be used; volumes are not used in the calibration but must be input in the models separately:

  • From the speed data in the corridor, identify existing bottlenecks. These will be locations (detectors or links) where queues are building upstream while downstream speeds are relatively high, unless influenced by another bottleneck (Figure 44).
  • Set the bottleneck segment's capacity to the HCM estimate based on the facility free-flow speed (e.g., 2,400 pcphpl for a 70-mph free-flow speed (FFS)). Run the model.
  • If a queue does not result, lower the capacity value in increments of 50 pcphpl until a queue results and speeds in the queue roughly match the field data, as sketched below.
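
A minimal sketch of this trial-and-error loop is shown below. The run_model function is purely a placeholder for whatever demand or simulation model the agency actually runs; it is assumed here to report whether a queue formed and the resulting speed within the queue.

    def calibrate_bottleneck_capacity(run_model, field_queue_speed_mph,
                                      start_capacity=2400, step=50,
                                      tolerance_mph=5, floor=1500):
        """Lower the coded capacity (pcphpl) until the modeled queue roughly
        matches field conditions. run_model(capacity) is a hypothetical stand-in
        for the agency's own model run; it is assumed to return
        (queue_formed, queue_speed_mph)."""
        capacity = start_capacity
        while capacity >= floor:
            queue_formed, queue_speed = run_model(capacity)
            if queue_formed and abs(queue_speed - field_queue_speed_mph) <= tolerance_mph:
                return capacity
            capacity -= step
        raise ValueError("No capacity above the floor reproduced the observed queue.")
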
Heat map displays speed contours for a freeway segment from 10/01/2004 00:00:00 to 10/31/2004 23:59:59. The days included are Tuesday, Wednesday, and Thursday. The speed values range from 25-75 mph.
Figure 44. Graph. Speed contours from archived intelligent transportation system detector data are useful for identifying bottleneck locations.
Source: System Metrics Group and Cambridge Systematics, unpublished presentation to California Department of Transportation, August 26, 2005.

Determining Free Flow Speed (FFS)

Both ITS detector and vehicle probe data can be used to determine the FFS of a facility. A variety of approaches can be used. The first method uses only speed data and can be applied to both freeways and signalized highways. It is based on selecting the 85th percentile speed under very light traffic conditions, defined here as the morning hours on weekend days (e.g., from 6:00 AM to 10:00 AM). The second method uses freeway ITS detector volumes and speeds. It determines the 85th percentile speed when volumes are low and speeds are relatively high (to weed out the low volumes that occur in extreme queuing conditions).

Table 9 compares these approaches for the freeways in the Atlanta, Georgia, region. The values in this table are computed for the entire facility, which includes multiple segments. To calculate this value, the space mean speed (SMS) over the entire facility is first calculated for each of the smallest time intervals in the data (in this case, 5 minutes). With detector data, because companion volumes are available, SMS is the facility vehicle-miles of travel (VMT) divided by the facility vehicle-hours of travel. With probe data, SMS is the total distance divided by the total travel time in hours. Then, a distribution is developed from those facility space mean speeds.
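
A minimal sketch of the weekend light-traffic method is shown below, again with illustrative column names (timestamp, segment_miles, volume, speed). For detector data, the per-interval facility speed is VMT divided by vehicle-hours, and the FFS is then taken as the 85th percentile of those speeds during weekend mornings.

    import pandas as pd

    def facility_ffs_weekend(df, start_hour=6, end_hour=10, percentile=0.85):
        """Estimate facility free-flow speed from 5-minute detector records."""
        df = df.copy()
        df['vmt'] = df['volume'] * df['segment_miles']    # vehicle-miles per interval
        df['vht'] = df['vmt'] / df['speed']               # vehicle-hours per interval
        # Facility space mean speed for each 5-minute interval: VMT / VHT.
        by_interval = df.groupby('timestamp')[['vmt', 'vht']].sum()
        sms = by_interval['vmt'] / by_interval['vht']
        # Restrict to very light traffic: weekend mornings (6:00 AM to 10:00 AM).
        idx = sms.index
        light = sms[(idx.dayofweek >= 5) & (idx.hour >= start_hour) & (idx.hour < end_hour)]
        return light.quantile(percentile)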

Table 9. Facility free-flow speed calculations, Atlanta, Georgia, area freeways.
Section / Weekend FFS Calculation Method (mph) / Low-Volume FFS Calculation Method (mph)
I-75 Northbound from I-285 to Roswell Road 72 70
I-75 Southbound from I-285 to Roswell Road 69 68
I-75 Northbound from I-20 to Brookwood 69 69
I-75 Southbound from I-20 to Brookwood 68 67
I-285 Eastbound from GA-400 to I-75 71 69
I-285 Westbound from GA-400 to I-75 69 68
I-285 Eastbound from GA-400 to I-85 68 66
I-285 Westbound from GA-400 to I-85 68 68
I-75 Northbound from Roswell Road to Barrett Parkway 71 70
I-75 Southbound from Roswell Road to Barrett Parkway 69 69
FFS = free flow speed

Identifying Demand (Volumes)

Data from ITS freeway detectors can be used to develop input volumes at the hourly or finer level. The main issue with using detectors on already-congested freeways is that, under queuing, the volume they measure is lower than the actual demand. If this is the case, then the following procedure can be used. It is based on moving upstream from where queues typically occur to locate a detector that is relatively free of queuing influence but whose traffic patterns are similar to those of the facility being studied. If a major interchange is present between the study detector and the facility, traffic patterns are likely to be different. If the study detector meets these criteria, it is used to develop hourly factors for extended peak periods that can be applied to volumes on the freeway facility. The extended period should cover times on either side of the peak where queuing is not normally a problem (i.e., 1 hour before the peak start and 1 to 2 hours after the peak end). The factors are computed as the hourly volume divided by the total extended peak volume. These factors are then applied to the extended peak period volumes on the facility, as sketched below.
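
The factoring step can be expressed compactly. The sketch below assumes an hourly volume series from the upstream, queue-free reference detector and a single extended-peak total count at the study facility; the numbers shown are illustrative only.

    import pandas as pd

    def hourly_demand_from_factors(reference_hourly, facility_extended_peak_volume):
        """Estimate hourly demand at a queued facility from an upstream detector.

        reference_hourly: hourly volumes at the reference detector over the
            extended peak period (about 1 hour before the peak through 1 to 2
            hours after it), indexed by hour of day.
        facility_extended_peak_volume: total vehicles counted at the study
            facility over the same extended peak period.
        """
        factors = reference_hourly / reference_hourly.sum()   # hourly shares summing to 1
        return factors * facility_extended_peak_volume        # estimated hourly demand

    # Illustrative values:
    # ref = pd.Series({6: 3800, 7: 5400, 8: 5900, 9: 4700, 10: 3600})
    # demand = hourly_demand_from_factors(ref, facility_extended_peak_volume=24000)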

Once demands have been created from the congested volumes, accurate temporal distributions can be created for the study section, such as the percent of daily traffic occurring in each hour. Also, the variability in demand can be established since some reliability prediction methods require knowledge about demand variability.

Determining Incident Characteristics

Advanced modeling methods require basic data about incidents, for example, the annual number and average duration of incidents in the following categories:

  • Crashes - property damage only.
  • Crashes - minor injury.
  • Crashes - major injury and fatal.
  • Non-crashes - disabled vehicle/lane blocking.
  • Non-crashes - disabled vehicle/non-lane blocking.

Two sources of data are used to develop these inputs: (1) traditional crash data, and (2) archived incident management data. Crash data has details on crash severity, whereas incident data may or may not; conversely, crash data does not include the duration information that incident management data provides. One approach is to use state- or area-wide values for crash severity and then to rely on incident management data. The first step is to estimate incidents by type. Typical categories in incident management data are crashes, disabled vehicles, and debris. Table 10 shows data taken for I-66 in Virginia, from the I-495 Capital Beltway west for 30 miles.
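
Summaries like Table 10 can be generated directly from an incident management archive once each record carries a type, a blockage description, and a duration. The sketch below assumes those fields are named event_type, lateral_location, and duration_minutes; the names and units are illustrative.

    import pandas as pd

    def incident_characteristics(events):
        """Summarize incident frequency and duration by type and lane blockage."""
        return (events.groupby(['event_type', 'lateral_location'])['duration_minutes']
                      .agg(frequency='size',
                           average='mean',
                           std_dev='std',
                           median='median',
                           pct_95=lambda d: d.quantile(0.95))
                      .reset_index())

    # table = incident_characteristics(incident_archive)   # hypothetical DataFrame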

Table 10. Incident characteristics for I-66, Virginia.
CRASH
Obs / Lateral Location / Frequency / Duration Average / Duration Standard Deviation / Duration Median / Duration 95th Percentile
1 1 Lane Blocked 366 59.2505 75.359 43.5 1.29
2 2 Lanes Blocked 158 67.7142 55.357 55.5 1.54
3 3+ Lanes Blocked 67 77.597 60.302 57 1.82
4 No Blockage 212 52.7137 111.603 32 1.39
5 Shoulder Blocked 531 69.5431 101.793 51 1.53
DEBRIS
Obs / Lateral Location / Frequency / Duration Average / Duration Standard Deviation / Duration Median / Duration 95th Percentile
1 1 Lane Blocked 68 53.0466 65.727 31.5 242
2 2 Lanes Blocked 5 83.6 103.667 38 258
3 3+ Lanes Blocked 2 26.5 20.506 26.5 41
4 No Blockage 54 33.6296 38.253 27 94
5 Shoulder Blocked 27 50.5926 50.4 45 165
DISABLED
Obs / Lateral Location / Frequency / Duration Average / Duration Standard Deviation / Duration Median / Duration 95th Percentile
1 1 Lane Blocked 264 58.126 91.48 32.5 169
2 2 Lanes Blocked 12 49.833 21.02 53 79
3 3+ Lanes Blocked 7 263.143 469.49 38 1297
4 No Blockage 638 577.226 9908.04 8.5833 85
5 Shoulder Blocked 1552 54.147 118.88 29 182
Obs = observation
Source: Margiotta, Richard, unpublished internal memo for SHRP 2 Project L08, August 26, 2005

Impact and Lessons Learned

To explore the various investment options preferred by the public, several financial scenarios were prepared for the MPO board's consideration. The scenarios varied by spending pattern and by whether they included new funding sources for transportation, such as mobility fees on new development or increases in the gas tax or sales tax. The final outcome of the process was that operations projects were successfully evaluated on an equal basis with traditional projects and are now part of the LRTP.

Performance of Before and After Studies to Assess Projects and Program Impacts

Need

A key component of performance management is the ability to conduct evaluations of completed projects and to use the results to make better-informed investment decisions in the future. State departments of transportation (DOTs) and MPOs routinely invest great effort in forecasting the impact of capital, operating, and regulatory improvements on general and truck traffic trips, but practitioners seldom have the opportunity to evaluate the outcomes of their investments, except in isolated cases where special studies are conducted.

However, no standardized method for conducting before and after evaluations with these data sources exists, especially for operations projects. Although a rich history of evaluations has been accumulated, the vast majority of studies lack a consistent method; common performance measures; and, most problematic, controls for dealing with factors that can influence the observed performance other than the project treatment.39 A thorough treatment of an evaluation methodology is beyond the scope of this primer. Regardless of what a comprehensive methodology entails, at its core, it will be examining travel time and demand data. The remainder of this section focuses on the use of these data types.

Relevant Data

A comprehensive and statistically valid evaluation methodology will have to draw on a variety of data types to establish controls. Many factors besides an improvement project can influence travel time, including incidents, demand (e.g., day-to-day and seasonal variations and special events), weather, work zones, traffic controls, and general operations policies. Ideally, the influence of these factors is stable (or nearly so) in the before and after periods of project implementation, which allows observed changes in travel times to be attributed to the project. In theory, all of these data types can be obtained from archived operations data, although in practice, some operations agencies may choose not to archive certain types of data.

Analytical Procedures

Define the Geographic Scope of the Analysis

The geographic scope is driven by the target area of the project. For example, if an analysis is performed to pinpoint the safety benefits from the implementation of an incident management strategy over a stretch of a freeway, then the scope would be limited to that stretch. On the other hand, if the objective is to quantify safety improvements for an entire downtown area, then the scope would cover all arterials and roads in the downtown area.

In operations evaluations, a "test location" typically means a roadway section that has implemented certain operations strategies or has seen the impacts of such strategies. For analysis, roadway sections are typically 5 to 10 miles in length. If a larger area is covered, it is advisable to break it up into multiple sections. Study sections should be relatively homogeneous in terms of traffic patterns, roadway geometry, and operating characteristics.

Define Analysis Period

The temporal aspects that should be considered in before and after evaluations are:

  • Time period. Because of the need to compute reliability, a year's worth of data in each of the before and after periods is preferred. Six months is an absolute minimum, but it is likely that seasonal effects will come into play unless the same months of different years are used.
  • User-defined peak periods (AM and PM), mid-day, and off peak periods. The peak periods should be defined for weekday non-holidays. At a minimum, the peak periods should be analyzed; other periods may be added at the analyst's discretion. Note that, in some cases, the time period of interest may be different from those mentioned (e.g., weekends in rural recreational areas may be the focus of an operational treatment).

Establish Travel Time Performance Measures

Many different types of performance measures are available. Below is a non-exhaustive list that can be considered:

  • Travel time (minutes) - Because the project length or coverage is usually the same in the before and after periods, straight travel time may be used as a performance measure.
  • Mean travel time index (TTI) (unitless) - The ratio of average travel time to the ideal or free-flow travel time. For evaluating an individual project, the ideal or free-flow travel time should be the same in the before and after periods. Although straight travel time is also recommended, the TTI is useful because it is normalized, allowing the project to be compared to others with different lengths.
  • Delay (vehicle-hours and person-hours) - The actual vehicle- or person-hours of travel that occur minus those that occur under free-flow/ideal conditions. Delay is a useful measure because economic analyses have a long history of applying monetary value to delay.
  • Planning time index (PTI) (unitless) - The 95th percentile travel time divided by the free-flow travel time.
  • 80th percentile TTI.
  • Incident Performance Measures:
    • Total incidents by type: crashes, stalls, and debris.
    • Incident rate (incidents per 100 million VMT).
    • Incident duration: mean and standard deviation.
    • Lane-hours lost due to incidents (number of lanes closed multiplied by the number of hours they are closed).
    • Shoulder-hours lost due to incidents.

Develop Performance Measures from the Data

The following example illustrates the initial steps in conducting a before and after evaluation, recognizing that more sophisticated analysis using statistical controls should be done next.

An operational improvement was made on a freeway section. It was implemented quickly, with minimal disruption to traffic. Travel time and volume data were available for 1 year prior to the completion of the improvement and 1 year after. VMT, mean TTI, and PTI were computed for each week (a computational sketch follows) and plotted (Figures 45 and 46). Also, incident lane-hours and shoulder-hours were computed (Table 11).
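
A sketch of those weekly computations is shown below. It assumes facility travel time and volume records with illustrative column names, a known free-flow travel time, and a fixed section length; the TTI is travel time divided by free-flow travel time, and the PTI is the 95th percentile of the TTI.

    import pandas as pd

    def weekly_measures(df, free_flow_minutes, section_miles):
        """Weekly mean TTI, PTI, and VMT for a study section.

        df columns (illustrative): 'timestamp', 'travel_time_minutes', 'volume'.
        """
        df = df.copy()
        df['tti'] = df['travel_time_minutes'] / free_flow_minutes
        df['vmt'] = df['volume'] * section_miles
        return (df.groupby(pd.Grouper(key='timestamp', freq='W'))
                  .agg(mean_tti=('tti', 'mean'),
                       pti=('tti', lambda t: t.quantile(0.95)),
                       vmt=('vmt', 'sum')))

    # Hypothetical before/after comparison:
    # before = weekly_measures(year_before, free_flow_minutes=9.5, section_miles=8.0)
    # after  = weekly_measures(year_after,  free_flow_minutes=9.5, section_miles=8.0)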

Line graph is divided into halves to show the "before" case (without improvement) on the left and the "after" case (with improvement) on the right. The results show that the improvement reduced the mean TTI, but VMT was roughly similar in the before and after periods.
Figure 45. Graph. Hypothetical before and after case: mean travel time index and vehicle-miles of travel.

Line graph is divided into halves to show the "before" case (without improvement) on the left and the "after" case (with improvement) on the right. The results show that the improvement reduced the PTI but the VMT was roughly similar in the before and after periods. As expected, the PTI shows more volatility than the mean TTI because it measures extreme cases, which can vary greatly from week to week.
Figure 46. Graph. Hypothetical before and after case: planning time index and vehicle miles of travel.


Table 11. Incident blockage characteristics for case study.
Period Incident Lane-Hours Lost Incident Shoulder-Hours Lost
Before 35.3 51.3
After 26.7 66.1

The results show that the improvement reduced both the mean TTI and PTI and that VMT was roughly similar in the before and after periods. As expected, the PTI shows more volatility than the mean TTI because it measures extreme cases, which can vary greatly from week to week. Incident lane-hours lost is higher in the before period, while incident shoulder-hours lost are higher in the after period. A more thorough examination of incidents, as well as other factors, should be pursued to ensure that the congestion and reliability improvement are due to the project and not to other factors.

Impact

Development of an ongoing performance evaluation program enables planning agencies to make more informed decisions about investments. The results of evaluations can be used to highlight agency activities to the public and to demonstrate the value of transportation investments. The knowledge gained from evaluations can lead to development of impact factors for future studies, as well as an understanding of where and when certain types of strategies succeed or fail. These impact factors, tailored to an agency's particular setting, are useful for future planning activities when alternatives are being developed. For example, evaluations of the conditions under which ramp metering is effective can avoid future deployments in situations where they will not be successful.

Case Study: Florida Department of Transportation Evaluation of the Port of Miami Tunnel Project

Overview

The Florida DOT (FDOT) used archived data to evaluate the impact that the Port of Miami Tunnel project had on performance. The Port of Miami Tunnel project was built by a private company in partnership with FDOT, Miami-Dade County, and the City of Miami. By connecting State Route A1A/MacArthur Causeway to Dodge Island, the project provided direct access between the seaport and highways I-395 and I-95, thus creating another entry point to the Port of Miami (Figure 47). Additionally, the Port of Miami Tunnel was designed to improve traffic flow in downtown Miami by reducing the number of cargo trucks and cruise-related vehicles on congested downtown streets and aiding ongoing and future development in and around downtown Miami.

The map shows the Port of Miami and an outline of the proposed tunnel.
Figure 47. Map. The Port of Miami tunnel project.
Source: Florida Department of Transportation, Port of Miami Tunnel, Project Maps. Available at: http://www.portofmiamitunnel.com/press-room/project-maps/.

Data

FDOT used travel time data from NPMRDS and traffic volume data from their own system to conduct the evaluation.

Agency Process

The purpose of this study is to examine the potential impacts to the transportation system resulting from the opening of the Port of Miami Tunnel. The study utilizes tabular and geospatial data from FDOT and the NPMRDS. Using a geographic information system (GIS), these data sources are merged spatially to create value-added information that would otherwise be unavailable. The resulting information provides FDOT with a better understanding of what impacts there are to the transportation system, the magnitude of these impacts, and where these impacts are occurring. Impacts considered in the study are:

  • Differences in truck flows using truck count data from FDOT.
  • Congestion - travel time data from HERE to compare and analyze average travel speeds (pre- and post-tunnel) within the tunnel vicinity.

This study compares the average weekday travel speeds during peak hours in August 2013 and August 2015. The speed data for August 2013 was collected prior to the opening of the Port of Miami Tunnel. Data for the corresponding month in 2015 was collected after the opening of the Port of Miami Tunnel (Figure 48). In Figure 48, the pie symbols represent differences in truck counts, with 2015 data shown in dark blue and 2013 data shown in light blue. The red and green lines represent a decrease and an increase in average weekday travel speed, respectively. Results from the evaluation showed:

  • Improved or similar travel speeds through downtown Miami.
  • Improved or similar travel speeds along MacArthur Causeway eastbound, portions of I-395 East, and portions of Biscayne Boulevard.
  • Slower speeds along portions of I-395 West, I-95, and Biscayne Boulevard westbound.

Impact and Lessons Learned

This analysis of the transportation system impacts resulting from the opening of the Port of Miami Tunnel is just one example of how powerful value-added tabular and spatial data can be. The long-term vision is to empower planners, freight coordinators, consultants, and others throughout FDOT with value-added data and information that can assist with decision making and the achievement of federal guidelines.

Although working with very large and complex data sets presents a number of challenges, including human resource, information technology (IT), and financial availability, the long-term benefits are expected to outweigh the challenges. Because much value-added data is generated via spatial analysis, disparate geometric (geographic information system) networks should be edited and aligned with each other in an ongoing fashion. With the various networks merged together, the attributes associated with these networks can work together via spatial coincidence. The value-added tabular and spatial information generated through a variety of complex mechanisms will be made available throughout FDOT, including Central Office in Tallahassee, Florida, and the District offices throughout the state.

Map of the area surrounding the proposed Port of Miami tunnel in which pie symbols on the various roadways within the network represent differences in truck counts, with 2015 data shown in dark blue and 2013 data shown in light blue. Red and green lines overlaid on the major roadways represent a decrease in average weekday travel speed and increase in average weekday travel speed, respectively.
Figure 48. Map. Truck flow and speed impacts of the Port of Miami tunnel project.
Source: O'Rourke, Paul, Florida Department of Transportation, 2016, Unpublished. ©OpenStreetMap and contributors, CC-BY-SA.

Identification of Causes of Congestion

Need

The prior case study shows how patterns in congestion can be identified, ranked, and prioritized. However, simply identifying locations with problems is not enough; analysts need to know "why" a problem is occurring, regardless of whether it is a recurring or non-recurring problem. Perhaps the congestion is related to a geometric issue with the road, or maybe it is incident or event related. Maybe it is simply a capacity issue.

The analysis also can be reversed: if a large incident has just occurred, how can the impacts (environmental or otherwise) of the related congestion and delays be measured? In this case study, the following will be considered:

  • Determine the cause of the recurring congestion.
  • Determine the cause of the non-recurring congestion.
  • Examine the effect of events, incidents, or policy decisions on congestion.

Relevant Data

The data required to identify patterns and investigate cause and effect are not very different from those used in the FDOT Port of Miami Tunnel case study above. An agency will need a data source that can first identify the presence of congestion. This could be a dense network of point-sensor data (e.g., inductive loops, side-fired radar/microwave, video detection), or it could be probe-based speed and travel time data (from commercial third parties, such as HERE, INRIX, and TomTom, or from Bluetooth or WiFi detection systems). These data help to identify the presence of congestion.

To identify the cause of the congestion, the following data sources may be of value: traffic management center ATMS event logs, police accident records, weather radar, road weather measurements, aerial photography, and volume data profiles.

Analytical Procedures

Operations data from probes or other high-density sensors can be used to identify congested areas or bottlenecks. When these data are commingled with ATMS event data, it becomes possible to identify whether the congestion is recurring or non-recurring. Each agency may define congestion slightly differently, but in its most basic sense, congestion occurs any time speeds drop below (or travel times rise above) some desired threshold. For some, that threshold is the speed limit. For others, a segment of a road becomes congested when the space mean speed on it falls below some percentage of the speed limit. Queues can be measured on the roadway by simply adding the lengths of adjacent road segments that meet the same condition, as sketched below.
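
The sketch below shows one way to implement that segment-level test and to accumulate adjacent congested segments into queues. The 60-percent-of-speed-limit threshold and the column names are assumptions chosen for illustration, not a recommended standard.

    import pandas as pd

    def queue_lengths(segments, threshold_fraction=0.6):
        """Flag congested segments and sum adjacent congested segments into queues.

        segments: DataFrame ordered along the direction of travel, with
        illustrative columns 'segment_id', 'length_miles', 'speed', 'speed_limit'.
        """
        segments = segments.copy()
        segments['congested'] = segments['speed'] < threshold_fraction * segments['speed_limit']
        # Consecutive segments with the same congestion state share a group id;
        # each congested group is treated as one queue.
        segments['queue_id'] = (segments['congested'] != segments['congested'].shift()).cumsum()
        return (segments[segments['congested']]
                .groupby('queue_id')['length_miles'].sum()
                .rename('queue_miles'))

    # queues = queue_lengths(corridor_snapshot)   # one row per distinct queue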

Flow data from probes and dense sensor networks can sometimes be challenging to work with because of the sheer volume of data, complex linear referencing systems that might not match up with agency centerline road databases, and limits on agency technical and resource capacity. A relatively small state, Maryland, has nearly 12,000 road segments. With each segment providing data every minute, that comes to more than 6.3 billion data points every year.

To conduct a causality analysis, an agency must be able to associate the non-speed/flow data elements with the roadway. ATMS incident, event, or construction data and police accident records will likely need to be assigned to a specific point on the roadway or to a roadway segment. This could require some basic geospatial queries but, in certain cases, may be more complex. Some speed data from third parties is provided in a proprietary geospatial standard referred to as traffic message channel (TMC) segments. This georeferencing system will likely be different from most agency centerline road networks. Furthermore, some police accident records store their locations only in log-miles or other linear referencing systems. Aligning all of these different linear referencing systems can be time consuming.
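
One simple form of this conflation is to snap each crash or event point to the nearest roadway segment and record its offset along that segment. The sketch below uses the shapely geometry library with made-up segment identifiers; a production workflow would also handle projection, travel direction, and ambiguous matches.

    from shapely.geometry import Point, LineString

    def nearest_segment(event_point, segments):
        """Assign an event location to the closest roadway segment.

        segments: dict mapping an illustrative segment id (e.g., a TMC-style
        code) to a LineString in a projected coordinate system (feet or meters),
        so that distances are meaningful.
        """
        seg_id, line = min(segments.items(),
                           key=lambda item: item[1].distance(event_point))
        offset = line.project(event_point)   # distance along the segment
        return seg_id, offset

    # Hypothetical example:
    # segs = {'110+04761': LineString([(101000, 225000), (102000, 225500)])}
    # seg_id, offset = nearest_segment(Point(101200.0, 225300.0), segs)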

As with police incident and ATMS data, weather data will need to be merged with the congestion information so that the two can be correlated. Road weather data directly from road weather information system (RWIS) stations can usually be linked in the same manner as ATMS records; however, RWIS stations are sparsely deployed and do not offer complete coverage. Weather data from the National Weather Service is fairly ubiquitous but can be more complex to work with. Radar reflectivity data that represents precipitation rates or snowfall can be associated with road segments using geospatial queries. Commercial third-party road weather data products are also available; they are usually pre-processed to reflect conditions directly on the roadway and can be associated with road segments in the same manner as probe-based speed data.

Once all of the disparate operations data are joined both geospatially and temporally, real analytics can begin. The analysis can range from big-data-style screening of correlation coefficients, searching every possible pair or set of variables for relationships, to something far simpler: identifying a problem location and then asking additional questions (e.g., what percentage of the time was the congestion at that location related to weather and/or incidents, and was the congestion greater when weather was a factor?). The analysis can also be reversed: first identify major weather events, and then determine whether congestion was worse at various locations during those events compared to normal. These types of analyses are discussed in the real-world case studies below.
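
The simpler, question-driven form of the analysis reduces to a few grouped summaries once the joined table exists. The sketch below assumes one row per analysis interval at a single location, with boolean flags for congestion, weather, and incidents and a delay column; all names are illustrative.

    import pandas as pd

    def congestion_context(hours):
        """Answer two basic causality questions for one location."""
        congested = hours[hours['congested']]
        # Share of congested intervals that coincided with weather and/or incidents.
        share_related = (congested['weather'] | congested['incident']).mean()
        # Was congestion more severe when weather was a factor?
        delay_with_weather = hours.loc[hours['weather'], 'delay_veh_hours'].mean()
        delay_without_weather = hours.loc[~hours['weather'], 'delay_veh_hours'].mean()
        return {'congested share with weather or incident': share_related,
                'mean delay with weather': delay_with_weather,
                'mean delay without weather': delay_without_weather}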

Agencies with strong, integrated IT departments, dedicated data analysts, sufficient data storage hardware, and access to statistical software such as SAS can handle this type of analysis given time and proper funding. However, many agencies simply subcontract their analysis to consultants or university transportation departments, or they purchase off-the-shelf congestion analytics software and data integration services tailored for transportation planners.

Impact

With the advancement of tools and technologies, roadway problems are more easily identified. However, significant effort is needed to effectively understand why a problem exists at a particular location—ultimately leading to the identification of solutions to the various problems. Impact and causality analysis are the end goal of archived operations data analytics. Impact analysis affords the opportunity to change a simple observation (e.g., last week's snowstorm caused significant delays in the region) into a more detailed observation (e.g., last week's snowstorm impacted 3 million travelers, costing some $45 million in user delay and excess fuel consumption, and resulting in 60 serious collisions that claimed the lives of three individuals in two separate families).

Progressive agencies that use archived operations data can further justify their investment (e.g., while the impacts of last week's snowstorm were significant and deadly per the prior statement, the damage could have been worse. The roads were pre-treated with 100,000 tons of sodium chloride brine, which reduced the plowing cleanup time by nearly 9 hours. Furthermore, safety service patrols reduced the clearance time of the 60 serious collisions by over 20 hours. These actions saved area commuters $32.8 million in excess delay, wasted fuel, and the reduction of secondary accidents).

These last two statements are only possible with a robust data archive. The results of such an analysis allow agencies to better plan decisions for future storms, better justify operating budgets, and respond to questions from the media and public officials in a more complete and timely manner.

Case Study 1: Maryland State Highway Administration Examines Causes of Recurring and Non-Recurring Congestion

Overview

This case study shows how the cause of both recurring and non-recurring congestion can be determined, whether by using existing tools, hiring a consultant, or doing the study in-house.

The Maryland State Highway Administration (MD SHA) routinely evaluates the impacts of congestion on its roadways with a focus on understanding why congestion is occurring, how significant the congestion is in terms of measurable impacts (e.g., queues, user delay costs), determining potential solutions to the congestion (e.g., projects, operational strategies), and then measuring the effectiveness of the chosen solution.

Data

Five archived operations data sets are used by MD SHA to perform their analysis: probe speed, volume, ATMS event, police accident, and weather. The probe data is obtained from a private-sector data provider and is archived from the real-time feed at 1-minute intervals; it represents the average speed across the length of a road segment for each 1-minute period.

The volume data is derived from two sources: HPMS-derived volume profiles and side-fired remote traffic microwave sensor (RTMS) detectors.

The event data is derived from two sources: the MD SHA Coordinated Highways Action Response Team (CHART) ATMS platform, and police accident reports collected and digitized by the Maryland State Police in coordination with MD SHA. The ATMS data is entered by operations personnel into their CHART platform in real-time. The type of information recorded includes the type of event (e.g., construction, collision, special event, disabled vehicle, police activity), location, lane closure information, and number of responders on scene.

The police accident reports contain additional details about the cause and nature of the incident, including the types of individuals involved and their conditions.

Weather data is a mixture of both weather radar data (coming from freely available National Weather Service data feeds) and RWIS station data. The RWIS stations are installed and operated by a DOT and include measurements such as wind speed and direction, precipitation rates, visibility, salinity content of the moisture on the roadway, and barometric pressure.

Agency Process

As with the case study on problem identification and confirmation with NJDOT, MD SHA uses a suite of third-party data analytics tools to collect, fuse, archive, and visualize their archived operations data. To evaluate the cause of congestion, MD SHA first determines where congestion is occurring. This can be accomplished using the probe data analytics suite and by following the same general process as described for NJDOT above.

Figure 49 shows an example of a bottleneck ranking query for September 2015 on all Maryland Interstates. By selecting the top-ranked congested location (I-270 southbound), the following can be determined:

  • The congestion usually lasts 2 hours and 17 minutes.
  • The queue usually backs up to just over 19 miles.
  • There were 104 events that fell within the geography of this congested location.
  • The bulk of the congestion at this location occurs between 5:30 AM and 10 AM Monday through Friday, confirming this is primarily a recurring congestion event.
  • Significant congestion can occur outside of the morning rush hour, as can be seen by the outlier color bands in the afternoons and at night.
A screenshot of the Bottleneck Ranking Tool from the Probe Data Analytics Suite developed by the University of Maryland CATT Laboratory. Labels on the image point out the ranked list of the most congested locations across the top, a map of the location of the currently selected bottleneck on the lower left, and a spiral graph on the lower right on which non-recurring congestion events (outlier) during the afternoon and night as well as recurring AM rush-hour congestion events (Monday through Friday) are highlighted.
Figure 49. Screenshot. The bottleneck ranking tool.
Source: Probe Data Analytics Suite. University of Maryland CATT Laboratory.

Once a user determines when and where congestion is occurring, he or she can select additional tool overlays (e.g., construction, incident, special event) to map the event data on top of the time spiral to help identify event causality. These event icons are interactive. Figure 50 shows that a couple of the afternoon outliers are the result of significant events. The one highlighted in that figure happens to be an injury accident that occurred on a Saturday afternoon.

A screenshot of the Bottleneck Ranking Tool using INRIX data from the Probe Data Analytics Suite. In this figure, yellow diamonds denote incidents or disabled vehicles, red diamonds denote injury or fatality incidents, and orange diamonds denote construction or work zone events. Selecting the congestion lines causes the tool to display a congestion scan graphic of the speeds and travel times for that particular day and this particular corridor.
Figure 50. Screenshot. Congestion time spiral graphic with incidents and events overlaid.
Source: Probe Data Analytics Suite. University of Maryland CATT Laboratory.

In Figure 50, yellow diamonds denote incidents or disabled vehicles, red diamonds denote injury or fatality incidents, and orange diamonds denote construction or work zone events. Selecting the congestion lines then displays a congestion scan graphic of the speeds and travel times for that particular day and this particular corridor (as seen in Figure 51).

For further study and use, the user can select the congestion event in the time spiral, which will create other graphics depicting additional elements:

  • The congestion on that stretch of road for that specific day of the week (Figure 51).
  • The congestion on that stretch of road on that specific day compared to similar days that same month (Figure 52).
  • The estimated user delay cost on that road and that direction of travel by hour of day.
Heat map depicting the result of a congestion scan on a specific road segment for one day of the week.
Figure 51. Screenshot. Congestion scan graphic of outlier congestion showing both north- and southbound traffic on I-270.
Source: Probe Data Analytics Suite. University of Maryland CATT Laboratory.

In Figure 51, there is unusual congestion on a Saturday evening in the southbound direction. Incident overlays show that this was caused by an injury incident (bottom of the screen) and that a vehicle also became disabled towards the back of the queue.

Heat map depicting the result of a congestion scan on a specific road segment comparing every Saturday with one Saturday. In this image, the display options window is also active.
Figure 52. Screenshot. Congestion scan graphic including Saturdays.
Source: Probe Data Analytics Suite. University of Maryland CATT Laboratory.

By editing the display options, the analyst has added an additional date range to the graphic. All Saturdays during the month of September are also drawn on the screen in the outside columns of data. Note that there is little or no congestion "normally" on Saturdays in the southbound direction. This analysis confirms that the congestion and incidents on September 19 (innermost columns) are non-recurring.

The agency can use other analytic tools found within the RITIS platform to review safety data from police accident reports or ATMS event/incident reports to determine if incident hot spots exist that may be globally affecting congestion on the roads in question. Some of these tools allow extremely deep querying and causality analytics. Figure 53 shows how heat maps can be drawn over the roadway to show when and where collisions, weather events, or construction projects are most likely to occur for any month, day of week, or other time range during the year.

Figure 54 shows how the tool also can be used for deep analysis (e.g., correlation of coefficients, standard deviation of frequencies, number of outliers, maximum frequency, number of unique pairs, uniformity of the distribution) to further identify patterns, trends, and outliers within the data. The correlation of coefficients ranking function can be used to verify if any pair of variables within the data set might be affecting each other (e.g., does a wet road highly correlate with fatal collisions, or does the day of the week in which an incident occurs correlate with hazmat incidents or the number of vehicles involved?).

Combination map and graph shows how heat maps can be drawn overtop of the roadway to indicate when and where collisions, weather events, or construction projects are most likely to occur for any month, day of week, or other time range during the year. The graph below the map quantifies incident frequency for the period by type.
Figure 53. Screenshot. Heat maps function within incidents clustering explorer.
Source: Incident Cluster Explorer, University of Maryland CATT Laboratory.

In Figure 53, the heat maps in the upper right show where incidents have occurred over a given date range. A list of variables in the upper left is clickable, filterable, and interactive. Selecting a particular variable generates a histogram (bottom) that shows the breakdown of possible values for that variable. Here, the user has clicked on "incident type," which results in an interactive histogram depicting the number of each incident type occurring during the particular date range. Clicking on each bar of the histogram then highlights those incidents on the map above.

Screen capture of the Incident Cluster Explorer shows how the tool also can be used for deep analysis (e.g., correlation of coefficients, standard deviation of frequencies, number of outliers, maximum frequency, number of unique pairs, and uniformity of the distribution) to further identify patterns, trends, and outliers within the data. The correlation of coefficients ranking function in the upper left allows a user to compare variable pairs to determine if any correlation exists.
Figure 54. Screenshot. Correlation of coefficients ranking function in incidents clustering explorer.
Source: Incident Cluster Explorer, University of Maryland CATT Laboratory.

In Figure 54, the correlation of coefficients ranking function in the upper left allows a user to compare variable pairs to determine if any correlation exists. Clicking on any of these pairs of variables then brings up detailed two-dimensional histograms (not shown) that make it easy for users to do a deeper dive into the various relationships, trends, and statistics related to those variables.

Impacts and Lessons Learned

This type of analysis has historically been difficult for MD SHA to conduct because the necessary data sets were not readily available. Creating a central data repository and developing tools that make it faster and more intuitive to ask questions has dramatically improved productivity and understanding within the organization. Historically, problems were identified through laborious studies that often involved dedicated data collection. These studies may have only been initiated if someone within the community complained enough about the congestion. Presently, with more ubiquitous data from probes and the integration of incident and event data from both law enforcement and the ATMS platforms, the agency is able to identify problems, causality, and overall impact before the public raises concerns, and to respond better when they do.

In addition to answering these questions more quickly, MD SHA is more confident in the results of their studies. The ease of access to the data, coupled with the compelling visualizations, results in analysts being able to communicate effectively with different audiences.

Case Study 2: Metropolitan Area Transportation Operations Coordination Examines Effects of Events, Incidents, and Policy Decisions on Congestion

Overview

Unlike the case study above, which first identified heavily congested locations and then tried to determine causality, this case study reviews how to examine the effect of events, incidents, or policy decisions on congestion.

The Metropolitan Area Transportation Operations Coordination (MATOC) program is a coordinated partnership between transit, highway, and other related transportation agencies in the District of Columbia, Maryland, and Virginia, that aims to improve safety and mobility in the region through information sharing, planning, and coordination. MATOC is frequently called upon to help agencies perform after action reviews of major incidents, produce impact statements post-event for the media, or help agencies measure the impact of certain operations decisions (e.g., closing the federal government due to winter weather, closing certain roads, responding in a certain way to an event).

This case study shows how MATOC used the archived operations data from the agencies that they support to help measure the impacts of delaying the opening of schools and the Federal government by 2 hours.

Data

The data for this case study are the same as those used for the case study discussion above; however, there is a greater emphasis on weather data: both RWIS data and weather radar from the National Weather Service.

Agency Process

MATOC uses a series of tools found within RITIS for analyzing weather impacts, user delay cost, and travel times.

After a winter weather event ends, MATOC is usually contacted by various groups (e.g., the media, transit agencies, DOTs, MPOs, legislators) to help answer questions about the impacts of a particular event on the region. MATOC follows this process to develop graphics, statistics, and reports:

  1. Choose a corridor of significance that will help tell a compelling story. The corridor must be of regional significance and a general pain point for commuters.
  2. Use the probe data analytics suite or the detector analytics tools to run travel time queries for a common commute route on the corridor of significance.
  3. Use the RWIS explorer and/or the other historic weather tools within RITIS to determine which dates did or did not have winter weather, other precipitation, construction, or other incidents.
  4. Run a user delay cost query to determine the financial impacts of the winter weather event on the corridor (or the entire region, if needed).
  5. Develop an animated trend map that shows traffic conditions during the winter weather event compared to normal days without events.
  6. Compile all of the outputs listed above into a report or embed the results online using embedding and exporting support tools within the RITIS analytics.

Below are the outputs of each of the steps listed above.

Steps 1 and 2

In Step 1, the MATOC analysts determined that the I-66 corridor leading into Washington D.C. would be a well-known corridor of significance. A 20-mile section of the corridor was selected, and a travel time query was run for the day of the 2-hour delayed opening. Simultaneously, a query was run for two similar days of the week during the same season. Figure 55 is the result of this query. Note that the travel time during normal rush hours is nearly double that of the travel times during the 2-hour delayed opening. Two hours later, a slightly larger rush hour occurred. Note that, in the evening, the rush hour travel times are nearly the same as the normal rush hour; however, this rush hour is only 1 hour later than normal instead of 2 hours later. Therefore, it appears that most individuals went to work 2 hours later than normal but only stayed 1 hour later.

Line graph depicts Travel Time (minutes) over Hour of Day for I-66 northbound on a given day. In this instance, the graph indicates that most individuals went to work 2 hours later than normal but only stayed 1 hour later.
Figure 55. Graph. Plot of travel time by hour of day for I-66, created via Regional Integrated Transportation Information Systems Vehicle Probe Project Suite.
Source: Metropolitan Area Transportation Operations Coordination (MATOC) Program Facilitator, Taran Hutchinson.

Figure 55 depicts travel times along I-66 during normal rush hours compared to travel times when a winter weather event resulted in a 2-hour delayed opening for most schools and the Federal Government.

Step 3

To ensure that the analysis is based on accurate data, MATOC should verify that the other date ranges against which the winter weather event is being compared are clear of other events (weather or incidents). This can be done via the probe data analytics suite by overlaying historic National Weather Service radar data and historic incident/event data on top of any date/time in the past (Figure 56). This visually confirms that a date or date range is safe for inclusion in the analysis.

Screen capture identifies bottlenecks and events in a panel to the left and contains a map of the National Capital region to the right. The roadways on the map are color coded to indicate congestion, and a weather radar overlay shows severe weather cells in the area impacting travel.
Figure 56. Screenshot. Historical dates and times to visualize weather and incident details during that date and time.
Source: University of Maryland, Probe Data Analytics Suite.

In the tool shown in Figure 56, after the date is entered, the map redraws itself to show what was happening at that particular time including: weather radar, incidents and events, bottlenecks, and speeds. The left side of the screen shows a ranked list of all of the bottlenecks that were occurring at that time.

Alternatively, MATOC can look at RWIS station data (if there is enough coverage) to see if road weather may have impacted travel conditions. This particular analytics tool from RITIS (seen in Figure 57) allows MATOC to select any date range and view weather conditions, such as precipitation rates, visibility, and surface conditions, while simultaneously viewing incident/event data and flow data. This makes it possible to compare data from RWIS stations (e.g., precipitation rates, visibility, wind speeds, surface conditions) with other archived data (e.g., speeds, volumes, flow, incidents, events) and thereby evaluate the effects of all forms of weather on congestion and incidents.

Screenshot from the Clarus History Explorer, shows a series of line graphs that plot speed, volume, traffic events, precipitation, and visibility, allowing the user to correlate weather impacts for a given period.
Figure 57. Screenshot. The road weather information systems history explorer.
Source: Clarus History Explorer: University of Maryland CATT Laboratory.

Step 4

In Step 4, MATOC runs a user delay cost query on the I-66 corridor to determine the financial impacts of the winter weather compared to normal user delay on the same corridor. Figure 58 shows the resulting user delay for the morning snow event discussed in this case study. Note how much higher the user delay is during the morning of the snow event compared to normal.
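
At its core, a user delay cost estimate multiplies the extra vehicle-hours of travel (relative to a reference travel time) by a monetary value of time. The sketch below is a simplified illustration; the dollar values and field names are placeholders, not the rates or method built into the RITIS tools.

    def user_delay_cost(intervals, passenger_value=17.0, truck_value=52.0):
        """Rough corridor user delay cost summed over analysis intervals.

        intervals: iterable of dicts with illustrative keys 'travel_time_hr',
        'reference_time_hr', 'passenger_volume', and 'truck_volume'. The dollar
        values of time per vehicle-hour are placeholders only.
        """
        total = 0.0
        for r in intervals:
            extra_hr = max(r['travel_time_hr'] - r['reference_time_hr'], 0.0)
            total += extra_hr * (r['passenger_volume'] * passenger_value
                                 + r['truck_volume'] * truck_value)
        return total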

Screen capture of a User Delay Cost Table from the University of Maryland's Probe Data Analytics Suite. Each cell represents 1 hour of the day on a particular day of the week, and each row represents one 24-hour period. The cells are color coded to represent hourly user delay in thousands of dollars. The bottom row shows the Hourly Totals. The right-most column shows the Daily Totals.
Figure 58. Screenshot. Visual representation of the cost of delay for both passenger and commercial vehicles.
Source: University of Maryland Probe Data Analytics Suite.

Screen capture of two versions of an interactive animated map depicting the Capital Beltway region. The map on the left shows conditions during a winter weather event (speeds are less than 30 mph) and the map on the right depicts conditions at the same location and time but during a standard weekday (speeds are just over 50 mph).
Figure 59. Screenshot. An interactive animated map showing conditions during a winter weather event (left) compared to conditions during normal days of the week (right).
Source: University of Maryland CATT Laboratory.

Step 5

Here MATOC runs a similar query to that in Steps 1 and 2; however, the output is an animated trend map instead of a simple line chart. The animated trend map shows travel conditions on additional corridors and shows a side-by-side comparison of travel speeds during the event compared to normal (Figure 59).

Step 6

Finally, the various statistics and visualizations previously mentioned are consolidated into a report or correspondence, accompanied by a narrative description that details the impacts of the weather event.

Impacts and Lessons Learned

MATOC's ability to efficiently produce these types of reports has made the program a priority resource for after-action analyses and media requests, which helps secure its annual operating budget. MATOC now trains other agencies on how to perform their own analysis using these tools.

22 I-95 Corridor Coalition, Vehicle Probe Project web site. Available at: http://i95coalition.org/projects/vehicle-probe-project/. [ Return to note 22. ]

23 Go to https://vpp.ritis.org/suite/faq/#/how-are-bottleneck-conditins-tracked for information on bottlenecks, and https://vpp.ritis.org/suite/faq/#/calculations for information on user delay cost. [ Return to note 23. ]

24 Cambridge Systematics et al., Guide to Effective Freeway Performance Measurement: Final Report and Guidebook, Web-Only Document 97, Transportation Research Board, August 2006, Available at: http://onlinepubs.trb.org/onlinepubs/nchrp/nchrp_w97.pdf. [ Return to note 24. ]

25 Kittelson & Associates et al., Incorporation of Travel Time Reliability into the Highway Capacity Manual, SHRP 2 Report S2-L08-RW-1, TRB, 2014. Available at: http://onlinepubs.trb.org/onlinepubs/shrp2/SHRP2_S2-L08-RW-1.pdf. [ Return to note 25. ]

26 Ibid. [ Return to note 26. ]

27 Kittelson & Associates et al., Incorporation of Travel Time Reliability into the Highway Capacity Manual, TRB, 2014. Available at: http://onlinepubs.trb.org/onlinepubs/shrp2/SHRP2_S2-L08-RW-1.pdf. [ Return to note 27. ]

28 From SHRP 2 Report SR-L08-RW-1: Incorporation of Travel Time Reliability into the Highway Capacity Manual, Figure ES.2, p. 3. Copyright, National Academy of Sciences, Washington, D.C., 2014. Reproduced with permission of the Transportation Research Board. Available at: http://www.trb.org/Main/Blurbs/169594.aspx. [ Return to note 28. ]

29 From SHRP 2 Report SR-L08-RW-1: Incorporation of Travel Time Reliability into the Highway Capacity Manual, Table 2.1, p. 17. Copyright, National Academy of Sciences, Washington, D.C., 2014. Reproduced with permission of the Transportation Research Board. [ Return to note 29. ]

30 Cambridge Systematics, et al., Guide to Effective Freeway Performance Measurement: Final Report and Guidebook, Web-Only Document 97, Transportation Research Board, August 2006. Available at: http://onlinepubs.trb.org/onlinepubs/nchrp/nchrp_w97.pdf. [ Return to note 30. ]

31 For example: Cambridge Systematics et al., Guide to Effective Freeway Performance Measurement: Final Report and Guidebook, Web-Only Document 97, Transportation Research Board, August 2006, Available at http://onlinepubs.trb.org/onlinepubs/nchrp/nchrp_w97.pdf. Also: Cambridge Systematics, Inc. et al., Analytical Procedures for Determining the Impacts of Reliability Mitigation Strategies, The Second Strategic Highway Research Program, Report S2-L03-RR-1, Transportation Research Board (Washington, DC: 2013). [ Return to note 31. ]

32 The Texas Agricultural and Mechanical University's Urban Mobility Report demonstrates a method for deriving volumes to be used with vehicle probe data: http://mobility.tamu.edu/ums/rport/. [ Return to note 32. ]

33 Cambridge Systematics et al., Analytic Procedures for Determining the Impacts of Reliability Mitigation Strategies, SHRP 2 Report S2-L03-RR-1, Transportation Research Board, 2013. Available at: http://onlinepubs.trb.org/onlinepubs/shrp2/SHRP2_S2-L03-RR-1.pdf. [ Return to note 33. ]

34 For more information, see the Metropolitan Washington Council of Governments (MWCOG) Transportation Congestion Dashboard at: https://www.mwcog.org/congestion/. [ Return to note 34. ]

35 California Department of Transportation, Performance Measurement System. Available at: http://pems.dot.ca.gov/. [ Return to note 35. ]

36 Delaware Valley Regional Planning Commission, Transportation Operations Master Plan (Philadelphia: PA, 2009). Available at: http://www.dvrpc.org/Transportation/TSMO/MasterPlan/. [ Return to note 36. ]

37 Economic Development Research Group, Development of Improved Economic Analysis Tools Based on Recommendations from Project C03 (2), The Second Strategic Highway Research Program, Project C11, Transportation Research Board (Washington, DC: 2013). Available at: http://apps.trb.org/cmsfeed/TRBNetProjectDisplay.asp?ProjectID=2350. [ Return to note 37. ]

38 Kittelson & Associates et al., Incorporation of Travel Time Reliability into the Highway Capacity Manual, The Second Strategic Highway Research Program, Report S2-L08-RW-1, Transportation Research Board (Washington, DC: 2014). Available at: http://www.trb.org/Main/Blurbs/169594.aspx. [ Return to note 38. ]

39 United States Department of Transportation, Intelligent Transportation Systems (ITS) benefits database. Available at: http://www.itsbenefits.its.dot.gov/. [ Return to note 39. ]

