Evaluation Methods and Techniques | Advanced Transportation and Congestion Management Technologies Deployment Program

Chapter 3: Performance Measures

This chapter provides a set of recommended performance measures (PMs) to assist Advanced Transportation and Congestion Management Technologies Deployment (ATCMTD) grantees in meeting the reporting requirements of the FAST (Fixing America's Surface Transportation) Act. As outlined in 23 U.S.C. 503(c)(4)(F), grantees must produce annual reports that describe the findings from their deployments, including data on benefits, costs, effectiveness, and lessons learned, among other information (see Develop Annual Report for specific FAST Act reporting requirements). In addition, 23 U.S.C. 503(c)(4)(G) requires the Secretary of Transportation to submit a Program Level Report (not later than 3 years after the date of the first grant award and each year thereafter) that describes how the program has:
The PMs presented below are intended to provide ATCMTD grantees with a core set of measures. In developing the set of recommended PMs, several key criteria were applied, to the extent possible. Namely, the measures should be:
While the measures tend to be quantitative and outcome-based, measures that rely on qualitative data are also presented, as ATCMTD grantees will want to include performance measures that reflect a mix of quantitative and qualitative data. In designing their evaluations, ATCMTD grantees should start with the performance measures described below; however, the list is by no means exhaustive. Grantees may want to include additional performance measures that are tailored to their specific deployments and that provide insight on the safety, mobility, agency efficiency, and other impacts of their technology deployments. It should be noted that projects will not necessarily address all of the performance areas. PMs should be selected based on the technology being deployed, the anticipated impacts, and data availability. The remainder of this chapter presents performance measures for each of the key performance areas outlined in the FAST Act:
The references at the end of this chapter list a number of useful resources, such as FHWA's Transportation Performance Management (TPM) Toolbox, which includes the TPM Guidebook and Resources (see https://www.tpmtools.org/about/). TPM measures and targets may provide grantees with a source of data to meet the grant performance measurement requirements.

Improve Safety

Table 7 presents a number of safety-related performance measures, organized by mode of transportation. While they are generally prioritized within each mode, grantees should select the measures that are most relevant to their specific deployments. That is, the selection of performance measures will depend on the technologies being deployed and what problem(s) they are trying to solve. Careful thought should be given to the specific type of safety benefits that are anticipated from the technology deployment. Nearly all of the PMs involve a measure of change (e.g., in crashes, fatalities, or injuries), which is based on a comparison of data between a baseline (pre-deployment) period and a post-deployment period. The preferred type of measure is a rate, because it adjusts for the level of exposure; however, there may be cases where counts are the only data available (e.g., for bicycle or pedestrian measures). FHWA adopted five safety-related performance measures as part of the TPM program. These include total counts for fatalities, serious injuries, and (as a separate category) fatalities and serious injuries to non-motorized road users, and rates per 100 million vehicle miles traveled (VMT) for fatalities and serious injuries. These categorizations are covered within the more detailed list of performance measures listed below. The Safety Performance Management Final Rule also established methodological guidelines for reporting these measures, which grantees may find useful.2 Grantees should consider the use of multiple measures to understand the safety impacts of their technologies. In addition to crash records or field test data on crash precursors, survey data can provide a complement to (but not a substitute for) these other data sources, offering useful information on user (e.g., drivers, transit operators) experience or attitudes. It is also important to consider the geographic scope when developing PMs. The measures included in Table 7 can be used at any geographic level (intersection, corridor, or region). However, as geographic scope decreases, random variation tends to increase, and thus intersection-level or even corridor-level analysis can be highly variable from year to year. Any comparisons at these lower levels should be made with care. When reporting the performance measurement findings, grantees should clearly convey the geographic scope of the measures.
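For grantees computing exposure-adjusted rates rather than raw counts, the arithmetic is straightforward. The sketch below, in Python, illustrates one way to compute a fatality rate per 100 million VMT and the change between a baseline and post-deployment period; the input figures are hypothetical and the helper name is not drawn from this document.

```python
def rate_per_100m_vmt(event_count: float, vmt: float) -> float:
    """Exposure-adjusted rate: events per 100 million vehicle miles traveled."""
    return event_count / vmt * 100_000_000

# Hypothetical baseline (pre-deployment) and post-deployment figures for a corridor.
baseline_fatalities, baseline_vmt = 12, 450_000_000
post_fatalities, post_vmt = 9, 470_000_000

baseline_rate = rate_per_100m_vmt(baseline_fatalities, baseline_vmt)
post_rate = rate_per_100m_vmt(post_fatalities, post_vmt)

# Percent change in the exposure-adjusted rate between the two periods.
percent_change = (post_rate - baseline_rate) / baseline_rate * 100
print(f"Baseline rate: {baseline_rate:.2f} per 100M VMT")
print(f"Post-deployment rate: {post_rate:.2f} per 100M VMT")
print(f"Change: {percent_change:+.1f}%")
```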
Improve Mobility/Reduce Congestion

This section highlights mobility- and congestion-related performance measures. The measures are organized by transportation mode and are generally prioritized within each mode. Grantees' selection of performance measures, however, will depend on the technologies being deployed and what problem(s) they are trying to solve. Careful thought should be given to the specific type of mobility benefits that are anticipated from the technology deployment. Preferred measures include travel time, average speed, and travel time reliability (TTR). While TTR is important to travelers, there is no consensus within USDOT on how to measure it, so this document does not recommend a specific measure. Standard deviation of travel time (or travel time index) is the most common method for measuring TTR, but variance or other measures may also be used. The least preferred measure is vehicle volume or throughput, as it does not directly measure mobility benefits. In developing the list of suggested PMs for measuring ATCMTD mobility impacts (see Table 8), the TPM measures described in the National Performance Management Measures: Assessing Performance of the National Highway System, Freight Movement on the Interstate System, and Congestion Management and Air Quality Improvement (CMAQ) Program rule were incorporated.5 It is anticipated that grantees will collect the data to measure mobility/congestion benefits through field tests (i.e., new data collection), and possibly through modeling or simulation. Surveys may provide a complementary source of data on user experience or satisfaction, but they should not be a substitute for field test data. In most cases, the performance measures can be used at the intersection, corridor, or regional level, and it is important to consider geographic scope when developing performance measures. For technologies deployed at intersections, grantees should consider measuring impacts both at the intersection and at the corridor or regional level, as the impacts may differ (i.e., the problem may have shifted from one intersection to another location). Time of day should also be taken into account. In cases where mobility impacts are anticipated to be greatest during peak hours, the performance measures should focus on those peak hours.
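As an illustration of how travel time reliability might be summarized, the following sketch computes the standard deviation of travel time, a travel time index (mean travel time divided by free-flow travel time), and related reliability statistics from a hypothetical set of corridor travel time observations. The observations, the free-flow value, and the specific index definitions used here are illustrative assumptions, not an official ATCMTD specification.

```python
import statistics

# Hypothetical peak-period travel times (minutes) observed on a corridor.
travel_times = [11.8, 12.4, 13.1, 12.0, 15.6, 12.9, 18.2, 12.5, 13.4, 14.1]
free_flow_time = 10.0  # minutes; assumed free-flow (uncongested) travel time

mean_tt = statistics.mean(travel_times)
std_tt = statistics.stdev(travel_times)               # one common TTR measure
tti = mean_tt / free_flow_time                        # travel time index
pct95 = statistics.quantiles(travel_times, n=20)[18]  # approximate 95th percentile travel time
pti = pct95 / free_flow_time                          # planning time index
buffer_index = (pct95 - mean_tt) / mean_tt            # buffer index

print(f"Mean travel time: {mean_tt:.1f} min, standard deviation: {std_tt:.1f} min")
print(f"Travel time index: {tti:.2f}, planning time index: {pti:.2f}, buffer index: {buffer_index:.2f}")
```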
For evaluations of signalized control (including adaptive signal systems), specific performance measures that capture the ability of the control mechanism to respond to traffic and improve mobility should be considered; one such measure, percent arrivals on green, is sketched below.
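To make this concrete, the sketch below computes percent arrivals on green, a measure commonly reported by ATSPM tools, from hypothetical high-resolution controller event data. The event format, timestamps, and field layout here are assumptions for illustration, not a standard controller log schema.

```python
# Hypothetical green intervals for one signal phase, as (start, end) timestamps in seconds.
green_intervals = [(0, 35), (90, 130), (180, 225), (270, 310)]

# Hypothetical vehicle arrival timestamps from an advance detector (seconds).
arrivals = [5, 20, 40, 60, 95, 110, 150, 185, 200, 240, 275, 300, 320]

def is_green(t: float) -> bool:
    """Return True if timestamp t falls within any green interval for the phase."""
    return any(start <= t < end for start, end in green_intervals)

# Percent arrivals on green: share of detector arrivals occurring while the phase is green.
arrivals_on_green = sum(1 for t in arrivals if is_green(t))
percent_aog = arrivals_on_green / len(arrivals) * 100
print(f"Arrivals on green: {arrivals_on_green}/{len(arrivals)} ({percent_aog:.1f}%)")
```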
Many of these performance measures (and delay and speed measures) can be produced automatically using automated traffic signal performance measure (ATSPM) software. Data from modern traffic controllers can be analyzed using ATSPMs, significantly easing the burden of analysis and visualization for some studies. FHWA promoted ATSPMs as part of the fourth iteration of Every Day Counts (EDC-4). Through a pooled-fund effort, open-source software was developed that can take controller log information and automatically produce a wide variety of performance measures and create visualizations and statistics using those data. Several States have implemented these systems, with Utah DOT among the early adopting agencies (see Utah DOT's ATSPM website: https://udottraffic.utah.gov/atspm/).

Reduce Environmental Impacts

When evaluating environmental impacts, the Program Level Report objectives include reducing transportation-related emissions. Analysis should include applicable mobile-source emissions of regulated pollutants that are known to have adverse public health effects, namely ozone precursors (volatile organic compounds and nitrogen oxides), carbon monoxide, and particulate matter (both PM10 and PM2.5), as well as the applicable precursors from transportation sources. Reductions in energy consumption and carbon dioxide equivalent could also be reported. Chapter 4 provides information about models and tools that can be used for emissions and energy measurement. Additionally, the References section on Emissions and Energy Measurement provides links to useful resources.
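Where fuel consumption data are available, grantees can translate fuel savings into energy and carbon dioxide equivalent terms using published per-gallon conversion factors (see the note on EIA heat content values at the end of this chapter). The sketch below uses approximate, rounded factors for illustration only; current values should be taken from EIA and EPA sources rather than from this example.

```python
# Approximate conversion factors (illustrative; confirm current values with EIA/EPA).
BTU_PER_GALLON = {"gasoline": 120_000, "diesel": 137_000}  # heat content, Btu per gallon
KG_CO2_PER_GALLON = {"gasoline": 8.9, "diesel": 10.2}      # tailpipe CO2, kg per gallon

def fuel_savings_summary(gallons_saved: float, fuel: str) -> dict:
    """Convert an estimated fuel savings into energy (Btu) and CO2 (metric tons)."""
    return {
        "energy_btu": gallons_saved * BTU_PER_GALLON[fuel],
        "co2_metric_tons": gallons_saved * KG_CO2_PER_GALLON[fuel] / 1000,
    }

# Hypothetical example: a deployment estimated to save 25,000 gallons of gasoline per year.
print(fuel_savings_summary(25_000, "gasoline"))
```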
Optimize Multimodal Performance

Given the complex nature of our transportation systems, it is challenging to define and measure optimized multimodal performance. Below are a few suggested performance measures, including both quantitative and qualitative measures, that provide insight on whether the system is progressing toward more optimal multimodal performance.
Improve Access to Transportation Options

Accessibility (or access) can have multiple meanings. While the FAST Act does not explicitly define what it means by access to transportation options, this is typically interpreted as the existence of physical access to goods, services, and destinations (i.e., transportation) and/or the ease of reaching goods, services, activities, and destinations. Access can be measured from the supply side (does the system provide access?) as well as the demand side (do users have access, or ease of access, to transportation options?). Table 11 presents a range of measures related to improved access to transportation options, as defined above. The selection of performance measures will depend on the technologies being deployed and what problem(s) they are trying to solve. A number of the measures are specific to transit; however, others may apply across a range of transportation options, so the evaluation team will need to tailor the performance measures to their specific deployment.
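One common quantitative way to express access is a cumulative opportunities measure, for example the number of jobs reachable within a travel time threshold. The sketch below illustrates that calculation for a hypothetical set of destination zones and transit travel times; the zone data and the 30-minute threshold are assumptions for illustration, not values from this document.

```python
# Hypothetical destination zones: transit travel time from the study area (minutes) and jobs in each zone.
zones = [
    {"zone": "A", "transit_time_min": 12, "jobs": 4_500},
    {"zone": "B", "transit_time_min": 27, "jobs": 12_000},
    {"zone": "C", "transit_time_min": 38, "jobs": 8_200},
    {"zone": "D", "transit_time_min": 55, "jobs": 20_000},
]

THRESHOLD_MIN = 30  # assumed accessibility threshold (minutes)

# Cumulative opportunities: total jobs reachable within the threshold by transit.
jobs_reachable = sum(z["jobs"] for z in zones if z["transit_time_min"] <= THRESHOLD_MIN)
print(f"Jobs reachable within {THRESHOLD_MIN} minutes by transit: {jobs_reachable:,}")
```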
Effectiveness of Providing Real-Time Integrated Transportation Information to the Public to Make Informed Decisions

While a substantial amount of research has been conducted on advanced traveler information systems (ATISs), there is no standard set of performance measures used to gauge the effectiveness of these information systems. Typically, research has relied on counting the number of users and/or surveying users to understand the characteristics of their use (e.g., when, how often, types of information sought), their satisfaction with the system, and the impacts of the ATIS on their travel behavior. For projects that are providing the public with real-time integrated traffic, transit, and multimodal transportation information, use of the ATIS should be measured for all platforms (apps, website, kiosk, etc.). If possible, the types of information that users are accessing should be automatically recorded, along with other aspects of use, such as time of day and amount of time spent accessing the information. These data will provide useful insights; however, they will need to be supplemented with user surveys to understand the effectiveness of the ATIS. The table below provides suggested performance measures.
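Separately from the suggested measures, if usage logs are captured from an ATIS app, website, or kiosk, simple aggregation can produce several of the use-related statistics described above. The sketch below summarizes hypothetical log records by platform and information type; the record format and field names are assumptions for illustration, not a standard log schema.

```python
from collections import Counter

# Hypothetical ATIS usage log records (one per session).
log_records = [
    {"platform": "app", "info_type": "transit arrivals", "duration_sec": 45},
    {"platform": "app", "info_type": "traffic conditions", "duration_sec": 30},
    {"platform": "website", "info_type": "traffic conditions", "duration_sec": 120},
    {"platform": "kiosk", "info_type": "transit arrivals", "duration_sec": 60},
    {"platform": "app", "info_type": "parking availability", "duration_sec": 25},
]

sessions_by_platform = Counter(r["platform"] for r in log_records)
sessions_by_info_type = Counter(r["info_type"] for r in log_records)
avg_duration = sum(r["duration_sec"] for r in log_records) / len(log_records)

print("Sessions by platform:", dict(sessions_by_platform))
print("Sessions by information type:", dict(sessions_by_info_type))
print(f"Average time spent per session: {avg_duration:.0f} seconds")
```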
Cost Savings and Improved Return on Investment

Cost savings may be measured in a variety of ways, and the measures depend on the technology being deployed. Savings may be measured directly in dollars; if measured in time (e.g., staff time), they can be converted to dollar savings. Return on investment can be measured through a benefit-cost analysis (see Chapter 4 for more information).
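As one way to frame a return-on-investment calculation, the sketch below computes a simple benefit-cost ratio, with annual benefits and costs discounted to present value. The discount rate, analysis period, and dollar figures are hypothetical; Chapter 4 describes the benefit-cost analysis approach grantees should actually follow.

```python
def present_value(annual_amounts, discount_rate):
    """Discount a stream of annual amounts (years 1..n) to present value."""
    return sum(amount / (1 + discount_rate) ** year
               for year, amount in enumerate(annual_amounts, start=1))

# Hypothetical deployment: 5-year analysis period, 7% discount rate.
DISCOUNT_RATE = 0.07
annual_benefits = [150_000, 160_000, 170_000, 170_000, 170_000]  # e.g., staff time and delay savings, in dollars
annual_costs = [400_000, 40_000, 40_000, 40_000, 40_000]         # capital in year 1, then operations and maintenance

pv_benefits = present_value(annual_benefits, DISCOUNT_RATE)
pv_costs = present_value(annual_costs, DISCOUNT_RATE)
bcr = pv_benefits / pv_costs            # benefit-cost ratio
npv = pv_benefits - pv_costs            # net present value

print(f"Benefit-cost ratio: {bcr:.2f}, net present value: ${npv:,.0f}")
```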
Other Benefits/Lessons Learned

As needed, ATCMTD grantees should develop additional PMs that measure anticipated benefits not captured in the PMs presented in this chapter. Measures of other benefits may be quantitative or qualitative in nature. At a minimum, any surveys or interviews that are conducted should include an open-ended question that asks whether there are "any other benefits" of the deployment (e.g., in addition to the safety and/or mobility benefits). In addition, grantees should capture "lessons learned" from their deployments. While surveys may be used for this purpose, it is recommended that evaluation teams conduct at least a few interviews with key project stakeholders to gather lessons-learned data. Interviews provide rich, qualitative data and allow the interviewer to probe for more detailed information. Finally, for new and emerging technologies, there may be additional measures that are not captured in the performance areas described above but that are nonetheless important to measure, for example, user experience and/or acceptance. A few example PMs for automated vehicle technologies are provided below (separately for riders and onboard controllers or maintenance staff):

Riders:
Onboard controllers or maintenance staff:
Performance Measure References

Beaulieu, M. et al. (2014). WSDOT's Handbook for Corridor Capacity Evaluation. Olympia, WA: Washington State Department of Transportation. http://wsdot.wa.gov/publications/fulltext/graynotebook/CCR14_methodology.pdf

Easley, R. et al. (2017). Freight Performance Measure Primer. Report No. FHWA-HOP-16-089. Washington, D.C. https://ops.fhwa.dot.gov/publications/fhwahop16089/index.htm

Federal Highway Administration. (2017). National Performance Management Measures to Assess System Performance, Freight Movement, and CMAQ Program. Washington, D.C. https://www.federalregister.gov/documents/2017/01/18/2017-00681/national-performance-management-measures-assessing-performance-of-the-national-highway-system

Federal Highway Administration. (2019). Transportation Performance Management. https://www.fhwa.dot.gov/tpm/, last accessed March 2019.

Krugler, P. et al. (2006). Performance Measurement Tool Box and Reporting System for Research Programs and Projects. College Station, TX: National Cooperative Highway Research Program. http://www.trb.org/Publications/Blurbs/159957.aspx

National Cooperative Highway Research Program. (2008). Cost-Effective Performance Measures for Travel Time Delay, Variation, and Reliability. Washington, D.C. https://www.nap.edu/catalog/14167/cost-effective-performance-measures-for-travel-time-delay-variation-and-reliability

National Transportation Operations Coalition. (2005). Performance Measurement Initiative, Final Report. Washington, D.C. https://transportationops.org/publications/performance-measurement-initiative-final-report

Research and Technology Coordinating Committee. (2012). Identifying Potential Performance Measures for FHWA RD&T. Washington, D.C. https://www.nap.edu/read/22816/chapter/3

Shaheen, S., Cohen, A., Yelchuru, B., & Sarkhili, S. (2017). Mobility on Demand Operational Concept Report. Report No. FHWA-JPO-18-611. Washington, D.C. http://innovativemobility.org/wp-content/uploads/Mobility-on-Demand-Operational-Concept-Report-2017.pdf

Smith, S., Bellone, J., Bransfield, S., Ingles, A., Noel, G., Reed, E., & Yanagisawa, M. (2015). Benefits Estimation Framework for Automated Vehicle Operations. Report No. FHWA-JPO-16-229. Cambridge, MA. https://rosap.ntl.bts.gov/view/dot/4298

Taylor, R. (2010). PennDOT ITS Evaluations and Activities Final Report. Report No. FHWA-PA-2010-001-060908. Camp Hill, PA. http://www.dot7.state.pa.us/BPR_PDF_FILES/Documents/Research/Complete%20Projects/Planning/ITS%20Evaluations%20and%20Activities.pdf

Vasconez, K. (2010). Traffic Incident Management (TIM) Performance Measurement Knowledge Management System. Report No. FHWA-HOP-10-011. Washington, D.C. https://ops.fhwa.dot.gov/publications/fhwahop10011/tim_kms.pdf

Notes

1. In cases where USDOT or other Federal guidance was not available, new measures were designed.

2. See https://www.fhwa.dot.gov/tpm/guidance/safety_performance.pdf for guidance documents on the Safety Performance Management Final Rule.

3. Secondary crashes refer to the number of additional crashes, starting from the time of detection of the primary incident, either within the incident scene or its queue (including the opposite direction), resulting from the original incident (Vasconez 2010).

4. Grantees may also consider the use of exposure-adjusted rates for pedestrian or bicyclist measures (e.g., change in bicycle crashes per 1,000 cyclists); however, since many agencies do not routinely capture the relevant exposure data, this may require a special data collection effort during both the baseline and post-deployment periods.

5. See https://www.fhwa.dot.gov/tpm/rule.cfm

6. Mobility measures described above, such as travel time, average speed, and delay, could also apply to freight. In addition, see FHWA's Freight Performance Measure Primer.

7. Incident clearance time is defined as the span of time (in minutes) between the first recordable awareness of an incident by a responsible agency and the time at which the last responder has left the scene (Vasconez 2010).

8. This metric is used for Transportation Conformity analyses and for the CMAQ Total Emissions Reduction Performance Measure.

9. Use U.S. Energy Information Administration (EIA) data to obtain Btu or kJ per gallon of diesel or gasoline.