
Evaluation Methods and Techniques: Advanced Transportation and Congestion Management Technologies Deployment Program

Chapter 2: Evaluation Overview

Why Evaluate?

An evaluation is a systematic assessment of how well a project or program is meeting established goals and objectives. Evaluations involve collecting and analyzing data to inform specific evaluation questions related to project impacts and performance.1 This performance information enables project managers to:

  • Report progress and make improvements, as necessary, to ensure the achievement of longer-term impacts
  • Assess and communicate the effectiveness of new technologies

Evaluations can be used at different points in the project lifecycle. For example, some evaluations are conducted during implementation to assess whether a technology is operating as planned, while others are conducted post-implementation to assess the outcomes and impacts of a technology. Figure 1 shows where ATCMTD evaluation activities fit in the project lifecycle. During the pre-implementation phase, as the project design is underway, evaluation planning must also be conducted. The remainder of this chapter describes these key evaluation planning activities. During the implementation phase, as the technology is being tested and fully implemented, the data collection methods should also be tested and any baseline data collection should be completed (baseline data may also have been collected during pre-implementation). Once the technology has been implemented, post-deployment data are collected for the duration of the evaluation period. Grantees should report interim as well as final evaluation/performance measurement findings in their Annual Reports (see Appendix B for the Annual Report template).

Figure 1. Graphic. Project Lifecycle (project lifecycle activities, evaluation activities, and ATCMTD outputs).2

ATCMTD evaluations can largely be characterized as outcome evaluations. Outcome evaluations focus on whether a program or project has achieved its results-oriented objectives. However, the ATCMTD grantees should consider ways to measure interim progress toward their outcomes. Early measurement will inform interim improvements, as necessary, and also provide input into the required Annual Reports that document the benefits, costs, and effectiveness (among other measures) of the technologies being deployed.

Evaluations should be systematically planned and executed to ensure findings are credible and actionable. The remainder of this section describes this systematic approach to an evaluation. When planning evaluations, constraints that may impact the ability to conduct evaluation activities should be taken into account. In particular, evaluations should consider the financial and staff resources available for the assessment.

Assembling an Evaluation Team

Independent evaluators bring:

  • Objectivity
  • Technical expertise

They help ensure the results are:

  • Credible
  • Unbiased

The first step in conducting a project evaluation is assembling an evaluation team. Evaluations can be conducted using an internal evaluation team, independent evaluators, or a mix of both. Evaluators should be brought on board as early as possible so that the evaluation can be designed as the deployment is being planned and so that the project generates sufficient data to support the evaluation. Given the reporting requirements in the FAST Act, it is recommended that an independent evaluator be used to design and manage ATCMTD evaluations.

Due to the complex nature of ATCMTD systems and technologies, evaluators should work closely with the ATCMTD project team.3 Evaluators should have regular access to the project team members who are implementing the technology and collecting the data. The project team should set up regular opportunities for the evaluators to work with data providers during and after the data collection period. Data issues are common, and it is best to troubleshoot these issues collaboratively.

Evaluation Planning Process

Developing an evaluation plan puts grantees in the best position to identify and collect the data needed to assess the impacts of their ATCMTD technology deployments. This plan is a blueprint for the evaluation; it includes the specifics of the evaluation design and execution, as well as a description of the project and its stakeholders. Table 1 describes the activities involved in evaluation planning and execution, each of which will be discussed in this chapter. Several templates are also included to assist grantees in structuring and documenting their evaluation and performance measurement plans.

Table 1. Evaluation Planning and Execution.
Evaluation Planning:
  • Set evaluation goals and objectives
  • Develop evaluation questions
  • Identify performance measures
  • Develop evaluation design
  • Develop data management procedures
  • Design analysis plan

Evaluation Execution:
  • Test data collection methods
  • Acquire or collect data
  • Analyze data and draw conclusions
  • Develop Annual Reports

Set Evaluation Goals/Objectives

An evaluation should be guided by an agreed-upon set of project goals and objectives that drive the evaluation design. These goals and/or objectives should represent the core of what the project is trying to achieve. A logic model can be a helpful tool for evaluation teams to use as they identify goals, objectives, and related information needs. A logic model is a systematic and visual way to present and share your understanding of the relationships among the project resources, the planned activities, and the changes or results that the project hopes to achieve. In short, a logic model illustrates how the program's activities can achieve its goals. A logic model generally includes: resources or inputs, activities, outputs, outcomes, and impacts (see Figure 2).

Figure 2. Graphic. Project or Program Logic Model.4

Additional details on logic models can be found at the following link:
W.K. Kellogg Foundation: https://www.wkkf.org/resource-directory/resource/2006/02/wk-kellogg-foundation-logic-model-development-guide.
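
Where it is helpful, the components of a logic model can also be recorded in a simple structured format so that evaluation questions and measures can later be traced back to specific outcomes. The following is a minimal sketch in Python using hypothetical entries for a notional adaptive signal control deployment; the entries are illustrative only and are not prescribed by the program.

    # Minimal sketch of a logic model captured as structured data.
    # All entries are hypothetical examples for a notional adaptive
    # signal control deployment.
    logic_model = {
        "resources_inputs": ["Grant funding", "Agency staff", "Signal infrastructure"],
        "activities": ["Install adaptive signal control at 20 intersections",
                       "Integrate signal data with the traffic management center"],
        "outputs": ["Number of intersections upgraded", "System uptime"],
        "outcomes": ["Reduced travel delay along the corridor",
                     "Improved travel time reliability"],
        "impacts": ["Improved regional mobility", "Reduced emissions"],
    }

    def describe(model):
        """Print the logic model components in order, from resources
        through activities to outputs, outcomes, and impacts."""
        for component in ["resources_inputs", "activities", "outputs",
                          "outcomes", "impacts"]:
            print(component.replace("_", "/").title() + ":")
            for item in model[component]:
                print("  - " + item)

    describe(logic_model)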

ATCMTD project goals align with the priorities established in the FAST Act. These priorities relate to the use of advanced transportation technologies to improve safety, mobility, environmental outcomes, system performance, and infrastructure return on investment.

Table 2 includes some of the priority goal areas listed in the FAST Act (i.e., as described in 23 U.S.C. 503(c)(4)(F) and 23 U.S.C. 503(c)(4)(G), which outline the requirements for the Annual Reports and the Program Level Reports, respectively), along with potential objectives that should be considered in the development of project goals/objectives (see Chapter 3 for a set of recommended performance measures for each goal area).

Table 2. ATCMTD FAST Act Goals and Objectives.
Goal Area Objectives
Improve safety
  • Reduce traffic-related fatalities
  • Reduce traffic-related injuries
  • Reduce traffic crashes
Reduce congestion/Improve mobility
  • Reduce traffic congestion
  • Reduce travel delay
  • Improve travel time, speeds, or travel reliability
Reduce environmental impact
  • Reduce transportation-related emissions
Optimize system performance
  • Optimize multimodal system performance
  • Optimize system efficiency
Improve access to transportation alternatives
  • Improve access to transportation alternatives
Improve effectiveness of real time integrated transportation information
  • Provide the public with access to real-time integrated traffic, transit, and multimodal transportation information to make more informed travel decisions
Reduce costs/Improve return on investment (ROI)
  • Provide cost savings to transportation agencies, businesses, and the traveling public
  • Demonstrate that benefits outweigh the costs
Share institutional or administrative benefits
  • Develop Lessons Learned and Recommendations for future deployment strategies
Other benefits
  • Provide other benefits to transportation users and the general public

Develop Evaluation Questions

Once goals and objectives have been established, specific research questions (or hypotheses) can be developed. These questions will be addressed through data collection, analysis, and interpretation. There should be at least one (and ideally several) evaluation questions in support of each goal. When designing evaluation questions, consider the following guidance:

  • Design questions that are specific about the change in safety, system performance, agency efficiency, behavior, etc. that is expected as a result of the project activity.
  • Avoid using polar questions (i.e., yes-no response).
  • Address one aspect of performance with each question; use multiple evaluation questions rather than a few general ones.
  • Use simple, straightforward language.

Generally, evaluation questions indicate, either explicitly or implicitly, a desired outcome or impact (e.g., reduced traffic crashes, improved travel time reliability, etc.). If the desired outcome or impact is not achieved, however, the evaluation should describe the actual results and address reasons (or potential reasons) that may account for the difference between the desired and the actual results.

Table 3 provides a template for how to organize evaluation goals, objectives, and questions (a limited set of examples is included for illustrative purposes only).

Table 3. Template with Example Evaluation Goals, Objectives, and Evaluation Questions.
Note: Examples are included for illustrative purposes only.
Improve Safety
  Objective: Reduce traffic crashes
  Evaluation Questions:
    • To what extent has connected vehicle (CV) application X reduced traffic crashes along corridor Y?
    • What proportion of drivers using CV application X rated the safety warnings as helpful?

Reduce Congestion/Improve Mobility
  Objective: Improve travel times
  Evaluation Question: What impact did adaptive signal control have on travel times along corridor Y?

Improve Effectiveness of Real-Time Integrated Transportation Information
  Objective: Provide the public with access to real-time integrated traffic, transit, and multimodal transportation information to make more informed travel decisions
  Evaluation Question: Did a majority of application users indicate that the travel time information helped improve their commute decision-making?

Cost Savings and Improved Return on Investment
  Objective: Provide cost savings to transportation agencies
  Evaluation Question: What was the benefit-cost ratio of the adaptive signal control deployment?

Share Institutional Insights
  Objective: Lessons learned
  Evaluation Question: What lessons learned did project managers identify to facilitate future successful deployments of CV?

Identify Performance Measures

As grantees develop their evaluation questions, it is important to begin identifying the performance measures or information that will address each evaluation question. The performance measures will be used to assess whether improvements and progress have been made on the safety, mobility, environmental, and other goal areas of the ATCMTD Program (as described in the FAST Act).

In developing performance measures:

  • Determine if the information needed is qualitative or quantitative in nature.
  • To the extent possible, select quantitative measures that can be monetized for use in benefit-cost analysis (see Chapter 4 on benefit-cost analysis for more information).
  • Ensure that the data necessary for the measures can be collected (or otherwise acquired).
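
To make these considerations concrete, the following minimal sketch computes two illustrative quantitative measures (percent change in average travel time and crashes per 100 million vehicle miles traveled) from hypothetical before/after values; the measures and numbers are examples only, not required ATCMTD measures.

    # Minimal sketch: computing two illustrative performance measures
    # from hypothetical before/after data.

    def percent_change(before, after):
        """Percent change from the before value to the after value."""
        return 100.0 * (after - before) / before

    def crash_rate_per_100m_vmt(crashes, vehicle_miles_traveled):
        """Crashes per 100 million vehicle miles traveled (VMT)."""
        return crashes / vehicle_miles_traveled * 100_000_000

    # Hypothetical corridor data for illustration.
    avg_travel_time_before_min = 18.4
    avg_travel_time_after_min = 16.1
    crashes_before, vmt_before = 42, 155_000_000
    crashes_after, vmt_after = 35, 160_000_000

    print("Change in average travel time: %.1f%%"
          % percent_change(avg_travel_time_before_min, avg_travel_time_after_min))
    print("Crash rate before: %.1f per 100M VMT"
          % crash_rate_per_100m_vmt(crashes_before, vmt_before))
    print("Crash rate after: %.1f per 100M VMT"
          % crash_rate_per_100m_vmt(crashes_after, vmt_after))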

Chapter 3 provides additional guidance on performance measures, including recommended measures specific to fulfilling the requirements set forth in the FAST Act.

Develop Evaluation Design

While identifying the evaluation questions and performance measures, grantees should also be developing an appropriate evaluation design that describes how, within the constraints of time and cost, they will collect data that address the evaluation questions. This process entails identifying the experimental design, the sources of information or methods used for collecting the data, and the resulting data elements.

Experimental Design

The experimental design frames the logic for how the data will be collected. Evaluations of technology deployments often utilize a before-after design, whereby pre-deployment data (i.e., baseline data) are compared to data collected following the deployment of the technology. For certain evaluation questions, however, it may be appropriate to collect data only during the "after" period. For example, for measures related to user satisfaction with a technology, the design could include surveys only in the post-deployment period.

More robust designs, such as randomized experimental and quasi-experimental designs, utilize a control group that does not receive the "treatment" of a program's activities to account for potential confounding factors (see Data Limitations or Constraints for more information on confounding factors). The same data collection procedures are used for both the treatment and control groups, but the expectation is that the hypothesized outcome (improved safety, mobility, etc.) occurs only within the treatment group and not the control group.
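
As an illustration of the control group logic, the sketch below computes a simple difference-in-differences estimate from hypothetical corridor travel times; the corridors, the values, and the choice of a difference-in-differences estimator are assumptions for illustration only, not a required ATCMTD design.

    # Minimal sketch of a difference-in-differences comparison using
    # hypothetical average travel times (minutes) for a treatment corridor
    # (where the technology was deployed) and a control corridor (no deployment).

    treatment_before, treatment_after = 18.4, 16.1   # deployment corridor
    control_before, control_after = 17.9, 17.6       # comparison corridor

    # Raw before-after change on the treatment corridor (may include the
    # effect of confounding factors such as demand or weather changes).
    treatment_change = treatment_after - treatment_before

    # Change on the control corridor captures trends unrelated to the deployment.
    control_change = control_after - control_before

    # Difference-in-differences: change attributable to the deployment,
    # assuming both corridors experienced the same external trends.
    did_estimate = treatment_change - control_change

    print("Treatment corridor change: %.1f minutes" % treatment_change)
    print("Control corridor change: %.1f minutes" % control_change)
    print("Estimated deployment effect: %.1f minutes" % did_estimate)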

Evaluation designs are applied to the different methods or information sources (see next section) that are utilized in the evaluation.

Data Collection Methodology

The evaluation team should consider the appropriate method(s) for addressing each of its evaluation questions. For any given evaluation question, there may be multiple methods used to address it. For example, agency efficiency evaluation questions may include an analysis of agency operations data, as well as qualitative interviews with agency personnel. The same method may also be used to address multiple evaluation questions; for example, vehicle field test data (e.g., CV data) may be used to inform both mobility- and safety-related evaluation questions.

When developing data collection methods, thought should be given to the specific data elements that will be gathered from each method, and whether those data elements meet the needs of the evaluation (e.g., address the evaluation questions, are available in the units required for the performance metric, etc.). Data elements will be either quantitative or qualitative, and can take many forms (e.g., speed data, crash data, survey responses, interview responses, etc.).

Table 4 highlights examples of key methods, their data sources, and data collection considerations for each method.

Table 4. Examples of Data Collection Methods.
Field Test
  Data Sources:
    • Roadside infrastructure (sensors, DSRC, etc.)
    • Vehicle probes (e.g., CV or AV data)
  Data Collection Considerations:
    • Field test location/scope
    • Data collection period
    • Data elements to be collected, including unit of analysis
    • Data collection frequency/interval (hourly, daily, etc.)
    • Data requirements related to modeling or simulation (if applicable)
    • Data management (e.g., storage, quality control)
    • Data security (e.g., protecting privacy)

Surveys or Interviews5
  Data Sources:
    • Survey responses
    • Interview responses
  Data Collection Considerations:
    • Target population and sampling procedures
    • Participant recruitment/contact procedures
    • Expected sample size
    • Methods for encouraging survey response
    • Survey administration period
    • Key topics to be addressed in survey and/or interview guides

Internal Agency Data
  Data Sources:
    • Information management systems
    • Operations data (e.g., response times, system downtime, maintenance data), website tracking, reports, documents, etc.
  Data Collection Considerations:
    • Data collection period
    • Data elements to be collected, including unit of analysis
    • Frequency/interval (hourly, daily, etc.)
    • Accuracy/completeness of internal agency data

Data Limitations or Constraints

Example Confounding Factors:

  • Changes in travel demand
  • Weather
  • Traffic incidents
  • Construction
  • Changes in gas prices
  • Changes in the economy (e.g., loss or growth in jobs)
  • Changes in legislation

For each evaluation question, it is important to consider any limitations or constraints that may affect your ability to collect the data or may affect the data collected. Examples of constraints include:

  • Technology functionality problems,
  • Low survey participation,
  • Poor agency documentation, and
  • Limited data collection period.

Identifying ways to mitigate these data limitations or constraints will enhance the ability to collect useful data.

The evaluation team also should consider whether there are confounding factors that may impact the evaluation and should track such factors for the duration of the evaluation. A confounding factor is a variable that completely or partially accounts for the apparent association between an outcome and a treatment. Confounding factors are usually external to the evaluation; hence, they may be unanticipated or difficult to monitor. If grantees are using a before-after design without a control (i.e., a non-experimental design), it is particularly important to consider potential confounding factors that may be the cause of a change in the before-after data. Grantees should avoid attributing a change in outcomes to the technology deployment when in fact it is due to some other factor. Potential mitigation approaches should also be identified for each confounding factor.
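
One lightweight way to track confounding factors is to keep a dated log of external events alongside the performance data so that affected periods can be flagged during analysis. The sketch below is a minimal illustration with hypothetical events and dates.

    # Minimal sketch: a dated log of potential confounding factors and a
    # helper that flags observation dates affected by them. Events and
    # dates are hypothetical.
    from datetime import date

    confounder_log = [
        {"factor": "Snowstorm", "start": date(2019, 2, 1), "end": date(2019, 2, 3)},
        {"factor": "Lane closure for construction",
         "start": date(2019, 2, 18), "end": date(2019, 3, 1)},
    ]

    def confounders_on(day):
        """Return the confounding factors recorded for a given observation date."""
        return [entry["factor"] for entry in confounder_log
                if entry["start"] <= day <= entry["end"]]

    # Example: flag an observation date before including it in a before-after comparison.
    obs_date = date(2019, 2, 2)
    flags = confounders_on(obs_date)
    if flags:
        print(obs_date, "affected by:", ", ".join(flags))
    else:
        print(obs_date, "no confounding factors logged")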

As grantees are thinking through the key components of their evaluation, including the evaluation questions, performance measures, data sources, data collection methodology, and data limitations, it is recommended that they document this information in the Evaluation Plan. The following template (Table 5) is designed to provide grantees with a useful tool for summarizing this evaluation information.

Table 5. Example Methodology Template.
Note: Examples are included for illustrative purposes only.

Evaluation Question/Hypothesis: What proportion of drivers using CV application X rated the safety warnings as helpful?
  Performance Measure: Percent of respondents who felt the safety warning was helpful
  Information Source/Method: Survey
  Data Element: Responses to the post-deployment survey
  Limitations/Constraints: Low response rate may be an issue

Evaluation Question/Hypothesis: What impact did adaptive signal control have on travel times along corridor Y?
  Performance Measure: Percent change in average travel times
  Information Source/Method: Field test (vehicle probe data)
  Data Element: Pre-post comparison of vehicle probe data
  Limitations/Constraints: Weather and incidents may affect measurement

Evaluation Question/Hypothesis: What lessons learned did project managers identify to facilitate future successful deployments of CV?
  Performance Measure: Lessons learned
  Information Source/Method: Interviews
  Data Element: Responses to questions about lessons learned
  Limitations/Constraints: Findings for one project may not generalize to other locations

Evaluation Question/Hypothesis: What was the benefit-cost ratio of the adaptive signal control deployment?
  Performance Measure: Net present value
  Information Source/Method: Benefit-cost analysis
  Data Element: Monetized estimates of project impacts
  Limitations/Constraints: Incomplete data; some impacts are difficult to quantify
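
Because the last example question in Table 5 involves benefit-cost analysis, which is covered in detail in Chapter 4, the following minimal sketch shows, for orientation only, one way a net present value and benefit-cost ratio might be computed from hypothetical annual benefit and cost streams; the discount rate and dollar values are assumptions for illustration.

    # Minimal sketch: net present value (NPV) and benefit-cost ratio (BCR)
    # from hypothetical annual benefit and cost streams. The discount rate
    # and dollar values are illustrative assumptions only.

    discount_rate = 0.07
    annual_benefits = [0, 250_000, 300_000, 320_000, 330_000]   # year 0..4, dollars
    annual_costs = [900_000, 60_000, 60_000, 60_000, 60_000]    # deployment + O&M, dollars

    def present_value(stream, rate):
        """Discount an annual stream (year 0 first) to present value."""
        return sum(value / (1 + rate) ** year for year, value in enumerate(stream))

    pv_benefits = present_value(annual_benefits, discount_rate)
    pv_costs = present_value(annual_costs, discount_rate)

    npv = pv_benefits - pv_costs
    bcr = pv_benefits / pv_costs

    print("NPV: $%.0f" % npv)
    print("Benefit-cost ratio: %.2f" % bcr)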

For projects where data collection location, frequency, etc. may vary across the different technologies being deployed, it may be useful to document these data collection characteristics or procedures. See Table 6 below, which includes an example for illustrative purposes only.

Table 6. Template for Data Collection Procedures.
Data Element: Traffic volumes
  Data Collection Frequency/Interval: 5 minutes
  Location: US 75 corridor
  Data Collection Period: January 1, 2019 (start) to March 31, 2019 (end)
  Data Collection Responsible Party: NJDOT
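
Documenting the interval and period as in Table 6 also makes it possible to verify, once data arrive, that the expected number of records was actually collected. The sketch below is a minimal completeness check built on the hypothetical 5-minute traffic volume example; the dates and counts are illustrative only.

    # Minimal sketch: check that the number of 5-minute traffic volume
    # records received matches the documented collection period.
    # The period, interval, and received count are hypothetical.
    from datetime import date

    start, end = date(2019, 1, 1), date(2019, 3, 31)
    interval_minutes = 5

    days_in_period = (end - start).days + 1               # inclusive of both end dates
    expected_records = days_in_period * 24 * 60 // interval_minutes
    received_records = 25_480                             # e.g., count of rows delivered

    completeness = 100.0 * received_records / expected_records
    print("Expected records: %d" % expected_records)
    print("Received records: %d (%.1f%% complete)" % (received_records, completeness))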

Develop Data Management Procedures

In most cases, grantees will be collecting significant amounts of data to support their evaluation and operations, and there are a number of data-related issues that need to be considered during evaluation planning. Management of data collected during the ATCMTD project may be documented in the Evaluation Plan, but grantees are strongly encouraged to develop a separate data management plan (DMP) during the pre-implementation phase that describes how the project team will handle data both during and after the project. This DMP can be updated with more information as the project proceeds.

In planning for data management, grantees should consider how data will be captured, transferred, stored, and protected. The evaluation team will need to work closely with the project team to ensure that these protocols are put in place prior to the data collection period. Data management protocols include:

  • Processes to log and transfer data to the evaluation team
  • Data quality control procedures (e.g., data cleaning, etc.)
  • Standards used for data and metadata format and content
  • Plans for data storage/archiving
  • Plans for data documentation (e.g., data dictionary)
  • Responsibilities of data manager
  • Data protection procedures
  • Data access and sharing
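
As one example of the data quality control procedures listed above, the following minimal sketch applies a few basic checks (missing values, out-of-range values, duplicate timestamps) to hypothetical traffic volume records; the field names and thresholds are assumptions for illustration, not program requirements.

    # Minimal sketch of data quality control checks on hypothetical
    # 5-minute traffic volume records. Field names and thresholds are
    # illustrative assumptions.

    records = [
        {"timestamp": "2019-01-01T00:00", "volume": 42},
        {"timestamp": "2019-01-01T00:05", "volume": 39},
        {"timestamp": "2019-01-01T00:05", "volume": 41},       # duplicate timestamp
        {"timestamp": "2019-01-01T00:10", "volume": None},     # missing value
        {"timestamp": "2019-01-01T00:15", "volume": 12000},    # out of plausible range
    ]

    MAX_PLAUSIBLE_VOLUME = 1000  # assumed upper bound for a 5-minute count

    def quality_check(rows):
        """Return cleaned rows and a log of records dropped and why."""
        clean, log, seen = [], [], set()
        for row in rows:
            if row["volume"] is None:
                log.append((row["timestamp"], "missing volume"))
            elif not (0 <= row["volume"] <= MAX_PLAUSIBLE_VOLUME):
                log.append((row["timestamp"], "volume out of range"))
            elif row["timestamp"] in seen:
                log.append((row["timestamp"], "duplicate timestamp"))
            else:
                seen.add(row["timestamp"])
                clean.append(row)
        return clean, log

    clean_rows, issue_log = quality_check(records)
    print("Kept %d of %d records" % (len(clean_rows), len(records)))
    for timestamp, reason in issue_log:
        print("Dropped", timestamp, "-", reason)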

Grantees must provide USDOT the results of their evaluation via their Annual Reports required by the FAST Act (for template, see Appendix B) and this should be reflected in their DMPs. Although not required, USDOT encourages grantees to make other relevant data available to the USDOT and the public to further advance the objectives of the ATCMTD program. For example, projects may provide the USDOT access to the underlying data used to determine the costs and benefits described in the report. The DMP should indicate whether project data contain confidential business information or personally identifiable information (PII), and whether such data will be shared in a controlled-access environment or removed prior to providing public or USDOT access.

Additional voluntary guidance on creating DMPs can be found at the following link: https://ntl.bts.gov/public-access/creating-data-management-plans-extramural-research.

Design Analysis Plan

Grantees are encouraged to develop an analysis plan that describes how the evaluation data are going to be organized and analyzed. The analysis plan may be documented as a section of the Evaluation Plan, in the DMP, or a separate document.

The analyses must be structured to answer the questions about whether change occurred and whether these changes can be attributed to the deployment. During evaluation planning, the evaluation team must determine the types of analyses that it plans to conduct (e.g., statistical procedures), so that the evaluation can be designed to produce the required data. For each of the evaluation questions, the evaluation plan should provide sufficient detail on how the data will be analyzed.

Since evaluation data may come from multiple sources (e.g., field tests, surveys, interviews, historical data), different types of analyses may be used in an evaluation. Analysis methods may include descriptive statistics and statistical comparisons, as well as qualitative summaries and comparisons (e.g., based on interview data). Modeling or simulation may also be used as analytic methods.
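
For quantitative data, the statistical comparison described above often reduces to testing whether the before and after distributions differ. The sketch below, using hypothetical travel time samples and a two-sample t-test from SciPy, is one possible illustration; the data, the significance level, and the choice of test are assumptions that would need to match the actual evaluation design.

    # Minimal sketch: descriptive statistics and a two-sample t-test comparing
    # hypothetical before and after travel time samples (minutes).
    # Requires scipy; data and alpha are illustrative.
    from statistics import mean, stdev
    from scipy import stats

    before = [18.2, 19.1, 17.8, 18.9, 20.3, 18.5, 19.4, 18.0]
    after = [16.4, 15.9, 17.1, 16.8, 15.5, 16.2, 17.0, 16.1]

    print("Before: mean %.1f, sd %.1f" % (mean(before), stdev(before)))
    print("After:  mean %.1f, sd %.1f" % (mean(after), stdev(after)))

    # Welch's t-test (does not assume equal variances).
    result = stats.ttest_ind(before, after, equal_var=False)
    print("t = %.2f, p = %.4f" % (result.statistic, result.pvalue))

    alpha = 0.05
    if result.pvalue < alpha:
        print("Difference is statistically significant at the %.2f level." % alpha)
    else:
        print("No statistically significant difference detected.")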

Execute the Evaluation Plan

Executing the evaluation includes the collection of the data, the analysis of the data, and the development of findings.

Acquire or Collect Data

During data collection, the project team is capturing the data that have been identified in the evaluation plan. As detailed in previous sections, this may include system performance data, vehicle or infrastructure data, and survey responses, among other data elements.

Pilot Studies

Prior to the start of data collection, it is advisable to conduct a data collection pilot that tests the end-to-end data collection pipeline, particularly for new systems or tools (i.e., where there is no previously established data collection mechanism). For example, for Automated or Connected Vehicle projects involving the collection of vehicle data, the pilot test should include logging data in its final format, offloading the data from the technology/vehicles/equipment, processing it, and transmitting it to where the evaluators will use it. Evaluators should be part of this feedback loop to make sure that the data are acceptable, including providing feedback on the format of sample data sets prior to the end-to-end test. In addition to a pilot study (that tests the data collection protocols), system acceptance testing should also be conducted, whereby the project team assesses whether or not the technology functions as designed.

For projects involving surveys, a pilot involves testing the completed survey with a small set of respondents prior to the full launch. This will enable the project and evaluation teams to work through any issues with question relevance or interpretability, survey length, or other problems (e.g., data coding, processing, and storage) prior to full survey launch. This ensures that once data collection begins, the evaluators are confident that the data will meet their evaluation needs.

During the data collection pilot, complete data documentation should be generated to accompany the data. This is a general best practice, but it is particularly important if a third-party evaluator will be conducting the evaluation, staff turnover may occur on the project, or the data will be made available to others in the future. At a minimum, data documentation should include:

  • Data dictionaries, including definitions of each data element, enumeration codes, units, default values, etc.
  • Contextual descriptions of the data from each source (e.g., how the data were collected and why someone might want to use the data in a given table).
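
A data dictionary entry can be captured as a simple structured record per data element. The sketch below shows one possible format in Python, with hypothetical fields and values; the exact fields should follow whatever documentation standard the project adopts.

    # Minimal sketch of data dictionary entries for two hypothetical data
    # elements. Field choices are illustrative, not a required format.
    data_dictionary = [
        {
            "element": "volume",
            "definition": "Vehicle count at the detector during the interval",
            "units": "vehicles per 5 minutes",
            "type": "integer",
            "valid_range": (0, 1000),
            "default": None,
            "source": "Roadside detector feed, US 75 corridor",
        },
        {
            "element": "warning_helpful",
            "definition": "Survey response: was the safety warning helpful?",
            "units": "enumeration",
            "type": "integer",
            "enumeration": {1: "Yes", 2: "No", 9: "No response"},
            "default": 9,
            "source": "Post-deployment driver survey",
        },
    ]

    for entry in data_dictionary:
        print("%s - %s (%s)" % (entry["element"], entry["definition"], entry["units"]))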

Where possible, grantees should leverage insights from previous projects, including USDOT-funded intelligent transportation systems (ITS) research, to determine the right data formats and documentation to support evaluation. For example, data and documentation from past and current ITS research projects can be found through the USDOT's ITS DataHub at https://www.its.dot.gov/data/.

Analyze Data and Draw Conclusions

Data analysis techniques and methods will vary greatly, depending on the evaluation design and the types of data collected. For all deployments, however, the analyses must be structured to answer two questions:

  1. Did the desired changes (i.e., in safety, mobility etc.) occur?
  2. If changes occurred, were they the result of the deployment?

During evaluation planning, the evaluation team must determine the types of analyses that it plans to conduct (e.g., statistical procedures), so that the evaluation can be designed to produce the required data.

Develop Annual Report(s)

The FAST Act requires that grantees submit Annual Reports. This Evaluation Methods and Techniques document provides guidance on how to structure an evaluation that will produce the data needed to meet this reporting requirement. According to the FAST Act (23 U.S.C. 503(c)(4)(F)), "For each eligible entity that receives a grant under this paragraph, not later than 1 year after the entity receives the grant, and each year thereafter, the entity shall submit a report to the Secretary that describes -

  1. Deployment and operational costs of the project compared to the benefits and savings the project provides; and
  2. How the project has met the original expectations projected in the deployment plan submitted with the application, such as -
    1. Data on how the project has helped reduce traffic crashes, congestion, costs, and other benefits of the deployed systems;
    2. Data on the effect of measuring and improving transportation system performance through the deployment of advanced technologies;
    3. The effectiveness of providing real-time integrated traffic, transit, and multimodal transportation information to the public to make informed travel decisions; and
    4. Lessons learned and recommendations for future deployment strategies to optimize transportation efficiency and multimodal system performance."

An Annual Report template has been designed to assist grantees in meeting their annual reporting requirement (see Appendix B). While evaluation-related activities are underway, grantees are asked to provide annual updates on their activities, organized by specific goal areas. In addition to a general summary of evaluation-related activities, these updates may include the status of baseline data collection (if applicable), data collection challenges, and evaluation milestones, among other information. Once data collection is completed, grantees are asked to report on their findings for each relevant goal area, and to note any particularly innovative or noteworthy findings. In order to collect information specified in the FAST Act, the template includes additional questions on how the project has met original expectations, a comparison of the benefits and costs of the project, lessons learned, and recommendations for deployment strategies.

Evaluation References

Administration for Children and Families, Office of Planning, Research and Evaluation. (2010). The Program Manager's Guide to Evaluation, Second Edition. Washington, D.C.

Barnard, Y. (2017). D5.4 Updated Version of the FESTA Handbook. Leeds, UK: FOT NET Data.

Dillman, D. A., Smith, J. D., & Christian, L. M. (2014). Internet, Phone, Mail and Mixed-Mode, Fourth Edition. Hoboken: John Wiley & Sons.

Gay, K., & Kniss, V. (2015). Safety Pilot Model Deployment: Lessons Learned and Recommendations for Future Connected Vehicle Activities. Washington, D.C.: Intelligent Transportation System Joint Program Office.

Groves, R. M., Fowler, Jr, F. J., Couper, M. P., Lepkowski, J. M., Singer, E., & Tourangeau, R. (2009). Survey Methodology, Second Edition. Hoboken: John Wiley & Sons, Inc.

Marsden, P. V., & Wright, J. D. (2010). Handbook of Survey Research, Second Edition. Bingley: Emerald Group Publishing Limited.

Smith, S., & Razo, M. (2016). Using Traffic Microsimulation to Assess Deployment Strategies for the Connected Vehicle Safety Pilot. Journal of Intelligent Transportation Systems, 66-74.

W. K. Kellogg Foundation. (2004). Logic Model Development Guide (Figure 2, How to Read a Logic Model). Battle Creek, MI. Obtained from: https://www.wkkf.org/resource-directory/resource/2006/02/wk-kellogg-foundation-logic-model-development-guide

1 Evaluations commonly use evaluation questions or evaluation hypotheses to link project performance to goals and objectives. For simplicity, this document describes the use of evaluation questions. [ Return to Note 1 ]

2 Source: Volpe National Transportation Systems Center, United States Department of Transportation. [ Return to Note 2 ]

3 The "project team" refers to the team members involved in deploying the technology and may include staff from different organizations. The "evaluation team" refers to those who design and conduct the evaluation. [ Return to Note 3 ]

4 W. K. Kellogg Foundation (2004) [ Return to Note 4 ]

5 See Chapter 4 on Survey and Interview Methods. [ Return to Note 5 ]
