Evaluation Methods and Techniques: Advanced Transportation and Congestion Management Technologies Deployment Program

Chapter 4: Methods and Analytic Techniques

This chapter includes three sections: Benefit-Cost Analysis (BCA), Survey and Interview Methods, and Emissions and Energy Estimates.

Benefit-Cost Analysis

This section provides an overview of BCA and how it might be applied to ATCMTD evaluations. The ATCMTD program requires analysis of "deployment and operational costs of the project compared to the benefits and savings the project provides." Although different methodologies might be used for measuring these impacts, the preferred method is BCA because it provides a comprehensive accounting using a well-established analytical approach.

BCA is a systematic process by which the impacts of a project (or other action) are estimated and quantified by comparing the benefits of the project, as they accrue both to direct users and to society as a whole, against project costs over a specified time period. Conducting BCA as part of a project evaluation serves three primary purposes:

  • Accountability. BCA allows diverse project outcomes to be compared and evaluated using a consistent measure.
  • Knowledge Transfer. A BCA provides useful insight and information on costs and benefits that may be used by other cities considering similar projects.
  • Improved Future Analyses. These analyses will help improve and calibrate the expected benefits and costs, particularly from innovative technologies, used in future ex ante BCAs. This in turn will support well-informed decision-making on future transportation projects.

In outlining goals, objectives, and performance measures, grantees can address return on investment by incorporating BCA as the analytic method (see Table 3 in the Evaluation Overview chapter for an example). In cases where grantees are deploying a range of different technologies and may not have sufficient resources to conduct a separate BCA for each technology, they can prioritize, focusing their BCA on the technology or technologies central to their overall deployment.

Completing the BCA will ordinarily be one of the final steps in project evaluation, as it requires synthesizing a variety of outcome measures from other elements of the evaluation, such as impacts of the project on traffic flow and safety. This also allows for up-to-date cost data to be included in the analysis, including any expected operational or maintenance costs.

This section is intended to provide a brief overview of BCA. Additional detail and USDOT guidelines on BCA methodology (within the context of discretionary grant programs) may be found in the "Benefit-Cost Analysis Guidance for Discretionary Grant Programs" (2018). Updates are generally published annually, and the most recent version should be referenced when designing and conducting a BCA. In addition to insight into BCA methods, the guidance also provides values for use in monetizing several categories of benefits.1 Nonetheless, many ATCMTD projects may have benefits (or in some cases, costs) that are difficult to quantify or monetize. In these cases, it is useful to present the impacts in as much detail as possible and assess the benefits qualitatively. For example, it may be difficult to place a monetary value on improved transit service updates, but the BCA could describe the level of usage of the system and provide qualitative information on how users value the information.

Goals of a High-Quality Benefit-Cost Analysis

A high-quality BCA should have the following characteristics:

  • The analysis should be comprehensive, and include all benefits and costs attributable to the project, to the extent possible.
  • The data and forecasts used should be reliable.
  • The parameters used (e.g., monetization factors, discount rate, analytical timeframe) should be appropriate.
  • The project impacts should be compared to a credible baseline.
  • The analysis should include an assessment of uncertainty. This may include sensitivity analysis around key parameters, data, or forecasts. Alternatively, the analysis may simply note areas of uncertainty.
  • The analysis should be transparent and replicable, as demonstrated through a clear description of all assumptions, inputs, and modeling methods.

When reporting their BCA findings, ATCMTD grantees should clearly identify the assumptions used in the analysis, the estimation methods and data sources used, and any uncertainties remaining in the analysis (supported with sensitivity analysis results when feasible). Results should include:

  • Benefits, ideally broken down by major impact category (e.g., safety, mobility) and project element
  • Costs by major project element
  • Benefit-cost ratio
  • Net present value

Useful BCA Tools:

  • California Department of Transportation's Life-Cycle Benefit-Cost Analysis Model (Cal-B/C)
  • Federal Highway Administration's Tools for Operations Benefit-Cost Analysis (TOPS-BC)

In cases where the ATCMTD project consists of a number of distinct sub-projects or elements, it is useful to calculate the BCA results separately for each. In the interest of transparency, it is strongly recommended that any documentation of the results include a copy of the completed BCA tool or spreadsheet used.2

When specialized models are used to calculate project impacts, it may not be possible to provide fully transparent documentation, but a summary of the modeling inputs and calculation methods can help to improve the credibility of the BCA.

Defining Benefits

ATCMTD project evaluators will need to identify the relevant set of benefits to be included in the BCA. Some of the most common benefit categories for transportation projects are listed in the table below. Benefit estimation requires that benefits be quantified (e.g., person-hours of delay avoided, gallons of fuel saved) and then monetized into dollar terms, if they are not already. Monetization factors reflect the societal value of resources and can be based on market prices (such as retail fuel costs) where relevant. For non-market impacts that are more difficult to value, such as improved health and safety, USDOT has established recommended monetary values.

Table 14. Common Benefit Categories.
Benefit | Type | Goal | Measurement and Example Units
Safety | User Benefit | Improve safety | Fatalities and injuries avoided3 (counts)
Travel Time Savings | User Benefit | Efficiency | Reduction in travel time (person-hours)
Vehicle Operating Cost | User Benefit | Reduced operating cost | Reduction in auto miles traveled4 (vehicle-miles)
Induced Travel | User Benefit | Increased consumer surplus for additional use/users in response to higher level of service (LOS) | Additional trips (count)
Facility Maintenance | Agency Benefit | State of good repair/Reduce maintenance and operating costs | Change in maintenance costs (dollars)
Reduced Emissions | Externality | Reduce negative health and environmental impacts from vehicle emissions | Kilograms per day (kg/day) by pollutant

Monetizing project benefits is a key step in making benefits comparable across benefit categories, across time, and between different projects. Some project benefits cannot be monetized and will instead require a qualitative assessment of their value to users or society. The need for a qualitative assessment may be due to:

  • A lack of available data – for example, it may not be feasible to collect data on reduced transportation network company (TNC) wait times resulting from curb demarcation of a TNC drop off/pick up location at a transit station.
  • No established methodology for monetizing benefits – for example, a project may collect data on increased use of and satisfaction with a real-time transit app following an improvement, but the project team may not have an established or reasonable means of valuing the improved information available to users.

Guidelines for use and valuation of common benefit categories may be found in the "Benefit-Cost Analysis Guidance for Discretionary Grant Programs" (2018). A summary of key benefits categories is provided below. See Appendix C for USDOT values used in monetizing these categories of benefits.

  • Safety: USDOT guidance provides monetized values for reductions in fatalities, injuries, and property damage only accidents. USDOT safety statistics generally utilize KABCO levels, which measure the observed injury severity at the crash scene. Maximum Abbreviated Injury Scale (MAIS) coded values may be found in the Discretionary Grant Program Guidance. MAIS categorizes injuries along a six-point scale from Minor to Not Survivable. Either scale may be used as long as the values are applied consistently.
  • Travel Time Savings: USDOT guidance provides recommended value of time estimates by trip purpose. When using these estimates, the analyst should multiply the value by the appropriate vehicle occupancy rate (1.39 for passenger vehicles and 1.00 for commercial trucks). Local values may be used where available, and values for transit travel and wait times should be based on the most accurate available data applicable to the project (see the sketch following this list).
  • Vehicle Operating Cost (VOC): These are the costs associated with operating the vehicle (fuel, maintenance, etc.), excluding fixed costs. Additionally, VOC excludes transfers (e.g., State and Federal fuel excise taxes are not included in VOC). USDOT provides standard values, but local values may be substituted where available.
  • Reduced Emissions: Monetized values for emission reductions can be found in Table 23 of Appendix C. The recommended methodology for estimating emission reduction may be found in the section on Emissions and Energy Measurement.
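
To illustrate the mechanics of the monetization step for the travel time category, the sketch below converts vehicle-hours of delay avoided into dollars using a value of time and the occupancy rates noted above. The value of time shown is a placeholder, and for simplicity a single value is applied to both autos and trucks; in practice, analysts should substitute the values in the current USDOT guidance (which vary by trip purpose and vehicle type) or defensible local values.

```python
# Minimal sketch: monetizing travel time savings.
# VALUE_OF_TIME is a placeholder; use figures from the current USDOT
# BCA guidance (which vary by trip purpose and vehicle type) or sound
# local values.

VALUE_OF_TIME = 15.00      # placeholder, $/person-hour
AUTO_OCCUPANCY = 1.39      # persons per passenger vehicle (USDOT guidance)
TRUCK_OCCUPANCY = 1.00     # persons per commercial truck (USDOT guidance)

def travel_time_benefit(vehicle_hours_saved, occupancy,
                        value_of_time=VALUE_OF_TIME):
    """Convert vehicle-hours of delay avoided into a dollar benefit."""
    return vehicle_hours_saved * occupancy * value_of_time

# Illustrative: 10,000 auto and 1,500 truck vehicle-hours saved per year.
annual_benefit = (travel_time_benefit(10_000, AUTO_OCCUPANCY)
                  + travel_time_benefit(1_500, TRUCK_OCCUPANCY))
print(f"Annual travel time benefit: ${annual_benefit:,.0f}")
```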

Typical Cost Categories

  • Initial capital costs of development and installation
  • Recurring operations and maintenance costs
  • Recapitalization costs for replacement of equipment according to anticipated lifespans.

Defining Costs

The cost side of the BCA should include all costs that are expected to be incurred over the lifecycle of the project, as measured relative to the base case in which the project does not take place. Costs should be included irrespective of the entity by whom they are paid. For cost elements with a lifespan beyond the analytical period of the BCA, a residual asset value may be calculated as an offset to costs.

General Principles

Below are a number of general principles regarding BCAs.

Analysis Period

The analysis period would ideally correspond to the development and implementation period (including project construction) plus the expected service life of the facility or equipment being installed as part of the project. An analysis period of 20-30 years plus the development and implementation period is typical for highway and transit projects. However, a shorter period may be appropriate for projects involving ITS or other technologies, as this equipment generally has shorter service lives. Some ATCMTD projects may have innovative technologies for which well-established operational lifetimes do not exist, in which case the BCA should use the best available estimates with sensitivity testing of alternative values.

When the project includes assets with differing lifespans, the BCA should include costs for replacement of shorter-lived assets during the analysis period. Conversely, assets with remaining useful life at the end of the analysis period can be assigned a residual value in the final year.5
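
As an illustration of the residual value concept, the sketch below applies simple straight-line depreciation, valuing an asset at the share of its capital cost corresponding to its remaining useful life. Footnote 5 points to the depreciation formulas in USDOT guidance; straight-line is only one common convention, and the result must still be discounted as described below.

```python
def residual_value(capital_cost, service_life_years, years_in_service):
    """Straight-line residual value: the share of capital cost
    corresponding to the asset's remaining useful life."""
    remaining = max(service_life_years - years_in_service, 0)
    return capital_cost * remaining / service_life_years

# Illustrative: $500,000 of roadside equipment with a 10-year service
# life, installed 4 years before the end of the analysis period.
print(residual_value(500_000, 10, 4))  # 300000.0, before discounting
```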

Adjust to Real Dollars

  • Use the Gross Domestic Product Deflator for converting past expenditures
  • Do not adjust for expected inflation in future years
Inflation

It is recommended that the BCA keep all monetary values in real rather than nominal terms, with the base year of the analysis period being a reasonable choice of reference point. In practice, this means any costs or values expressed in earlier-year dollars should be adjusted to the base year. Likewise, for an ex-post BCA, costs and benefits that are measured in nominal dollars should be adjusted so that they are in real (base-year) dollars.
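
A minimal sketch of the adjustment arithmetic: a nominal expenditure is scaled by the ratio of the price index in the base year to the index in the year the money was spent. The GDP deflator values below are illustrative placeholders; the actual series is published by the Bureau of Economic Analysis.

```python
# Illustrative GDP deflator index values (placeholders, not official data).
gdp_deflator = {2016: 104.7, 2017: 106.7, 2018: 109.2, 2019: 111.2}

def to_real_dollars(nominal, year_spent, base_year):
    """Convert a nominal expenditure to real (base-year) dollars."""
    return nominal * gdp_deflator[base_year] / gdp_deflator[year_spent]

# Example: $1,000,000 spent in 2016, restated in 2019 dollars.
print(f"${to_real_dollars(1_000_000, 2016, 2019):,.0f}")
```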

Discounting

Benefit and cost values that occur in different years of the BCA should be discounted to adjust for the time value of money. ATCMTD projects should follow the guidance of OMB Circular A-94, which recommends discounting future benefits and costs using an annual real discount rate of seven percent. The Circular has additional detail on the rationale for discounting and the origins of the seven percent figure.
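
A minimal sketch of the discounting arithmetic: each year's real-dollar benefits and costs are divided by (1 + r)^t before summing, yielding the net present value and benefit-cost ratio reported in the BCA results. The streams below are illustrative.

```python
DISCOUNT_RATE = 0.07  # real annual rate, per OMB Circular A-94

def present_value(stream, rate=DISCOUNT_RATE):
    """Discount a stream of real-dollar values; index 0 is the base year."""
    return sum(value / (1 + rate) ** year
               for year, value in enumerate(stream))

# Illustrative: capital cost in year 0, then annual O&M costs and
# benefits over a 10-year operating period.
costs = [2_000_000] + [100_000] * 10
benefits = [0] + [400_000] * 10

pv_benefits, pv_costs = present_value(benefits), present_value(costs)
print(f"Net present value: ${pv_benefits - pv_costs:,.0f}")
print(f"Benefit-cost ratio: {pv_benefits / pv_costs:.2f}")
```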

Double Counting and Transfers

Two common and related errors in preparing a BCA are double counting benefits (i.e., applying two measurement methods to a single source of economic benefit) and including movements of money that are transfers rather than changes in economic value (e.g., tolls and transit fares are excluded from BCAs because they are transfers).

Choice of Base Case

The benefits and costs under evaluation in BCA are always relative to an alternative. Under ex-post analysis, the alternative will be the counterfactual "no-build" scenario in which the current project did not occur. These "no-build" conditions are fundamentally unobservable, and require thoughtful development of the expected conditions which would have occurred in the absence of the project. Depending on the nature of the project, the no-build case could include assumptions about:

  • VMT growth
  • Travel times/speeds
  • Transit ridership
  • Changes in crash exposure and severity (e.g., due to exogenous changes to the vehicle fleet)

Before/after studies are a common method for estimating the impact of a project relative to baseline conditions. However, concerns related to potential confounding factors or regression to the mean should be noted and, if possible, addressed using controls or additional modeling.

A control may be a useful tool in establishing a plausible "no-build" counterfactual. In an ex-post BCA, observing a control intersection, corridor, or region (as applicable) allows the analyst to control for confounding factors. These may include regional changes in travel patterns (e.g., a decrease in travel to the central business district of a city), larger macroeconomic trends (e.g., a recession leading to a decrease in VMT), or changes in vehicle safety (e.g., a trend toward safer cars reducing the severity of crashes).
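
One simple way to use such a control is a difference-in-differences comparison: the before-to-after change observed at the control site is taken as an estimate of what would have happened at the treatment site anyway, and is netted out of the treatment site's change. A minimal sketch with illustrative corridor travel times:

```python
def difference_in_differences(treat_before, treat_after,
                              control_before, control_after):
    """Project impact net of trends shared with the control site."""
    return (treat_after - treat_before) - (control_after - control_before)

# Illustrative mean travel times (minutes): times fell everywhere
# (e.g., a regional dip in traffic), but fell more in the treated corridor.
impact = difference_in_differences(treat_before=22.0, treat_after=18.5,
                                   control_before=21.0, control_after=20.0)
print(f"Estimated project impact: {impact:+.1f} minutes")  # -2.5 minutes
```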

In addition to constructing a plausible "no-build" base case, it may add insight and value for future projects if the BCA includes analysis using a counterfactual baseline in which the conventional elements of the project are completed. In essence, this may be used to identify the benefits which accrue from deploying innovative technology alone. For example, a project that expands bus service and installs transit signal priority (TSP) might be compared against an expansion of bus service without the TSP component. Analysis of this nature should be conducted in addition to the primary BCA which uses a plausible "no-build" baseline.

Example: Adaptive Signal Control

If deploying adaptive signal control (ASC) at a set of intersections, mobility benefit calculations generally need to be made for the corridor as a whole, as travel time savings at those intersections could be offset (or enhanced) by other changes in the corridor.

Geographic Scope

The BCA should consider the expected geographic impact of the facility, as improvements may affect traveler route choice. A metropolitan planning organization (MPO) travel demand model, if available, may provide some insight into the origin-destination patterns of travelers using the new facility.

Additionally, the geographic scope of the analysis should be sufficient to capture as many of the primary and secondary effects of the project as possible. This generally results in expanding the geographic scope beyond the immediate deployment area (see example on adaptive signal control).

Mode Shift and Induced Demand

Increased demand for transportation services following a level of service improvement can come from several sources, including mode shifts (e.g., commuters switching from transit to cycling due to a new bike path), route changes (e.g., transit riders switching from a parallel bus line to a new bus rapid transit line), or induced travel (e.g., an auto traveler making a recreational trip to a central business district that would not have been made without the introduction of a new high occupancy toll lane).

For travelers switching from one mode to another, the BCA considers the benefits derived from the new mode, rather than the avoided costs of the prior mode. Induced travel within the same mode represents new trips that were not valued highly enough to be made under earlier conditions, but that are made following the facility improvement. As such, these trips represent a smaller consumer surplus than other trips. In practice, BCAs apply the rule of one half, in which benefits from induced demand are valued at half the level of benefits to existing users (see the sketch below).
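
A minimal sketch of the rule of one half with illustrative numbers: existing users receive the full per-trip benefit, while induced trips are valued at half that amount.

```python
def user_benefits(existing_trips, induced_trips, benefit_per_trip):
    """Rule of one half: induced trips are valued at half the per-trip
    benefit that accrues to existing users."""
    return (existing_trips + 0.5 * induced_trips) * benefit_per_trip

# Illustrative: 1,000,000 existing annual trips each save $0.80; the
# improvement also induces 50,000 new trips.
print(f"${user_benefits(1_000_000, 50_000, 0.80):,.0f}")  # $820,000
```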

Issues Specific to ATCMTD Project BCAs

Below are some issues specific to ATCMTD projects that are relevant to BCAs.

Value of Travel Time Information

Advanced traveler information systems can help travelers adjust their routes, departure time, travel mode, or other trip characteristics to avoid delays. In these cases, the benefits may be measured conventionally, such as through the change in travel time and vehicle operating costs. However, prior research suggests that travelers also place a value on real-time information even when they do not make specific changes to their journeys in response to the information received. High-quality information can allow travelers to adjust future plans, notify others of their estimated arrival time, or even simply provide "peace of mind" benefits from knowing what to expect. There is a range of potential benefits that the evaluation team will want to measure (e.g., through surveys and/or interviews) as part of the overall evaluation. These benefits should be presented qualitatively in the BCA unless there are willingness-to-pay estimates that are supported by methodologically rigorous studies of consumer valuation. While it may not be possible to incorporate these measures directly into a BCA, the findings may support other areas of an evaluation.

Travel Time Reliability

It is widely recognized that transportation system users value reliability of travel times in addition to valuing reductions in average travel time. However, there is no consensus method or established practice for quantifying this benefit, and USDOT has not established recommended monetary values. Changes in the distribution of point-to-point travel times are sometimes presented as a change in the variance, standard deviation, or other metric. It can also be reasonable to use the idea of "buffer time"—i.e., the difference between the mean travel time and a benchmark level used in travel planning, such as the 95th percentile—to approximate the impacts on traveler decision-making. Given the range of approaches to measuring reliability impacts and the lack of standardized monetary values, it is recommended that reliability benefits be included in the BCA as a qualitative, non-monetized value.
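
The buffer time metric is straightforward to compute from observed point-to-point travel times. A minimal sketch, assuming travel time observations are available for both the before and after periods:

```python
def percentile(values, p):
    """Linearly interpolated percentile (0 <= p <= 100)."""
    s = sorted(values)
    k = (len(s) - 1) * p / 100
    lo = int(k)
    hi = min(lo + 1, len(s) - 1)
    return s[lo] + (s[hi] - s[lo]) * (k - lo)

def buffer_time(travel_times):
    """Buffer time: 95th percentile travel time minus the mean."""
    mean = sum(travel_times) / len(travel_times)
    return percentile(travel_times, 95) - mean

# Illustrative: reliability can improve even when the mean barely moves.
before = [20, 21, 22, 22, 23, 25, 28, 35, 42, 55]
after = [20, 21, 21, 22, 22, 23, 24, 26, 28, 31]
print(f"Buffer time before: {buffer_time(before):.1f} min")
print(f"Buffer time after:  {buffer_time(after):.1f} min")
```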

Option Value and Resiliency

Travelers and freight operators are generally better off when they have access to multiple means of travel, and may place a value on these options even when they are not used. For example, captive automobile commuters, those who do not have access to any alternative modes of transportation, have a more limited set of choices available to them than travelers with access to transit and ridesharing services. Additionally, a larger set of transportation options can increase resiliency of the transportation system by providing alternatives when a particular mode or route is disrupted. These benefits would generally be included qualitatively in the BCA due to the lack of well-established methods for valuing these impacts.

Survey and Interview Methods

This section outlines considerations and methods related to surveys and interviews. Based on the evaluation questions that are identified during evaluation planning (see Chapter 2), the evaluation team determines if surveys or interviews are an appropriate method for collecting the necessary data. For technology deployments (e.g., advanced traveler information systems [ATIS] or connected vehicle [CV] applications), surveys or interviews can be used to gather information from the users of the technology regarding their experiences and satisfaction with the technology, as well as impacts of the technology on travel behavior or attitudes. Surveys or interviews are also a useful tool for gathering qualitative data from project team members or other stakeholders regarding the benefits, challenges, and lessons learned of the technology deployment. Ideally, survey or interview data complement other objective data that are collected from infrastructure or from the technology itself. However, where no other data sources are available, surveys or interviews may provide the only source of data for a particular evaluation question.

For ATCMTD projects that involve surveys, interviews, or other qualitative methods, it is highly recommended that grantees utilize staff with expertise in the field of evaluation and survey/interview design and methods. In addition to surveys and interviews, other qualitative methods may be appropriate, such as focus groups or workshops. Table 15 describes these methods and provides considerations in using each.

Table 15. Summary of Qualitative Methods.
Method Description Considerations in Using the Method
Surveys Utilizes a systematic method to collect quantitative and/or qualitative measures of an individual's experiences, attitudes, behavior, etc.
  • Enables the collection of individual level data from a larger number of people.
  • Provides data on non-observable traits such as users' characteristics, attitudes, experiences, or perceptions.
  • If probability sampling is used, enables the generalization of findings from the sample to a larger population (see the Sampling section below).
Interviews Utilizes a structured interview guide (typically with open-ended questions) to gain detailed insight on experiences, behavior, attitudes, and opinions.
  • Provides more in-depth, detailed information (e.g., lessons learned).
  • Enables probing and follow-up, which can be useful if the topic is less well defined or if a deeper understanding of attitudes, behavior, etc. is needed.
Focus Groups or Workshops Utilizes a group setting to collect qualitative feedback from multiple individuals.
  • Enables the collection of information from multiple stakeholders at the same time.
  • Enables "give and take" among the participating individuals and may allow for participants to coalesce around certain ideas or conclusions.

The remainder of this section provides best practices on the following aspects of survey/interview development and administration:

  • Target population
  • Survey design
  • Survey administration mode
  • Sampling
  • Recruitment
  • Questionnaire design
  • Response rates
  • Privacy and personally identifiable information
  • Other considerations

Example Target Population for Transit CV Application

  • Bus drivers (use/benefit from the technology)
  • Riders (benefit from the technology)
  • Agency personnel/other project stakeholders (experience deploying and maintaining technology)

Target Population

For technology deployments, the evaluation team will want to consider the population(s) who are impacted by the technology or who can provide feedback on the technology; this may include multiple populations (see the example above). The evaluation questions that have been developed will help define the target population. If possible, the perspectives of different relevant populations should be collected.

Survey Design

The evaluation questions that are identified during the evaluation planning process will determine the appropriate design or approach for the surveys and/or interviews. For example, if the evaluation questions revolve around users' experience and satisfaction with a technology, the survey should be conducted following deployment of the technology (a post-deployment survey only). However, if the evaluation questions involve a measure of change (e.g., understanding the change in users' behavior or attitudes as a result of using a particular technology), the most robust design is a pre-post or before-after design, whereby the same questions are asked in both the pre- and post-deployment periods.

By conducting surveys in both the pre- (baseline) and post-deployment periods, it is possible to compare measurements over time. However, if a control group is not used,6 it becomes important to track potential confounding factors (e.g., changes in the economy, construction, etc.) which may be the cause for a change in the measure (rather than the deployment). The evaluation team may not be able to quantitatively measure the impacts of the confounding factors, but at a minimum the confounding factors should be noted in any report of findings.

Advantages to Panel Design (same individuals surveyed pre- and post-):

  • Individual acts as his/her own control, since key attributes of the individual will not change from the pre- to post-period.
  • Can measure change at the individual level as well as in the aggregate.

If pre-post surveys are being used, the grantee should consider a panel design, whereby the same individuals are surveyed in both the pre- and post-deployment periods.

However, if resources do not allow for both a pre- and post-deployment survey, it is also possible to ask respondents (in a post-deployment survey only) if they perceived a change in their attitudes, behavior, etc. due to the technology. This method is not ideal because it is more likely to lead to bias in the survey responses (i.e., problems with recall, positivity bias), but it offers an alternate option for grantees who are not able to conduct surveys in both the baseline and the post deployment periods.

Table 16. Survey Design Examples.
Example Evaluation Topics | Design
Characteristics of technology use (e.g., frequency of use); user satisfaction with different aspects of the technology; attitudes about the technology | Post-Deployment Survey
Changes in attitudes or behavior resulting from use of the technology | Pre-Post Design (most robust), or Post-Deployment Survey only (i.e., retrospective questions on perceived changes)

Survey Administration

The nature of the specific project (including WHO is being surveyed) will influence and may even dictate the mode (or method) that is used to collect the survey information. Surveys may be administered online, in-person, by mail, or by telephone. Table 17 highlights each of these modes (mail and telephone are included for reference, but are not likely to be used for ATCMTD projects).

Multiple modes may be used for the same survey effort—either during different stages of the survey (recruitment vs. survey method) or to reach different sub-populations, as appropriate. For example, for a technology being deployed at an intersection to improve pedestrian safety, an in-person intercept may be used for recruitment, and then respondents may be asked to complete the survey online. During the intercept, the interviewer would briefly explain the purpose of the project, obtain the respondent's agreement to participate, and collect their contact information.

Table 17. Survey Administration Modes.
Method Considerations Example Uses
Online survey (including app-based)
  • Convenient; participants can complete the survey on their own schedule
  • Streamlines the survey process (e.g., automated skip patterns)
  • Response rates tend to be lower than for in-person surveys, but with an engaged population this may not be a concern
  • Developing a sample of eligible participants can be expensive if there is no readily available sampling frame
  • If a panel design is used, need to assign respondents unique IDs to link responses across multiple surveys
  • Survey programming required
  • Some populations (e.g., seniors) may not have online access
  • Advanced traveler information system users
  • Connected vehicle users
In-person
  • paper
  • tablet
  • Response rates are higher relative to other methods
  • If paper surveys are used: greater burden on respondents to follow directions, skip patterns, etc.; responses will need to be coded into a database
  • Tablets streamline the survey process, but a sufficient number is needed so respondents are not waiting to complete the survey
  • Tablets require survey programming
  • Survey transit users onboard the bus
  • Survey truck operators at their fleet barn, rest stops
Mail
  • Requires mailing addresses
  • Response rates tend to be lower
  • Requires follow-up contacts (e.g., reminder postcard) to increase response rates
  • No programming required, but responses must be coded
  • Adaptive signal control improvements in a corridor (i.e., sample corridor addresses)
Telephone
  • Requires telephone numbers
  • Response rates are lower, due to phone screening, caller ID
  • Phone system programming required (Computer Aided Telephone Interview System)

Sampling

Sampling Frame vs. Sample

Sampling frame: The list or procedure that defines your population

Sample: The individuals (or units) that are drawn from the sampling frame for inclusion in your survey (who may or may not choose to participate)

As part of the survey design process, the evaluation team will need to develop the sampling frame from which the sample of respondents is drawn. For some technology deployments, it may be appropriate to survey all members of the population (i.e., no sampling). For example, if CV technology is being deployed in 60 fleet vehicles, the evaluation team may survey all drivers of the instrumented fleet vehicles. In other cases, such as the deployment of a publicly available ATIS, it is not feasible to survey all potential users, so a sample is drawn from the population. A list of users (a sampling frame) may be available (e.g., toll pass customers or transit pass riders); in other cases, there is no available sampling frame, and the evaluation team will need to be creative in developing its sample. If a pre-existing list or online panel is used, the evaluation team should consider any biases or limitations of the list (e.g., accuracy, completeness).

In general, there are two key types of sampling: probability and non-probability. With probability sampling, each individual has a known, non-zero probability of being randomly sampled, and the sample findings can be generalized to the larger population. With non-probability samples, individuals are selected (rather than sampled)—either for a reason due to the research (purposive) or because they are easy to access (convenience). While the findings cannot be generalized to the larger population, non-probability samples can nonetheless yield useful insights.

Sample Size

The evaluation team will need to determine the appropriate sample size for the survey effort. For probability samples, the sample size is calculated using a standard formula based on several factors, including the population size, the desired margin of error (confidence interval), the confidence level, and the standard deviation of the responses. As a rule of thumb, a sample of 375 to 400 responses will generally be sufficient to enable you to say with 95% confidence that your sample statistic (the estimate from your survey) is within 5% (plus or minus) of the true proportion in the overall population. If greater precision in the survey estimates is needed or if there is a need to analyze sub-samples, the sample size will need to be increased. For non-probability samples, it is more difficult to determine sample sizes, but the evaluation team should determine the subgroups of interest and ensure that there are a sufficient number of responses for each subgroup. Teams are encouraged to collect as many responses as their budget allows; sub-groups with fewer than 50 responses should be interpreted with extreme caution.
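
A minimal sketch of the standard formula for sizing a sample to estimate a proportion, with an optional finite population correction; the conservative assumption p = 0.5 reproduces the rule-of-thumb figure cited above.

```python
import math

def required_sample_size(margin_of_error, z=1.96, p=0.5, population=None):
    """Completed responses needed to estimate a proportion within
    +/- margin_of_error at the confidence level implied by z
    (1.96 for 95%). p = 0.5 is the most conservative assumption."""
    n = (z ** 2) * p * (1 - p) / margin_of_error ** 2
    if population is not None:  # finite population correction
        n = n / (1 + (n - 1) / population)
    return math.ceil(n)

print(required_sample_size(0.05))                    # 385
print(required_sample_size(0.05, population=2_000))  # 323 (small population)
```

Note that these figures are completed responses; the number of invitations must be scaled up to account for the expected response rate.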

Recruitment

The recruitment procedures should be tailored to the study population and standardized so that the same protocols are used across all respondents. A set of screening criteria should be developed to ensure that only qualified participants are selected. Common methods include in-person recruitment, phone recruitment, and online panel recruitment. Below is some guidance for recruitment.

Recruitment Best Practices


  • Keep the recruitment process simple for respondents.
  • Be clear on any requirements for participation (e.g., must have a valid driver's license), and ensure there is some mechanism for verifying that the potential respondent has met the requirements. A screener questionnaire may be needed to determine a person's eligibility to participate in the study.
    • For example, if a technology is being deployed along a corridor, you may need a screener question to identify drivers who traverse the corridor on a regular basis (e.g., at least three weekdays per week during peak hours).
  • Try to obtain a diverse (or representative) sample, particularly with respect to demographics that may be related to a user's experience or satisfaction with the technology.
    • For example, diversity by age and income is typically important. If your screener questionnaire includes questions on age and income, you can monitor these characteristics of the sample during recruitment.
  • For panel surveys, when setting recruitment targets, over-recruit to allow for the fact that participants will drop out, for any number of reasons (which may or may not be related to the study). While it is difficult to estimate what the dropout rate will be (in part it depends on the nature and requirements of the survey), it is reasonable to assume that at least 20% to 30% of recruited participants may drop out at some point during the survey period.
  • For certain populations, such as transit operators or truck drivers, recruitment may need to occur through fleet managers. If this is the case, establish buy-in from the fleet manager and provide them with scripts (e.g., that should accompany the survey invitation) and encourage them to use the standardized protocols developed for the evaluation.

Questionnaire Design

Questionnaires should be designed to capture the specific performance measures and related data elements identified in your evaluation plans, but they may also include additional questions that are needed for analysis purposes (and do not explicitly measure a performance measure). For example, demographic questions, or questions related to a respondent's typical use of a corridor may be needed in order to better interpret the survey responses and to provide context for understanding the key performance measures. If different populations are being surveyed, tailor the questionnaires to each population, as needed (i.e., according to the evaluation questions). For example, if surveying bus drivers and riders, there may be questions that are appropriate to one population and not the other. To the extent possible, however, the same or similar questions should be asked across survey populations.

Questionnaire Design Best Practices
(Including but not limited to):


  • Avoid questions that are biased or leading.
    • Example biased question: To what extent do you agree that traffic congestion is a major problem?
  • Ask one question at a time; avoid double barreled questions.
    • An example of a double-barreled question: How satisfied or dissatisfied are you with the timing and quality of the traffic alerts?
  • For scaled questions (e.g., level of agreement, extent of satisfaction, etc.):
    • Ensure the scales are balanced (e.g., same number of positive and negative points).
    • Be aware that scales of 5 to 7 points generally maximize reliability (an odd number of points, such as 5 or 7, includes a neutral midpoint).
    • Label all points of the scale.
    • Use consistent language in your scales.
  • Group similar questions together; think about the flow of questions.
  • Use skip patterns as appropriate, so respondents can skip questions that are not applicable.
  • For online as well as paper surveys, pay attention to how the questions are formatted. Proper formatting can make survey completion easier on the respondent and can reduce errors.
  • Pre-test your questionnaire to ensure respondents understand the questions, the response categories are complete, etc.

Response Rates

The evaluation team should take steps to maximize response rates. For probability samples, a high response rate enables the evaluation team to more confidently generalize from the sample to the larger population. If response rates are low, however, non-response error is a concern. Non-response error occurs when non-respondents in the sample (e.g., people who were sampled but did not complete a survey) differ from respondents in ways that are germane to the survey topic; as a result, the sample findings are not representative of the population.

For non-probability samples, a high response rate is similarly important to ensuring that the findings reflect the attitudes, behavior, etc. of the full pool of participants (rather than a subset). Response rates should be included in any write-up of the findings, and if the response is low, the findings should be interpreted with caution.

Methods for Improving Response Rates


  • In any initial contact with potential (or recruited) participants, explain the importance of the survey and how the resulting data will be used; if respondents understand the value of the information, they may be more likely to participate.
  • Make the survey process as easy as possible on the participant.
  • Use multiple reminder/follow-up contacts to encourage survey completion.
  • Consider a small incentive as a means of increasing participation, particularly for surveys that involve participation over a period of time (i.e., pre-deployment and post-deployment).
    • Consider incentives that are appropriate to the target population. For example, if you are surveying transit users, you could provide a free one-week transit pass.

Privacy and Personally Identifiable Information

What is PII?

Information that can be used to distinguish or trace an individual's identity, either alone or when combined with other personal or identifying information that is linked or linkable to a specific individual.

For some survey designs, it may be necessary to collect personally identifiable information (PII) from respondents, particularly if the evaluation team plans to survey respondents over time and needs to contact them (i.e., to send survey invitations, reminders, etc.). In such cases, the evaluation team needs to ensure that it protects the respondents' PII by keeping this information in a separate file from the survey responses. Anonymous IDs can be assigned to each respondent to link responses across surveys and to track survey response. When the survey has been completed, however, any files with PII should be destroyed. In addition, in any initial contacts with respondents, the evaluation team should briefly explain how it plans to protect the respondents' PII.
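
A minimal sketch of this separation, assuming a simple file-based workflow: contact information and survey responses live in separate files linked only by a randomly assigned anonymous ID, and the contacts file is destroyed when data collection ends.

```python
import csv
import secrets

# Illustrative respondent contact list (PII).
respondents = [{"name": "A. Rider", "email": "a@example.com"},
               {"name": "B. Driver", "email": "b@example.com"}]

# Assign each respondent a random, non-identifying ID.
for r in respondents:
    r["anon_id"] = secrets.token_hex(8)

# contacts.csv holds PII plus the anon_id; destroy it at study end.
with open("contacts.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=["anon_id", "name", "email"])
    writer.writeheader()
    writer.writerows(respondents)

# responses.csv holds only the anon_id and answers, so pre- and
# post-deployment waves can be linked without storing PII with the data.
with open("responses.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["anon_id", "wave", "q1_satisfaction"])
    writer.writerow([respondents[0]["anon_id"], "pre", 4])
```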

For interviews, the evaluation team needs to consider what level of privacy is required in its reporting of the findings and it needs to convey this information to the interviewees. For example, if external stakeholders are being interviewed, will they be identified by name or organization or some other grouping?

Institutional Review Board

For research involving human subjects, the evaluation team should obtain the approval of an Institutional Review Board (IRB). For this process, the evaluation team will need to complete an application and will need to provide the IRB with all survey-related materials, including the questionnaire, any initial contact notification, reminder notifications, etc. During the planning stages, the evaluation team should contact the IRB to confirm that IRB approval is required. If it is required, the evaluation team will need to build time into its schedule for an IRB review.

Other Considerations

Below are a few additional considerations regarding surveys and interviews:

  • Be sensitive to language barriers for non-English speakers. Your survey population may include people who do not speak and/or write English, and as a result they may be less likely to complete the surveys due to language barriers. If any of the participants are non-English speakers, it is important to be sensitive to how feedback will be gathered from this group. In geographies with a large number of non-English speakers, the evaluation team will want to consider translating the questionnaire into one or more languages.
  • Provide respondents with a mechanism for providing ad hoc feedback on the technology. In addition to collecting feedback via "active" methods, such as surveys or interviews, ATCMTD grantees should consider providing a passive method, such as a feedback form on the project website. In this way, participants can share their thoughts and feedback at any time. If such a feedback mechanism is offered, the evaluation team must ensure that respondents are aware of it.

Emissions and Energy Estimates

This section outlines methods and considerations related to quantifying emissions and energy for ATCMTD projects. Emissions and fuel consumption impacts can be quantified either by: 1) direct measurement using portable emissions monitoring systems (PEMS) and real-time fuel flow meters; or 2) modeling, using mobile-source emissions models such as the United States Environmental Protection Agency's (USEPA) Motor Vehicle Emission Simulator (MOVES) or the California Air Resources Board's (CARB) Emission Factors (EMFAC) model, or other tools such as the FHWA Congestion Mitigation and Air Quality (CMAQ) Emissions Calculator Toolkit. To quantify any emissions or energy impacts associated with a project, a net difference in emissions and fuel consumption must be taken between the baseline conditions (i.e., conditions before project deployment) and the deployment conditions (i.e., after deployment).

Directly measuring emissions and fuel consumption is a time- and cost-intensive process, so many ATCMTD projects may choose not to conduct direct emissions and/or fuel measurements. The alternative is to quantify emissions and fuel consumption benefits through some form of modeling. For the best emissions and energy modeling estimates, incorporating local fleet and activity data is recommended.

On-Road Emissions Models and Tools

There are a number of models and tools developed by Federal and State governments to evaluate on-road emission and fuel reduction benefits. This section describes three relevant emissions models and tools: MOVES, CARB's EMFAC model, and the CMAQ Emissions Calculator Toolkit.

MOVES

USEPA's MOVES is a state-of-the-science emission modeling system that estimates emissions for mobile sources at the national, county, and project level. USEPA provides MOVES technical documentation, user guides, manuals, and training for developing State Implementation Plans (SIPs), transportation conformity analyses, and hot-spot analyses.

EMFAC

The EMFAC emissions model is developed and used by CARB to assess emissions from on-road vehicles including cars, trucks, and buses in California. EMFAC can also be used to estimate fuel consumption. Similarly, CARB supplies technical documentation, handbooks, and user guides for using EMFAC in various applications.

CMAQ Emissions Calculator Toolkit

FHWA has developed a series of spreadsheet-based tools to provide technical support and resources for the CMAQ Program and to facilitate the calculation of representative emissions benefits. These tools do not currently estimate fuel consumption benefits.

Even if the CMAQ tools themselves cannot be used, some ATCMTD grantees may find the methodologies utilized in the toolkit useful in evaluating emissions and fuel consumption impacts for their proposed projects. Each tool has associated documentation that details the methodology and MOVES modeling run specifications.

Methods of Evaluation

Vehicle emissions and fuel consumption, like many other traffic parameters, can be either directly measured or modeled using the most accurate input data available. The choice of how to assess emissions and fuel consumption depends heavily on the project and its intended outcomes. Decision criteria for whether to measure or model should include time, cost, and the quality (or precision) needed. Direct measurements are expensive and time-consuming but can yield higher quality, and less uncertainty, than modeling. However, emissions and fuel use modeling should be sufficient for most if not all ATCMTD projects. It is important to note there are different degrees of modeling. Not all projects will require high-precision modeling with extensive local fleet and activity input data. Some projects may simply need to quantify a decrease in vehicle miles traveled (VMT) or operating hours. The following sections describe direct measurement and modeling in more detail. For flexibility, there is a measurement approach and two tiers of modeling (simple and advanced), explained in more detail below.

Direct Measurement Evaluation

This approach requires monitoring emissions using portable emissions monitoring systems (PEMS) and directly monitoring fuel consumption. An example of a project that would utilize this approach would be a vehicle-to-infrastructure (V2I) communications project where emissions and fuel consumption would be measured without the V2I technology implemented (i.e., baseline scenario or no-build scenario) and then compared to measured emissions and fuel consumption with the V2I technology implemented (i.e., project scenario or build scenario). A more specific case could involve traffic signal prioritization of a transit bus. A transit bus would transmit its approach to a traffic signal at an intersection, and the light cycle would be adjusted to give the transit bus priority. This V2I project would reduce the red-light time, which would reduce the overall idling time of the transit bus.

Emissions Inventory Evaluation – Simple

A simple emissions inventory approach for evaluating ATCMTD projects would be similar to what is currently done for evaluating some CMAQ projects. For this approach, the ATCMTD project can determine if any of the currently available CMAQ tools could be utilized to evaluate emissions benefits. If the CMAQ tools are not sufficient for evaluating the ATCMTD project, then composite emission rates (aggregated by pollutant) and fuel consumption rates representing the national fleet can be obtained by conducting a national-scale MOVES run to assist with the evaluation.

An example of a simple emissions inventory evaluation would be a project that results in a VMT reduction. Composite emissions rates on a mass of pollutant emitted per mile basis (i.e., usually in grams/mile or kilograms/mile) can be multiplied by the expected VMT reduction to obtain the overall estimated emissions benefit.
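
A minimal sketch of that arithmetic, using placeholder composite emission rates; actual rates would come from a CMAQ tool or a national-scale MOVES run.

```python
# Placeholder composite emission rates in grams per mile (illustrative
# only; obtain actual rates from the CMAQ toolkit or a MOVES run).
GRAMS_PER_MILE = {"NOx": 0.35, "PM2.5": 0.012, "CO2": 400.0}

def emissions_benefit_kg(vmt_reduction, rates=GRAMS_PER_MILE):
    """Convert an annual VMT reduction into kilograms of avoided
    emissions, by pollutant."""
    return {pollutant: rate * vmt_reduction / 1000.0
            for pollutant, rate in rates.items()}

# Example: a project expected to reduce VMT by 2,000,000 miles per year.
for pollutant, kg in emissions_benefit_kg(2_000_000).items():
    print(f"{pollutant}: {kg:,.0f} kg/year avoided")
```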

Emissions Inventory Evaluation – Advanced

An advanced emissions inventory approach would utilize collected vehicle telematics data and/or traffic microsimulation modeling to develop detailed drive schedules or operating mode distributions as inputs for MOVES or EMFAC. Users could then estimate the potential benefits by taking the difference in emissions and fuel consumption inventories between the baseline and project deployment scenarios.
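
As a simplified illustration of this approach, the sketch below aggregates a second-by-second speed trace into a distribution of time spent by speed bin. This is only a crude stand-in for a MOVES operating mode distribution, which also depends on acceleration and vehicle-specific power; actual model inputs must follow the MOVES or EMFAC input specifications.

```python
from collections import Counter

def speed_bin_distribution(speeds_mph, bin_width=5):
    """Fraction of time spent in each speed bin from a 1 Hz speed trace
    (a crude stand-in for an operating mode distribution)."""
    counts = Counter(int(s // bin_width) * bin_width for s in speeds_mph)
    total = sum(counts.values())
    return {f"{b}-{b + bin_width} mph": n / total
            for b, n in sorted(counts.items())}

# Illustrative trace: idling at a signal, then accelerating to cruise.
trace = [0, 0, 0, 0, 5, 12, 20, 27, 33, 38, 42, 44, 45, 45, 45, 45]
for label, share in speed_bin_distribution(trace).items():
    print(f"{label}: {share:.0%}")
```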

Examples of ATCMTD projects utilizing an advanced emissions inventory approach would include technology deployments such as cooperative adaptive cruise control (CACC) where the second-by-second changes to the vehicle trajectories are known. CACC deployments are likely to result in improved traffic flow and less braking, which would lead to subsequent emission reductions and fuel savings. The following documents (see References at the end of this chapter) showcase projects that have utilized advanced approaches to determine driving behavior changes at a high frequency for estimating benefits of connected and automated vehicles:

  • Benefits Estimation Model for Automated Vehicle Operations Phase 2 Final Report
  • A Framework for Evaluating Energy and Emissions Impacts of Connected and Automated Vehicles Through Traffic Microsimulations
  • Meta-Analysis of Adaptive Cruise Control Applications: Operational and Environmental Benefits
  • Comparing Performance of Cooperative and Adaptive Cruise Control Field Tests
  • Applications for the Environment: Real-Time Information Synthesis (AERIS) Eco-Signal Operations Modeling Report

Methods and Analytic Techniques References

BCA References

Bai, Y. & Kattan, L. (2014). Modeling Riders' Behavioral Responses to Real-Time Information at Light Rail Transit Stations, Calgary, Alberta: Transportation Research Board, obtained from: https://www.researchgate.net/publication/279284285_Modeling_Riders'_Behavioral_Responses_to_Real-Time_Information_at_Light_Rail_Transit_Stations.

Office of Management and Budget. (1992). OMB Circular No. A-94: Guidelines and Discount Rates for Benefit-Cost Analysis of Federal Programs, Washington, D.C., obtained from: https://www.transportation.gov/regulations/omb-circular-94.

Office of Management and Budget. (2003). OMB Circular No. A-4: Regulatory Analysis, Washington, D.C., obtained from: https://www.transportation.gov/regulations/omb-circular-no-4-0.

Song, D., He, X. & Peeta, S. (2014). Field Deployment to Quantify the Value of Real-time Information by Integrating Driver Routing Decisions and Route Assignment Strategies, NEXTRANS Project No. 058PY03, West Lafayette, IN, obtained from: https://www.purdue.edu/discoverypark/nextrans/assets/pdfs/058PY03%20Final%20Report.pdf.

Transportation Economics Committee, Transportation Research Board. (no date). Transportation Benefit-Cost Analysis, Washington, D.C., obtained from: http://bca.transportationeconomics.org/.

USDOT Office of the Secretary. (2018). Benefit-Cost Analysis Guidance for Discretionary Grant Programs, Washington, D.C., obtained from: https://www.transportation.gov/office-policy/transportation-policy/benefit-cost-analysis-guidance.

Victoria Transport Policy Institute. (2016). Transportation Cost and Benefit Analysis. Techniques, Estimates and Implications, Victoria, BC, obtained from: https://www.vtpi.org/tca/, last accessed November 9, 2018.

Survey and Interview Methods References

Dillman, D. A., Smyth, J. D., & Christian, L. M. (2014). Internet, Phone, Mail, and Mixed-Mode Surveys: The Tailored Design Method, Fourth Edition. Hoboken: John Wiley & Sons.

Groves, R. M., Fowler, Jr, F. J., Couper, M. P., Lepkowski, J. M., Singer, E., & Tourangeau, R. (2009). Survey Methodology, Second Edition. Hoboken: John Wiley & Sons, Inc.

Marsden, P. V., & Wright, J. D. (2010). Handbook of Survey Research, Second Edition. Bingley: Emerald Group Publishing Limited.

Emissions and Energy Measurement References

California Air Resources Board. (2018). Mobile Source Emissions Inventory—Categories, Sacramento, CA, obtained from: https://ww2.arb.ca.gov/our-work/programs/mobile-source-emissions-inventory.

Federal Highway Administration, Office of Planning, Environment, & Realty. (2018). CMAQ Emissions Calculator Toolkit, Washington, D.C., obtained from: https://www.fhwa.dot.gov/environment/air_quality/cmaq/toolkit/.

Eilbert, A., Berg, I., & Smith, S. (2019). Meta-Analysis of Adaptive Cruise Control Applications: Operational and Environmental Benefits, Report No. FHWA-JPO-18-743, Cambridge, MA, obtained from: https://rosap.ntl.bts.gov/view/dot/41929.

Eilbert, A., Chouinard, A.M., Tiernan, T. & Smith, S. (2019). Comparing Performance of Cooperative and Adaptive Cruise Control Field Tests, Orlando, FL: 2019 Automated Vehicle Symposium.

Eilbert, A., Jackson, L., Noel, G. & Smith, S. (2018). A Framework for Evaluating Energy and Emissions of Connected and Automated Vehicles Through Traffic Microsimulations, Washington, D.C.: Transportation Research Board 97th Annual Meeting.

Smith, S. et al. (2018). Benefits Estimation Model for Automated Vehicle Operations: Phase 2 Final Report, Report No. FHWA-JPO-18-636, Cambridge, MA, obtained from: https://rosap.ntl.bts.gov/view/dot/34458.

United States Environmental Protection Agency. (no date). MOVES and Other Mobile Source Emissions Models (MOVES2014b), Washington, D.C., obtained from: https://www.epa.gov/moves, last accessed: September 23, 2019.

Yelchuru et al. (2014). AERIS—Applications for the Environment: Real-time Information Synthesis, Eco-Signal Operations Modeling Report, Report No. FHWA-JPO-14-185, Washington, D.C.: Intelligent Transportation Systems Joint Program Office, obtained from: https://rosap.ntl.bts.gov/view/dot/3537.

Yelchuru et al. (2015). AERIS—Applications for the Environment: Low Emissions Zones Operational Scenario Modeling Report, Report No. FHWA-JPO-14-187, Washington, D.C.: Intelligent Transportation Systems Joint Program Office, obtained from: https://rosap.ntl.bts.gov/view/dot/3538.

1 Local values based on sound empirical data or models may be used where available, except where noted.

2 Cal-B/C: https://dot.ca.gov/programs/transportation-planning
TOPS-BC: https://ops.fhwa.dot.gov/plan4ops/topsbctool/

3 Reductions in property damage only accidents are often included with safety benefits, as they tend to rely on the same data sources and are impacted by the same transportation improvements.

4 Some facility improvements may reduce the per mile vehicle operating costs. For example, paving a dirt road may reduce maintenance and tire replacement costs for users.

5 Depreciation formulas can be found in USDOT guidance (see References). The residual value is taken at the end of the period of analysis and should be appropriately discounted.

6 With a control group, individuals who do not receive the treatment (e.g., are not exposed to the CV technology, the new traveler information application, etc.) are also surveyed before and after the deployment. Presumably, there is no change in their attitudes, behavior, etc. over time, which helps confirm that any change measured in the treatment group is in fact due to the treatment.
