
Seattle-Lake Washington Corridor Urban Partnership Agreement National Evaluation Plan

3.0 National Evaluation Overview

This chapter summarizes how the national evaluation of the UPA sites is being organized and carried out and identifies the steps in the Seattle/LWC UPA evaluation process.

3.1 National Evaluation Organizational Structure

The national evaluation of the UPA/CRD sites is sponsored by the U.S. DOT. The RITA ITS JPO is responsible for the overall conduct of the national evaluation. Representatives from the modal agencies are actively involved in the national evaluation.

The Battelle team was selected by the U.S. DOT to conduct the national evaluation through a competitive procurement process. Members of the Battelle team include:
  • Battelle Memorial Institute – Prime;
  • Texas Transportation Institute (TTI), The Texas A&M University System;
  • Center for Urban Transportation Research (CUTR), University of South Florida;
  • Hubert H. Humphrey Institute of Public Policy and Center for Transportation Studies (CTS), University of Minnesota;
  • Wilbur Smith Associates;
  • Eric Schreffler, ESTC; and
  • Susan Shaheen and Caroline Rodier, University of California, Berkeley.

As highlighted in Figure 3-1, the Battelle team is organized around the individual UPA/CRD sites. A site leader is assigned to each site, along with specific Battelle team members. The site teams are also able to draw on the resources of 4T experts and evaluation specialists.

The purpose of the national evaluation is to assess the impacts of the UPA/CRD projects in a comprehensive and systematic manner across all sites. The national evaluation will generate information and produce technology transfer materials to support deployment of the strategies in other metropolitan areas. The national evaluation will also generate findings for use in future federal policy and program development related to mobility, congestion, and facility pricing.

The focus of the national evaluation is on assessing the congestion reduction realized from the 4T strategies and the associated impacts and contributions of each strategy. The non-technical success factors, including outreach, political and community support, institutional arrangements, and technology, will also be documented. Finally, the overall cost-benefit analysis of the deployed projects will be examined.

Members of the Battelle team are working with representatives from the local partner agencies and the U.S. DOT on all aspects of the national evaluation. This team approach includes the participation of local representatives throughout the process and the use of site visits, workshops, conference calls, and e-mails to ensure ongoing communication and coordination. The local agencies are responsible for data collection, including conducting surveys and interviews. The Battelle team is responsible for providing the local partners with direction on the needed data, formats, and collection methods, and for analyzing the resulting data and reporting the results.

Figure 3-1. Battelle Team Organizational Structure. Organization chart with a main vertical branch and a lateral branch. The main vertical branch is the COTM, FHWA Office of Operations. Under it is Project Manager, Battelle, as well as Principal Investigator and Deputy PM, Battelle. Below these are two subordinate elements. On the left is a box listing Site Leaders, leading down to Site-Specific Evaluation Teams. On the right is a box listing National Evaluation 4T Experts as well as Evaluation Specialists. The lateral branch is the U.S. DOT UPA Evaluation team, which leads into the COTM.


3.2 National Evaluation Process and Framework

The Battelle team developed a National Evaluation Framework (NEF) to provide a foundation for evaluation of the UPA/CRD sites. The NEF is based on the 4Ts congestion reduction strategies and the questions that the U.S. DOT seeks to answer through the evaluation. The NEF is essential because it defines the questions, analyses, measures of effectiveness, and associated data collection for the entire UPA/CRD evaluation. As illustrated in Figure 3-2, the framework is a key driver of the site-specific evaluation plans and test plans and will serve as a touchstone throughout the project to ensure that national evaluation objectives are being supported through the site-specific activities.

Figure 3-2. The National Evaluation Framework in Relation to Other Evaluation Activities. Activities are shown in five tiers, leading from the bottom to the top. The bottom tier is the National Evaluation Framework, which leads up to two items on the second tier. One is Site-Specific Evaluation Plans, which leads up to Data Collection and Analysis activities on the third tier, up to Site Evaluation Reports on the fourth tier, and up to the National Evaluation Findings Report on the top tier. The other item on the second tier is Review Evaluation Plans, which leads to Miami Monitor and Support on the third tier, up to Miami Evaluation Report on the fourth tier, and into the National Evaluation Findings Report on the top tier.


The evaluation of each UPA/CRD site will involve several steps. With the exception of Miami, where the national evaluation team is serving in a limited role of review and support to the local partners, the national evaluation team will work closely with the local partners to perform the following activities and provide the following products:

  • a site-specific strategy guided by the NEF;
  • a site-specific evaluation plan that describes the strategy and provides a high-level view of all the test plans needed, the roles and responsibilities, and the schedule;
  • multiple site-specific test plans that provide complete details on how the data collection and analysis activity will be implemented;
  • collection of one year of pre-deployment and one year of post-deployment data;
  • analysis of the collected data; and
  • site-specific evaluation reports and a National Evaluation Findings Report.

The NEF provides guidance to the local sites in designing and deploying their projects, such as by identifying the need to build in data collection mechanisms if such infrastructure does not already exist. To measure the impact of the congestion strategies, it is essential to collect both the "before" and "after" data for many of the measures of effectiveness identified in the NEF. Also important is establishing as many common measures as possible that can be used at all of the sites to enable comparison of findings across the sites. For example, a core set of standardized questions and response categories for traveler surveys will be prepared. Questions may need to be tailored or added to reflect the specific congestion strategies and local context for each site, such as road names or transit lines, but striving for comparability among sites will be a goal of the evaluation.
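To illustrate how a common core of survey questions might be tailored to each site, the following Python sketch fills shared question templates with site-specific facility names. The question wording, site entries, and function name are hypothetical illustrations, not the actual national evaluation survey instrument.

    # Illustrative sketch only: shared core survey questions tailored with
    # site-specific details. Question wording and site entries are invented,
    # not the actual national evaluation instrument.
    CORE_QUESTIONS = [
        "How often did you drive alone on {facility} during the past week?",
        "Has your usual departure time changed since tolling began on {facility}?",
    ]

    SITE_CONTEXT = {
        "Seattle/LWC": {"facility": "SR 520"},
        "Minnesota": {"facility": "I-35W"},
    }

    def build_survey(site: str) -> list[str]:
        """Fill the shared question templates with this site's road names."""
        return [q.format(**SITE_CONTEXT[site]) for q in CORE_QUESTIONS]

    for question in build_survey("Seattle/LWC"):
        print(question)

Because every site draws on the same core templates, responses to these questions can be compared directly across sites while still reflecting local road names and transit lines.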

A traditional "before and after" study is the recommended analysis approach for quantifying the extent to which the strategies affect congestion in the UPA/CRD sites. In the "before" or baseline condition, measures of effectiveness will be collected before the deployments become operational. For the "after" or post-deployment period, the same measures will be collected to examine the effects of the strategies. The analysis approach will track how the performance measures changed over time (trend analysis) and examine the degree to which they changed between the "before" and "after" periods. Whenever possible, field-measured data will be used to generate the measures of effectiveness.
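As a concrete illustration of this approach, the following Python sketch compares mean peak-period travel times between the baseline and post-deployment years. The file names, column names, and the use of a Welch t-test are assumptions for illustration; the actual measures, data sources, and statistical methods will be specified in the test plans.

    # Minimal before/after comparison sketch. File and column names are
    # hypothetical; actual data sources are defined in the test plans.
    import pandas as pd
    from scipy import stats

    # Assumed layout: one record per peak-period trip, with a
    # travel_time_min column.
    before = pd.read_csv("sr520_travel_times_baseline.csv")
    after = pd.read_csv("sr520_travel_times_post.csv")

    # Measure of effectiveness: mean peak-period travel time.
    before_mean = before["travel_time_min"].mean()
    after_mean = after["travel_time_min"].mean()
    pct_change = 100 * (after_mean - before_mean) / before_mean

    # Simple significance check on the before/after difference.
    t_stat, p_value = stats.ttest_ind(
        before["travel_time_min"], after["travel_time_min"], equal_var=False
    )

    print(f"Baseline mean travel time: {before_mean:.1f} min")
    print(f"Post-deployment mean: {after_mean:.1f} min ({pct_change:+.1f}%)")
    print(f"Welch t-test: t = {t_stat:.2f}, p = {p_value:.4f}")

The same comparison can be repeated month by month to support the trend analysis described above, helping separate deployment effects from seasonal patterns and exogenous factors.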

3.3 U.S. DOT Four Questions and Mapping to 12 Analyses

Table 3-1 shows the four "Objective Questions" that the U.S. DOT has directed the national evaluation team to address.9 The analyses present what must be studied to answer the four objective questions. Table 3-2 identifies the 12 evaluation analyses described in the National Evaluation Framework and shows how they relate to the four objective questions. These analyses from the NEF form the basis of the evaluation plans at the UPA/CRD sites, including Seattle/LWC.

Table 3-1. U.S. DOT National Evaluation "Objective Questions"

Objective Question #1 – How much was congestion reduced in the area impacted by the implementation of the tolling, transit, technology, and telecommuting strategies? It is anticipated that congestion reduction could be measured by one of the following measures, and will vary by site and implementation strategy:
  • reductions in vehicle trips made during peak/congested periods;
  • reductions in travel times during peak/congested periods;
  • reductions in congestion delay during peak/congested periods (see the illustrative sketch following this table); and
  • reductions in the duration of congested periods.

Objective Question #2 – What are the associated impacts of implementing the congestion reduction strategies? It is anticipated that impacts will vary by site and that the following measures may be used:
  • increases in facility throughput during peak/congested periods;
  • increases in transit ridership during peak/congested periods;
  • modal shifts to transit and carpools/vanpools;
  • traveler behavior change (e.g., shifts in time of travel, mode, route, destination, or forgoing trips);
  • operational impacts on parallel systems/routes;
  • equity impacts;
  • environmental impacts;
  • impacts on goods movement; and
  • effects on businesses.

Objective Question #3 – What are the non-technical success factors with respect to the impacts of outreach, political and community support, and institutional arrangements implemented to manage and guide the implementation?

Objective Question #4 – What are the overall costs and benefits of the deployed set of strategies?
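To make the congestion delay measure under Objective Question #1 concrete, the short Python sketch below computes vehicle-hours of delay from 5-minute speed and volume records. The segment length, free-flow speed, and sample values are assumptions for illustration only, not actual SR 520 data.

    # Illustrative vehicle-hours-of-delay calculation from 5-minute
    # detector records; segment length, free-flow speed, and the sample
    # values are assumed.
    SEGMENT_MILES = 1.0
    FREE_FLOW_MPH = 60.0

    records = [  # (average speed in mph, vehicle count) per 5-minute interval
        (58.0, 400),
        (35.0, 620),
        (22.0, 700),
    ]

    delay_veh_hours = 0.0
    for speed_mph, vehicles in records:
        actual_hr = SEGMENT_MILES / speed_mph
        free_flow_hr = SEGMENT_MILES / FREE_FLOW_MPH
        # Delay accrues only when travel is slower than free flow.
        delay_veh_hours += max(0.0, actual_hr - free_flow_hr) * vehicles

    print(f"Vehicle-hours of delay: {delay_veh_hours:.1f}")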


Table 3-2. U.S. DOT Objective Questions vs. Evaluation Analyses

Objective Question #1 – How much was congestion reduced?
  • Analysis #1 – Congestion

Objective Question #2 – What are the associated impacts of the congestion reduction strategies?
  • Analysis #2 – Strategy Performance: Tolling
  • Analysis #3 – Strategy Performance: Transit
  • Analysis #4 – Strategy Performance: Telecommuting/TDM
  • Analysis #5 – Strategy Performance: Technology
  • Analysis #6 – Associated Impacts: Safety
  • Analysis #7 – Associated Impacts: Equity
  • Analysis #8 – Associated Impacts: Environmental
  • Analysis #9 – Associated Impacts: Business Impacts

Objective Question #3 – What are the non-technical success factors?
  • Analysis #10 – Non-Technical Success Factors

Objective Question #4 – What is the overall cost and benefit of the strategies?
  • Analysis #11 – Cost-Benefit Analysis

The analyses associated with Objective Question #2 are of two types. The first four analyses focus on the performance of the deployed strategies associated with each of the 4Ts. These analyses will examine the specific impacts of each deployed project/strategy, and, to the extent possible, associate the performance of specific strategies with any changes in congestion. The second type of analysis associated with Objective Question #2 focuses on specific types of impacts, e.g., "equity" and "environmental."

Each of the 12 evaluation analyses was further elaborated into one or more hypotheses for testing. In cases where the analysis is not guided by a hypothesis per se, such as the analysis of the non-technical success factors, specific questions are stated rather than hypotheses. Next, measures of effectiveness (MOEs) were identified for each hypothesis, and then the data required for each MOE were specified.
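The resulting hierarchy (analysis, then hypotheses or questions, then MOEs, then data) can be pictured as a nested structure. The Python fragment below illustrates the shape of this decomposition; the specific hypotheses, MOEs, and data items shown are invented examples, not the actual NEF content.

    # Hypothetical fragment showing how each analysis decomposes into a
    # hypothesis (or question), MOEs, and the data each MOE requires.
    # All entries are invented for illustration.
    nef_fragment = {
        "Analysis #1 - Congestion": {
            "hypothesis": "Tolling SR 520 will reduce congestion delay during peak periods.",
            "moes": {
                "Average peak-period travel time": [
                    "archived loop detector data",
                    "travel time runs",
                ],
                "Duration of the congested period": [
                    "archived 5-minute speed data",
                ],
            },
        },
        "Analysis #10 - Non-Technical Success Factors": {
            # Guided by questions rather than a testable hypothesis.
            "question": "How did outreach and political support affect deployment?",
            "moes": {
                "Stakeholder perceptions": ["interviews", "media scan"],
            },
        },
    }

    for analysis, detail in nef_fragment.items():
        print(analysis)
        for moe, data_needs in detail["moes"].items():
            print(f"  MOE: {moe} <- data: {', '.join(data_needs)}")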

3.4 Seattle/LWC UPA National Evaluation Process

Figure 3-3 presents the Seattle/LWC UPA national evaluation team. The team includes the U.S. DOT national evaluation leader, the COTM, the U.S. DOT evaluation team, the FHWA point of contact, and the Battelle team. Representatives from the partnership agencies are involved in the development of the UPA national evaluation.

Figure 3-3. Seattle/LWC UPA National Evaluation Team. Organization chart with a main vertical branch and three lateral branches. The main vertical branch is the COTM, FHWA Office of Operations. Under it is Project Manager, Battelle, followed by the Principal Investigator and Deputy PM, Battelle, followed by the Seattle Site Leader, and leading to the Seattle Site Evaluation Team, with members representing Tolling, Transit, Telecommuting/TDM, and Technology. The first lateral branch is the U.S. DOT UPA Evaluation team, which leads into the COTM. The second lateral branch is the Seattle Evaluation Point of Contact, which branches off from the COTM and leads into the Seattle Site Leader. The third lateral branch includes the National Evaluation 4T Experts as well as the Evaluation Specialists.


Figure 3-4 presents the process for developing and conducting the national evaluation of the Seattle/LWC UPA projects. The major steps are briefly discussed following the figure.

Figure 3-4. Seattle/LWC UPA National Evaluation Process. Flow chart indicating milestones, beginning with the Kick-off Conference Call, and proceeding through workshop, evaluation, test plan, data collection, and analysis and evaluation activities.


Kick-Off Conference Call. The kick-off conference telephone call, held on June 4, 2008, introduced the Seattle/LWC partners, the U.S. DOT representatives, and the Battelle team members. The Seattle/LWC UPA projects and deployment schedule were discussed, and the national evaluation approach and activities were presented. A PowerPoint presentation and various handouts were distributed prior to the conference call.

Site Visit and Workshop. Members of the U.S. DOT evaluation team and the Battelle team convened in Seattle on July 28 and 29, 2008. King County Metro provided a bus tour of the SR 520 corridor on July 28 that included representatives of the U.S. DOT, the Battelle team, and the local partners. A day-long evaluation workshop was held on July 29. Members of the U.S. DOT, Battelle, and local agency teams discussed potential evaluation strategies, including analyses, hypotheses, data needs, and schedule. A PowerPoint presentation containing the preliminary evaluation strategy, analysis, data needs, and other information was distributed prior to the workshop. A summary of the workshop discussion was prepared and distributed to participants after the workshop.

Seattle/LWC UPA National Evaluation Strategy. The Seattle/LWC UPA national evaluation strategy was revised based on the discussion at the workshop and the completion of the National Evaluation Framework. The Seattle/LWC UPA evaluation strategy included the hypotheses/questions, measures of effectiveness, and data needs for each of the 12 analyses. The strategy also included a preliminary pre- and post-deployment data collection schedule, possible issues associated with the evaluation, and approaches for addressing exogenous factors. The Seattle/LWC UPA national evaluation strategy was presented in a PowerPoint presentation, which was distributed to representatives of the U.S. DOT team and the Seattle partners on September 18, 2008. A conference call was held on October 7, 2008, to review and discuss the evaluation strategy. There was agreement among all parties on the Seattle/LWC UPA evaluation strategy, and formal approval was subsequently received from the U.S. DOT to proceed with development of the Seattle/LWC UPA national evaluation plan.

Seattle/LWC UPA National Evaluation Plan. This document constitutes the Seattle/LWC UPA national evaluation plan. The report provides background on the U.S. DOT UPA, describes the Seattle/LWC UPA projects, and presents the Seattle/LWC UPA evaluation plan and preliminary test plans. The draft report was distributed in July 2009 and reviewed with the U.S. DOT and Seattle/LWC UPA partners during an on-site meeting or conference call; the plan was then finalized based on the comments and discussions from that review. The document will guide the overall conduct of the Seattle/LWC UPA national evaluation.

Seattle/LWC UPA National Evaluation Test Plans. Based on approval from the U.S. DOT, the Battelle Seattle/LWC UPA evaluation team will proceed with developing separate, more detailed test plans for each type of data needed for the evaluation (e.g., traffic, safety). The preliminary test plans contained in the evaluation plan provide the basis for the more fully developed test plans. In November and December 2009, the individual test plans will be developed and reviewed with representatives from the U.S. DOT and local partnership agencies.

Baseline Data Collection. Based on approval of the individual Seattle/LWC UPA evaluation test plans, data collection activities for the pre-deployment period will be initiated. The general strategy is to collect one full year of baseline data, although when historical, archived data are available and helpful in establishing long-term trends and the influence of exogenous factors (such as gas prices), they will be utilized. The specific timing of baseline data collection will be identified in the full test plan documents to be developed in November and December 2009. By that time, WSDOT expects to know the specific estimated operational date for the SR 520 tolling and other major UPA projects. (Currently, the WSDOT schedule calls for these projects to be operational as early as November 1, 2010, but no later than June 30, 2011.) One project, the Redmond P&R/TOD, became operational on June 30, 2009, so the data collection timeline associated with that project is different.

Post-Deployment Data Collection. The general strategy is to collect one full year of post-deployment data. As with the baseline data collection, the final timing of post-deployment data collection will be identified in the full test plan documents after WSDOT has specified a final deployment schedule. Post-deployment data collection will begin sometime between November 2010 and July 2011, depending on the local partners' final deployment schedule.

Analysis and Evaluation Reports. Analysis of baseline data will begin once all of the data have been collected, sometime between November 2010 and July 2011, depending on the local partners' final deployment schedules. Analysis of early (e.g., the first several months of) post-deployment data will begin shortly after the start of post-deployment data collection. A technical memorandum on early evaluation results, based on four or five months of post-deployment data, will be completed midway through the one-year post-deployment period. The final evaluation report is expected to be completed by approximately February 2012.

9"Urban Partnership Agreement Demonstration Evaluation - Statement of Work," United States Department of Transportation, Federal Highway Administration; November 29, 2007.