Emergency Transportation Operations

Section 4. Summary and Recommendations

Summary of Findings

The following is a summary of the major findings from research conducted as part of this task order:

  • Transportation agencies define incidents differently than emergency service providers. Transportation agencies typically define an incident as any unexpected event that causes a temporary reduction in the traffic-carrying ability (i.e., capacity) of a facility. Emergency service providers use the word "incident" to describe any event to which they must respond, whether it is on the roadway or not; usually these events involve the potential for loss of life, injuries, property damage, or criminal activity.
  • While the actual measures vary slightly from location to location and between agencies, most transportation agencies and emergency service providers currently use performance measures to assess how well their incident management systems are functioning.
  • Both transportation agencies and emergency service providers recognize the need for collecting and storing information about incidents. Transportation agencies generally collect information about all aspects of traffic incidents (such as the arrival and departure times of all response vehicles). Emergency service providers generally collect information related only to their own agency (e.g., the response time of fire trucks to the incident scene).
  • Transportation agencies generally use performance measures to quantify the effectiveness of the overall incident management process, while emergency service providers generally use the information as a resource management tool to justify additional staffing and equipment.
  • Most transportation agencies use the following measures to assess the performance of their incident management systems:
    • Number (or frequency) of incidents;
    • Detection time;
    • Response time; and
    • Clearance time.
  • For the most part, emergency service providers use "response time" and time spent on scene. Measures such as the number of secondary incidents and the time to return to normal flow are difficult to define and collect without relying on operator judgment.
  • While most transportation agencies indicated that they define "detection time" as the time differential between when an incident occurred and when it was first detected or reported to any official response agency, most only record "detection time" as the time of day at which the incident was reported to the TMC.
  • Both transportation agencies and emergency service providers use "response time" as a critical performance measure; however, the operational definition of this measure varies significantly. Transportation agencies generally define "response time" as the time differential between when an incident was reported to the TMC to when the first responder from any official response agency arrived on-scene. Emergency service providers generally define "response time" as the time differential between when a call was received by their dispatcher to when their first response vehicle arrived on-scene.
  • The operational definition of "clearance time" also varies considerably between transportation agencies and emergency service providers. Transportation agencies typically define "clearance time" as the time differential between when the first responders arrive on the scene and when the capacity of the facility has been fully restored (i.e., when the incident has been removed from the travel lanes). Emergency service providers define clearance time as the point at which all or most of their response equipment is again ready to respond to another event at another location. A brief worked example of these differing definitions follows this list.
  • Emergency service providers define incident duration (or total time spent at the scene) as the time differential between when they first received a request for service (i.e., issued an alarm) to when they have been cleared to leave an incident scene. Transportation agencies generally define incident duration as the time from when a TMC is alerted of an incident until when the incident has been cleared from the roadway.
  • The performance measures used by emergency service providers (and the way they are defined) are fairly standard across their industry. National reporting databases (such as the National Fire Incident Reporting System) have led emergency service providers to adopt common terminology and collect data in a consistent manner. For transportation agencies, the types of performance measures used and the manner in which they are defined are local decisions.
  • Many transportation agencies now produce performance reports routinely, typically on a monthly, quarterly, or annual basis. Mid-level administrators generally use the monthly and quarterly reports to help manage assets and resources, while higher-level administrators use the annual reports.
  • While most agencies are willing to share incident information and performance measures with other agencies, this is rarely done, except on an as-needed basis to evaluate a response or to address a specific problem that occurred at a particular incident.
  • At some locations, emergency service providers and transportation agencies are beginning to work toward integrating their dispatching and incident management record-keeping systems. This should allow for more accurate and higher-quality data from which to develop incident management performance measures.
  • Most transportation agencies use a combination of automated and paper-based systems to gather performance measure data, but one common complaint about these systems is that the quality of information in their databases needs to be improved significantly.
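
To make these definitional differences concrete, the sketch below (a minimal Python example with a hypothetical incident timeline and illustrative field names, not drawn from any particular agency's records) shows how the same incident yields different "response time" and "clearance" values under the transportation-agency and emergency-service definitions summarized above.

```python
from datetime import datetime

# Hypothetical timeline for a single incident (all timestamps illustrative)
incident = {
    "occurred":                 datetime(2004, 5, 3, 7, 58),  # estimated time of the crash
    "call_to_dispatcher":       datetime(2004, 5, 3, 8, 1),   # 911 call reaches fire/EMS dispatch
    "reported_to_tmc":          datetime(2004, 5, 3, 8, 2),   # first report reaches the TMC
    "first_responder_on_scene": datetime(2004, 5, 3, 8, 9),   # first responder from any agency
    "fire_unit_on_scene":       datetime(2004, 5, 3, 8, 12),  # the agency's own first unit arrives
    "all_lanes_open":           datetime(2004, 5, 3, 8, 55),  # facility capacity fully restored
    "units_back_in_service":    datetime(2004, 5, 3, 9, 10),  # equipment ready for the next call
}

# Transportation-agency definitions (as summarized above)
tmc_response_time  = incident["first_responder_on_scene"] - incident["reported_to_tmc"]
tmc_clearance_time = incident["all_lanes_open"] - incident["first_responder_on_scene"]

# Emergency-service definitions (as summarized above)
ems_response_time = incident["fire_unit_on_scene"] - incident["call_to_dispatcher"]
ems_clearance_at  = incident["units_back_in_service"]  # "clearance" is a point in time here

print(tmc_response_time, ems_response_time)  # 0:07:00 vs 0:11:00 for the same incident
print(tmc_clearance_time, ems_clearance_at)  # 0:46:00 vs 2004-05-03 09:10:00
```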

Recommendations

First, incident management officials need to recognize that a "one size fits all" approach to incident management performance measures may not be possible. The same set of performance measures used to evaluate the more routine types of traffic incidents (such as a two-vehicle collision or a stalled vehicle) cannot be used to assess the performance of the system during complex, major events (such as a multiple-vehicle collision involving multiple fatalities and/or serious injuries with major structural damage). It is recommended, however, that all agencies reconstruct and review the timeline of response events for such incidents to identify and resolve potential problems with the response prior to another major event.

For the more "routine" type of incidents, there seems to be a need for two sets of performance measures. The first set would be used to describe the overall effectiveness and responsiveness of the incident management process in a region. Administrators in the various response agencies could use this first set of performance measures to identify mechanisms for improving response and coordination between agencies. This first set would include measures such as the following:

  • Incident Notification Time – This would represent the time it takes for all the appropriate response agencies to become aware of an incident. It would be computed as the time differential between the first detection or report of an incident to any agency (whether fire, police, 911 dispatch, or a TMC) and when each of the other response agencies also receives notification of the incident. This performance measure would need to be computed separately for each of the official response agencies.
  • First-Responder Response Time – This would represent what many transportation agencies and emergency service responders are calling "response time". This performance measure would be the time differential between the first report of an incident to any agency and when the first official responder from any agency arrives on the scene.
  • Incident Assessment Time – This would represent the time it takes the first responder to determine what needs to be done to clear the incident and to begin restoring roadway capacity. This performance measure would be defined as the time differential between when the first responder arrived on the scene and when the first action is taken to fully or partially restore capacity (for example, opening one previously blocked lane of traffic).
  • Total Blockage Duration – This would represent the total amount of time that freeway capacity is reduced. This performance measure would be defined as the time differential between when the first responder arrived on the scene and when freeway capacity was fully restored (i.e., all lanes opened).
  • Total Incident Duration – This would represent the total amount of time that the incident affected traffic operations. This performance measure would be defined as the time differential between when the event was first reported to any official response agency and when the last official response vehicle left the scene.
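
The following is a minimal sketch, in Python, of how these five regional measures might be derived from a single merged timeline assembled from all agencies' records. The field names and timestamps are hypothetical and serve only to illustrate the arithmetic.

```python
from datetime import datetime

# Hypothetical merged timeline for one incident, assembled from all agencies' records
timeline = {
    "first_report":        datetime(2004, 5, 3, 8, 1),    # earliest report to any agency
    "agency_notified": {                                   # when each agency learned of the incident
        "police": datetime(2004, 5, 3, 8, 1),
        "fire":   datetime(2004, 5, 3, 8, 3),
        "tmc":    datetime(2004, 5, 3, 8, 5),
    },
    "first_arrival":       datetime(2004, 5, 3, 8, 9),    # first official responder on the scene
    "first_lane_reopened": datetime(2004, 5, 3, 8, 30),   # first action restoring some capacity
    "all_lanes_open":      datetime(2004, 5, 3, 8, 55),   # capacity fully restored
    "last_unit_departs":   datetime(2004, 5, 3, 9, 20),   # last official response vehicle leaves
}

# Incident Notification Time, computed separately for each official response agency
incident_notification_time = {
    agency: notified - timeline["first_report"]
    for agency, notified in timeline["agency_notified"].items()
}

first_responder_response_time = timeline["first_arrival"] - timeline["first_report"]
incident_assessment_time      = timeline["first_lane_reopened"] - timeline["first_arrival"]
total_blockage_duration       = timeline["all_lanes_open"] - timeline["first_arrival"]
total_incident_duration       = timeline["last_unit_departs"] - timeline["first_report"]
```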

Other statistics that agencies may want to collect include the following; a brief sketch of how these frequencies might be tallied follows the list:

  • The frequency (or percentage of total incidents) at which each official response agency was the "first detector."
  • The frequency (or percentage of total incidents) at which each official response agency was the "first responder."
  • The frequency (or percentage of total incidents) where capacity was partially restored.
  • The frequency (or percentage of total incidents) at which each official response agency was the last to leave the scene.
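
A brief sketch of how these frequency statistics might be tallied from per-incident records is shown below; the agency names and role fields are hypothetical.

```python
from collections import Counter

# Hypothetical per-incident records noting which agency filled each role
incidents = [
    {"first_detector": "police", "first_responder": "police",         "last_to_leave": "tmc"},
    {"first_detector": "tmc",    "first_responder": "fire",           "last_to_leave": "fire"},
    {"first_detector": "police", "first_responder": "service_patrol", "last_to_leave": "tmc"},
]

def role_shares(records, role):
    """Percentage of total incidents in which each agency filled the given role."""
    counts = Counter(record[role] for record in records)
    total = len(records)
    return {agency: 100.0 * n / total for agency, n in counts.items()}

print(role_shares(incidents, "first_detector"))  # police ~66.7%, tmc ~33.3%
print(role_shares(incidents, "last_to_leave"))   # tmc ~66.7%, fire ~33.3%
```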

Obviously, this evaluation becomes more feasible and practical at locations where the record-keeping systems of all the response agencies are integrated and coordinated. Performing this type of analysis requires that the evaluator be able to construct a complete timeline across agencies for every incident. Recognizing its complexity, it is recommended that this type of evaluation occur annually in most regions.

The other set of performance measures that agencies may want to consider collecting would be those directly related to their own specific mission in the incident management process. An example of this type of performance measure is the "response time" that most emergency service providers and service patrol operations are currently collecting. These performance measures would generally be geared toward helping agencies track the use of resources or assess an agency's performance against a specific objective (e.g., a fire department's objective of a 3-minute response time to all alarms).

In most locations in the United States, the role of transportation agencies (with the exception of service patrols) is one of support and demand management. For the agency-specific performance measures, transportation agencies, and in particular TMCs, need to develop objectives and performance measures that relate more directly to their specific mission in the incident response process. Examples of these types of performance measures might include the following; a brief sketch of the first measure follows the list:

  • The time lag between when an incident was reported to a TMC and when devices were activated on the roadway;
  • The average delay to motorists through an incident site;
  • The average queue length associated with different incident types; and
  • The average amount of diversion generated by the traffic control devices used in managing an incident.
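
As an example of the first measure, the sketch below computes the lag between an incident report reaching a TMC and the first field device activation, using hypothetical log entries and device names.

```python
from datetime import datetime

# Hypothetical TMC log for one incident (device names and timestamps illustrative)
reported_to_tmc = datetime(2004, 5, 3, 8, 5)
device_activations = [
    ("DMS-101 message posted",     datetime(2004, 5, 3, 8, 9)),
    ("Ramp metering plan changed", datetime(2004, 5, 3, 8, 14)),
]

# Time lag between the report reaching the TMC and the first device activation
first_activation = min(timestamp for _, timestamp in device_activations)
activation_lag = first_activation - reported_to_tmc
print(activation_lag)  # 0:04:00
```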

How to measure these performance measures directly in the field, and how they relate to the objectives of a region's incident management process, are subjects for future research.

Suggestions for Future Research

Historically, transportation research has focused on identifying techniques and strategies for improving the "response" side of the equation (i.e., how do we detect incidents more quickly, how can we get police and fire agencies to respond to incidents faster, how can we clear incidents faster, etc.). While reducing response times and restoring capacity is critical to managing an incident, it is only half of the equation and, to a large degree, out of the direct control of the transportation agency. While coordinating responses with emergency service providers is essential and perhaps offers the largest potential reduction in congestion, transportation agencies cannot exert much influence over how quickly emergency service providers respond to and clear incidents. Because most of the response process is out of the control of a transportation agency, we believe that the research emphasis needs to shift away from what transportation agencies can do to reduce detection and response times and focus more on the harder questions of how incident management systems can be used to influence the "demand" side of the equation. Examples of the types of questions that need to be explored through additional research include the following:

  • What are agencies trying to accomplish with their incident management systems? By activating traffic control and motorist information systems in response to incidents, what kind of impact on traffic operations are agencies trying to achieve?
  • How effective are the response techniques (the DMSs, the ramp metering systems, the lane control signals, etc.) at reducing the delay experienced by motorists, encouraging diversion, etc.? How do agencies measure the effectiveness of these devices and strategies in real time?
  • How do we need to change our detection and surveillance systems to be able to measure the effectiveness of our incident management strategies?
  • What are the incremental impacts of combining traffic control devices (e.g., lane control signals coupled with DMSs, the systematic use of ramp meters, etc.) during incident conditions?
