Section 3. Survey of Incident Responders
A survey instrument was developed to obtain information on how transportation, law enforcement, fire, and EMS/rescue agencies measure and report incident management performance measures in their jurisdiction. The survey instrument solicited information related to the following issues:
- How incidents are defined by agencies in their jurisdiction;
- How information about incidents is tracked and recorded;
- What, if any, measures they are collecting, calculating, or recording regarding incidents;
- The costs of collecting, processing, and reporting the measures and source data;
- If agencies are not using any measures, why not;
- If they are planning to implement measures, why, when, and how;
- How each measure is defined and calculated or measured;
- How the measures were decided upon and by whom;
- How long performance measure data have been collected and calculated;
- To whom the measures are reported, and how often;
- With whom the measures are shared;
- What the recipients do with the measures;
- What decisions are made based on or are influenced by the measures;
- How the recipients feel about the measures (i.e., are they meaningful, are they timely, do they provide the information necessary for effective decision-making);
- The types of data collected about incidents, and the sources of the data;
- Whether similar data exists from other sources (especially other incident management partner agencies), whether the data from the different sources are compared to one another, and any findings from the comparison;
- What issues exist regarding measuring incident management performance, and how they have been dealt with;
- What the best candidate measures are, whether or not the agency is currently recording measures.
Methodology
TTI used a telephone-interview format to collect the information from the different transportation, law enforcement, fire, and EMS/rescue agencies. A series of questions was developed that represented the basic level of information to be obtained from each agency. A copy of the survey document is contained in Appendix D.
A pilot test of the survey instrument was performed prior to conducting the actual survey. The purpose of the pilot test was to verify that the wording of the questions was clear and concise, to fine-tune the data collection methodology, and to assess whether the questions provided meaningful responses. Based on the results of the pilot test, the survey document was revised slightly to clarify some of the questions.
To conduct the survey, members of the research team initially contacted each of the identified individuals by telephone to request their participation in the survey. During this initial contact, the researcher arranged a convenient day and time to conduct the survey or identified alternative contacts. The researcher also obtained a mailing address or e-mail address to which the survey questions could be sent, and then forwarded the questions to the respondent before the interview so that he or she would have adequate time to prepare responses.
At the scheduled day and time, the researcher contacted the survey respondent by telephone and administered the survey. The researcher documented the respondent's answers to each question and asked probing questions to clarify the responses. The responses were then coded into a spreadsheet to aid in analysis. This spreadsheet has been provided to FHWA as a separate deliverable.
Response Rate
A total of 54 individuals from 30 locations were identified as potential respondents to the survey. These individuals were identified from the following sources:
- The IEEE Incident Management Working Group,
- The ITE Traffic Incident Management Committee,
- The TRB Freeway Operations Committee,
- Personal contacts, and
- Internet searches of functioning traffic management centers.
A total of 23 individuals from 19 locations actually participated in the survey. The remainder of the individuals originally identified either did not reply to initial inquiries about participating in the survey, elected not to participate in the survey, or indicated that they did not have an active incident management program in their area.
TTI planned to use representatives from the transportation agencies to identify appropriate individuals to survey in the law enforcement and emergency service agencies. One problem with this approach was that respondents were often unwilling to provide contact information for representatives of other agencies responsible for incident management, either because they did not know the correct person at the appropriate level or because they did not want to increase those individuals' workload by asking them to respond to the survey. Therefore, most of the insight into the emergency services perspective was obtained through the literature and a limited number of survey responses.
Findings
Definition of Incident
Most of the transportation agencies surveyed agree with the TMDD definition of an incident. Most agencies define an incident as any unexpected event that causes a temporary reduction in capacity. The term "temporary" is an important modifier because it implies that after the agency performs some type of initial operation or response (i.e., clearing wrecked vehicles from the travel lanes, removing a spilled load, etc.) the roadway can be reopened and normal capacity can be resumed. For the most part, transportation agencies do not view highway maintenance and reconstruction projects or non-emergency events as incidents, because these events have planned means of accommodating traffic flow.
Most transportation agencies do not consider the long-range effects of an incident as part of the initial incident. For example, most transportation agencies would not consider the repair of a collapsed bridge deck, or the removal of spilled cargo that has been pushed beyond the shoulder area as part of an incident, even though an event that they would describe as an incident was the primary cause of the loss of capacity. This is especially true when recovery efforts extend over multiple days. Most transportation agencies tend to classify incident events as being over once the initial response to the incident event has left the scene and when more traditional traffic control (i.e., work zone type traffic control) has been established at the scene.
Interestingly, many transportation agencies also classify unexpected weather events (particularly snow and ice) as an "incident," because they typically cause temporary reductions in capacity (i.e., once the snow event is over and the roadways are cleared, the "incident" is over), increase the potential for secondary events (such as crashes and stalled vehicles), and more importantly, require a "response" from the transportation agency (dispatching of snowplows and de-icing equipment, etc.).
Some agencies also classify events involving select sensitive users, such as school buses, railroad crossing, etc. as incidents, primarily because these events may require special attention for political or public welfare reasons.
Generally, events have to be on a roadway facility itself or in the right-of-way to be considered as an incident by transportation agencies. Events that occur off the right-of-way, such as a structure fire, are not routinely thought of as "incidents" by transportation agencies. Some agencies do log these events in their incident management software and may broadcast messages about these events through their motorist information systems.
Classification of Incidents
One goal of incident management is to ensure that the appropriate response personnel and equipment are provided at every incident. To aid in determining the appropriate level of response, many transportation and emergency service providers have developed systems for classifying incidents. Table 2 shows how the survey respondents replied to questions concerning methods and criteria for classifying incidents in their local area. The table also shows how the severity of the incident affects each agency's response decisions.
Agency | Collision | Overturned Vehicle | Stall in Lane | Abandoned Vehicle In Lane | Stall on Shoulder | Vehicle Fire | Hazmat Spill | Abandoned Vehicle On Shoulder | Public Emergency | Debris Roadway | Other |
---|---|---|---|---|---|---|---|---|---|---|---|
Kansas DOT—Kansas City | Only incidents requiring police accident reports are documented. Kansas DOT is currently in the process of building a TMC. They hope to have it operational by the end of this year to early next year. Currently, the state police and service patrol (operated by the police) are the only incident management elements in place. The police provide the DOT with copies of the accident reports for accidents on their facilities. | ||||||||||
New Jersey DOT | Downed Utility Pole; downed signal pole; anything blocking a lane or shoulder | ||||||||||
Arizona DOT | |||||||||||
Ohio DOT—Columbus | Unexpected weather change | ||||||||||
Tennessee DOT | Anything affecting traffic flow | ||||||||||
Phoenix, AZ Fire Dept. | |||||||||||
Maryland State Hwy Admin—CHART | Anything affecting traffic flow | ||||||||||
Texas DOT—Austin | |||||||||||
Texas DOT—San Antonio | Weather; construction; maintenance | ||||||||||
Minnesota DOT—Minneapolis | |||||||||||
Caltrans—San Diego | |||||||||||
Incident Management Services—Houston | |||||||||||
Southeast Michigan COG—Detroit | |||||||||||
City of Houston—Police Dept. | Assist TxDOT | ||||||||||
New York DOT | Brush fire, pedestrian in restricted area, road work, traffic signal malfunction, non-recurring severe congestion | ||||||||||
Colorado DOT Lakewood | |||||||||||
Texas DOT—Houston | |||||||||||
Illinois DOT—Chicago | Ice on pavement, water main breaks, flooding, anything that blocks one or more lane for 30 minutes or more, school bus involvement, railroad crossing involvement, fatality. | ||||||||||
North Carolina DOT | Anything affecting traffic flow | ||||||||||
Connecticut DOT |
No common classification scheme exists that describes the severity of the incident and/or the urgency of the response. For the most part, transportation agencies tend to classify incidents into two to three categories based upon the degree to which traffic is likely to be impacted (severity) and/or the number of lanes blocked. Some of the criteria that transportation agencies use to classify incidents include the following:
- Number of lanes blocked;
- Estimated duration of blockage;
- Severity and/or number of injuries involved;
- Time-of-day;
- Presence of hazardous materials;
- Degree of damage to vehicles and/or infrastructure;
- Type of vehicles involved (e.g., trucks, buses, etc.); and
- Number of vehicles involved.
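As an illustration, the two-to-three-level schemes described above can be sketched as a simple rule-based classifier. The thresholds and category names below are hypothetical, chosen only to mirror the kinds of criteria the agencies listed; no surveyed agency uses this exact logic.

```python
def classify_incident(lanes_blocked, est_duration_min, injuries, hazmat):
    """Return a coarse severity class from common agency criteria.

    All thresholds are illustrative assumptions, not agency policy.
    """
    # Hazmat presence, injuries, multiple blocked lanes, or a long
    # expected blockage typically push an event into the top category.
    if hazmat or injuries > 0 or lanes_blocked >= 2 or est_duration_min > 30:
        return "major"
    # A single blocked lane warrants more than a routine response.
    if lanes_blocked == 1:
        return "intermediate"
    # Everything else (e.g., a stall on the shoulder) is minor.
    return "minor"
```

In practice, as the survey shows, such rules are often applied informally as an operator judgment call rather than encoded in software.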
Emergency service providers, on the other hand, typically classify events based on the potential loss of life and/or the impact to public safety. Both of the emergency service providers use standards that have been defined by their industry as a means of classifying incidents. These standards take into account the presence of possible injuries or fatalities, and rely on dispatchers soliciting correct information from the individuals reporting the incidents.
Information Collected Per Incident
One attribute of a good performance measurement system is that the data needed to generate the performance measures be readily attainable in an economical manner. [1] This implies that in order for agencies to develop and use performance measures, the data must be readily available through their existing systems. Responders are more likely to compute performance measures if they are already collecting the data to support them. Part of this survey effort was to examine what data are currently being collected by different agencies and how.
Table 3 shows what information many of the transportation and emergency service providers are collecting about each incident event. Based on the survey responses, at a minimum, the following information is recorded by most agencies:
- The roadway name where the incident occurred;
- The name of a nearby cross-street or location;
- The location of the incident in the lanes (i.e., which lanes are blocked);
- The type of incident;
- The time at which the incident was detected or reported;
- The time the first response vehicle arrived on the scene; and
- The time the incident was cleared from the scene.
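A minimal record carrying these common data items might look like the following sketch. The class and field names are hypothetical stand-ins, not any agency's actual schema.

```python
from dataclasses import dataclass
from datetime import datetime
from typing import List, Optional

@dataclass
class IncidentRecord:
    # Hypothetical fields mirroring the minimum items most surveyed
    # agencies reported recording for each incident event.
    roadway_name: str                 # roadway where the incident occurred
    cross_street: str                 # nearby cross-street or location
    lanes_blocked: List[str]          # e.g., ["lane 1", "shoulder"]
    incident_type: str                # e.g., "stall", "collision"
    time_detected: datetime           # time detected or reported
    time_first_response: Optional[datetime] = None  # first arrival on scene
    time_cleared: Optional[datetime] = None         # cleared from scene
```

The optional fields reflect a point the survey makes repeatedly: arrival and clearance times are not recorded by every agency, so a realistic record must tolerate their absence.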
Agency | Criteria | Thresholds | Response Variation |
---|---|---|---|
New Jersey DOT | Major, Minor. | Major incidents defined as those lasting more than one hour while minor incidents defined as those lasting less than 1 hour. | Minor incidents—use ITS (DMS/HAR) if applicable. For major incidents, review to see if need to send IM response team. Team consists of state trooper and DOT traffic operations person, get to scene and try to speed clearance of incident. |
Arizona DOT | Level 1, 2, 3 | Level 1—fatality; unplanned closure in one or both directions affecting any state route; any incident involving HAZMAT, homicide, trains, or school buses. Level 2—traffic flow is restricted; requiring live AzDOT presence; fence cuts, livestock on roadway, or guard rail damage presenting hazard to motorists; red indication out / stop sign knockdown; large dead animal in lanes; roadway damage (large potholes, gravel on roadway); disabled vehicle blocking flow; structural damage that does not close hwy; threat of jumper that does not close hwy. Level 3—yellow/green indication out; debris not blocking roadway; disabled vehicle not blocking roadway; maintenance; anything that can be handled at supervisor discretion; anything not requiring immediate ADOT response | What changes is who gets notified and how urgently responses are needed from them. Level 1—notify Admin Major (includes ADOT Director, State Engineer, and District Engineer). Level 2—notify Maintenance Supervisor by pager or phone. Level 3—notify supervisors via email, phone, radio. |
Ohio DOT—Columbus | Severity, time-of-day, congestion level | Lane blockages of more than one minute warrants activating DMS; DMS messages updated as lane blockage changes; Service patrol will work incidents expected to be under 15 minutes to clear, otherwise call for tow trucks | Incident response plan (IRM) addresses how to handle major incidents, stalled vehicles, debris, roadwork, congestion, fire/HAZMAT, freeway diversion. For minor fender benders, execute only what is helpful to motorist that doesn't cause a lot of inconvenience. For major incidents (e.g., fatality) and EMS is on the scene, execute full plan immediately. |
Tennessee DOT | Long term—debriefings and updates | ||
Phoenix, AZ Fire Dept. | Use universal system of U.S. Fire Adm. (thru FEMA website) | Response based on Incident Management System (IMS)—developed in California, published 1985. Dispatchers—rotate | |
Maryland State Hwy Adm—CHART | Property damage; person injured/fatality; Hazmat; emergency roadwork—15 items out of FHWA Data Dictionary | If shutdown longer than 2 hrs, preplanned detour routes. Depending on magnitude of incident, different levels of notification are given to agencies. | |
Texas DOT—Austin | HCM Level of Service Criteria; Reported vs. verified | Compare current volume/occupancy measures to HCM thresholds. | No impact on operations—simply informational. Emergency services will look at speed. Haven't needed to classify incidents (respond to all incidents). Verified vs reported—if reported, will look to verify with CCTV and then clear. |
Texas DOT—San Antonio | Type of incident (i.e., debris, weather, accident). Severity of lanes closed; Severity of accident | Severity of lanes closed—2 or 3 lanes closed, classified as major incident. With crash scenes, major incident is one that requires EMS (get information via police). Major incident—when demand expected to exceed capacity. | TransGuide software system automatically prioritizes—major incidents over minor incidents, minor incident in open lane. System uses operator inputs (i.e., description of incidents) to drive scenario process. |
Minnesota DOT—Minneapolis | Major, Minor. | Judgment call by operator, based on past experience, type of incident, time-of-day, and expected duration of incident (i.e., any road closure, any incident during peak period, hazmat, or rollover classified as major) | Major incidents—place motorist information system in overdrive. Broadcast radio messages every 10 minutes. With major incident, use DMSs to direct motorists to tune to station and continuously broadcast incident information. Will also call other media outlets. May pull in other operators if many going on at same time. |
Caltrans—San Diego | Use California Highway patrol's radio call system (10 codes, 11 codes) | Highest level codes, Caltrans will dispatch response immediately. With other codes, will wait until officer on-site. Will change response or dispatch response based on officers needs. | |
Incident Management Services—Houston, TX | Only respond to major incident involving 18-wheeler rollovers/lost loads. | ||
Southeast Michigan COG—Detroit | No defined criteria (i.e., delay threshold severity). Michigan State Police Criminal Justice Information Center has a system to capture this information called the Automated Incident Command System (AICS). | There are no documented thresholds that I know of but there might be something defined by the State Police. They work by guidelines and training found in the Incident Command System (ICS). They also have a Computer Aided Dispatch (CAD) that dispatches the appropriate personnel for a particular event. | The dispatcher determines the appropriate response after assessing the call or by the person responding to the call once at the scene of the incident. Appropriate response scenarios might also be determined through the use of ICS and CAD systems. Assistance is provided by the Michigan Intelligent Transportation Systems (ITS) Center if it is a freeway incident through the use of the cameras. |
City of Houston, TX Police Dept. | Severity—Major/Minor; Location—Moving lane of traffic (right shoulder, left shoulder, lane(s) blocked—1 2 3 4 5 6 | Major = major freeway blockage; Minor = minimal freeway blockage | 90% of incidents detected by roving patrol; 6% dispatched from TranStar; clear minor incidents alone; assist with traffic control at major incidents; |
New York DOT | Combination of severity, anticipated duration, and time-of-day (e.g., peak or off-peak) | Level 1—no lane blocked - on shoulder; Level 2—1 lane blocked 0-15 min (peak) 0-30 min (off-peak); Level 3—1 lane blocked 15-30 mins (peak) or 30-60 mins (off-peak); Level 4—1 or more blocked 30-60min (peak) 60-120(off-peak); Level 5—road closure, 1+blocked 60 min(peak) 60-120(off-peak) | The more severe the more they "throw" at it. They have communications with metro traffic and local media (if after metro traffic hours). Co-located in TMC with state police—get estimate from trooper for duration. Level 1–2: may or may not do anything. Higher levels—At first advise metro traffic/media of problem—if worse, recommend taking alternate route (but don't specify)—if really bad, recommend specific alternate route—more severe, use stronger DMS messages—use DMS to notify to tune to HAR—have 1 permanent HAR and 2 portable (1 portable being converted to permanent). |
Colorado DOT—Lakewood | Mile High Courtesy Patrol handles minor incidents. The TMC only responds to major incidents—duration is the criterion used | 3-tier system for major incidents—total freeway closure or most lanes blocked: Level 1—duration less than 30 minutes; Level 2—duration 30 minutes to 2 hours; Level 3—duration over 2 hours | Main response is public information. They have a broadcast fax system with 300 agencies/companies signed up, including media, other public agencies, trucking firms, US military, US Postal Service, visitor centers, etc. Also post information on their website |
Texas DOT—Houston | Will follow that provided by law enforcement (Fatality/Injury = major, PDO = minor), as well as determining severity based upon lanes blocked and duration | Major: one lane > 30 min (TOD dependent); two or more lanes > 15 min (TOD dependent); truck accidents, HazMat spills, bus accidents, multi-vehicle accidents. Minor: other incidents | Different types of incidents require different levels of response. For example, HFS is not contacted for a minor incident; however, HPD may be required, and they are contacted the same as if it were a major incident. They are given all details known and it is left to them to determine their condition of response. |
Illinois DOT—Chicago | Severity—routine or incident; Lane blockage | 1 or more lane closed for 30 minutes or more; total freeway closure for 15 minutes or more; Hazmat | More documentation for incidents than "routines", more public awareness for more major incidents—media alerts, notify DOT personnel, DMS |
Agency | Roadway Name | Location/Cross-Street Name | Block Number | Detection Station # | Lat/Long | Location of Lanes Blocked | Incident Type | Incident Source | Current status of Incident | Time incident was detected (reported) | Time incident was verified | Source of incident verification | Time response vehicle arrived on scene | Type of response vehicles on scene | Time response vehicles left scene | Time incident was cleared from scene | Time traffic returned to normal flow | Roadway Surface Condition | Roadway Condition | Light Condition | Weather condition | Injuries present | # of vehicles involved | Type of Vehicles involved | Incident Severity (qualitative) | Other |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
Kansas DOT, Kansas City | (1) | Property damage; diagram; names; vehicle makes; model, color, plate numbers | ||||||||||||||||||||||||
New Jersey DOT | ||||||||||||||||||||||||||
Arizona DOT | (1) | (2) | Route, direction, milepost, type of incident (accident with or without injuries/death); who was called out. | |||||||||||||||||||||||
Ohio DOT—Columbus | Miler maker system location | |||||||||||||||||||||||||
Tennessee DOT | Type of service; vehicle tag #; direction | |||||||||||||||||||||||||
Phoenix, AZ Fire Dept | (3) | Detailed info on injuries, seatbelts, child restraints; Trucks have live terminals and digital cameras to collect info | ||||||||||||||||||||||||
Maryland State Hwy Admin—CHART | ||||||||||||||||||||||||||
Texas DOT —Austin | System software records time that changes to any fields are made, including update to comments. | |||||||||||||||||||||||||
Texas DOT—San Antonio | (4) | (5) | System software records time reported, time entered in system, time system executed scenario, time scenario changed, time scenario over (when lane back open to traffic) | |||||||||||||||||||||||
Minnesota DOT—Minneapolis | (6) | |||||||||||||||||||||||||
Caltrans San Diego | # of lanes blocked | |||||||||||||||||||||||||
Southeast Michigan COG—Detroit | See attachment | |||||||||||||||||||||||||
Houston, TX—Motorist Assistance Patrol | Vehicle—make, model, color, year, license plate; Driver—male, female; number of occupants—driver only, 2, 3, 4+; motorist use of cell phone—# called, air time, motorist name & signature | |||||||||||||||||||||||||
New York DOT | (7) | (8) | Other highways affected (if any); which ITS devices activated—DMS, HAR | |||||||||||||||||||||||
Colorado DOT—Lakewood | Information collected for service patrol response to minor incidents only. There is currently no logging of major incident data (level 1, 2, 3 incidents) that the TMC responds to. | |||||||||||||||||||||||||
Texas DOT—Houston | (2) | Incident date; direction of travel; Before/After cross street | ||||||||||||||||||||||||
Illinois DOT—Chicago | ||||||||||||||||||||||||||
City of Houston, TX Police Dept | HPD staffs a single console at TranStar. While more specific information is collected by the officer in the field, HPD at TranStar only logs some general information—only for incidents that occur on the freeway system | |||||||||||||||||||||||||
North Carolina DOT | Information only for motorist assistance patrols | |||||||||||||||||||||||||
Connecticut DOT | ||||||||||||||||||||||||||
1. First on scene
2. Removed from roadway altogether
3. Individual dispatched, on scene, and benchmark points
4. Opening of lanes
5. Also record under maintenance/construction
6. Record weather at start of each shift as operator logs in
7. Time stamp when entered into MIST
8. No fields in software for this, but try to indicate these in open comment field
Interestingly, only eleven agencies reported that they record the time that an incident was verified. However, further discussion with the respondents revealed that, in many cases, the time the incident was detected (or reported) and the time the incident was verified are the same.
Thirteen agencies reported that they record the time the first incident responders arrived on the scene. Similarly, slightly more than half of the respondents indicated that they routinely record the time the incident response vehicles leave the scene and/or the time the incident was cleared from the roadway. For the most part, agencies are primarily concerned with keeping track of the time that they implement or execute their response and are not overly concerned with recording the time that other responders perform certain functions.
Only one agency reported that they record the time that the freeway returned to normal flow. A few common reasons cited for not recording this measure include the following:
- It is too hard to determine when "normal" flow occurs;
- The congestion resulting from an incident lasts so long that operators tend to forget to go back and log when normal traffic flow resumes; and
- This time is not important to determining the effectiveness of the response.
Some respondents indicated that their software system automatically records the time (i.e., time stamps) every time the operator makes a change to the traffic control. For example, when the operator first initiates a message on a DMS, the time is logged by the system. If the operator changes the message, the time the new message is implemented by the system is logged. The advantage of this approach is that it takes the burden off the operator to log when certain changes are made.
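This automatic time-stamping approach can be sketched as follows. The class and method names are hypothetical and not drawn from any agency's actual software; the point is only that the system clock, not the operator, supplies the timestamp.

```python
from datetime import datetime

class DeviceChangeLog:
    """Sketch of automatic time-stamping: every traffic-control change
    (e.g., a new DMS message) is logged with the system clock, so the
    operator never has to record the time manually."""

    def __init__(self):
        self.entries = []  # list of (timestamp, device_id, new_state)

    def set_message(self, device_id, message):
        # Capture the clock at the moment the change is made.
        self.entries.append((datetime.now(), device_id, message))
```

Replaying the entries for one incident then yields a complete timeline of operator actions with no extra logging effort.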
Collection and Retention of Incident Data
Table 4 summarizes how the respondents replied to questions concerning the collection and storage of incident data. An approximately equal number of agencies use manual (seven of the respondents) and automatic (eight of the respondents) means of collecting incident data. Four agencies reported that they use a combination of manual forms and automated systems for collecting information about incidents. In a few cases where agencies used manual data collection means, the forms were later transferred into automated systems for further processing and storage.
Most agencies reported that their incident information either initially or eventually ended up in a database that could be queried. The survey also showed that information about specific incidents was generally kept for a long time, with most agencies retaining their incident logs for three or more years.
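The value of such a queriable archive is that retrospective questions can be answered with simple queries. The sketch below uses an in-memory SQLite table with hypothetical column names to illustrate the idea; the agency databases mentioned in the survey (Oracle, Sybase, Access, etc.) differ in detail but support the same kind of query.

```python
import sqlite3

# Hypothetical incident-log schema, for illustration only.
conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE incidents (
    roadway TEXT, incident_type TEXT, detected TEXT, cleared TEXT)""")
conn.executemany(
    "INSERT INTO incidents VALUES (?, ?, ?, ?)",
    [("I-35", "stall",     "2002-03-01 07:10", "2002-03-01 07:25"),
     ("I-35", "collision", "2002-03-01 08:02", "2002-03-01 09:15"),
     ("I-70", "stall",     "2002-03-02 16:40", "2002-03-02 16:55")])

# Example retrospective query: incident counts by type.
counts = dict(conn.execute(
    "SELECT incident_type, COUNT(*) FROM incidents GROUP BY incident_type"))
```

Multi-year retention is what makes trend questions of this sort (counts by type, by roadway, by year) answerable at all.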
Agencies were also asked if they integrated their incident reports with any of the other incident responders. The general response was "no"; however, some agencies did state they have plans to begin integrating their freeway management center systems with a 911 dispatching center so that data from other agencies could be merged with incident records. This is expected to increase both the quality and quantity of data about incidents at these locations.
Agency | How is this information collected? | What format is used to store information? | How long is information retained? | Is data integrated with other information? |
---|---|---|---|---|
Kansas DOT—Kansas City | Manual (1) | Receive paper file from state police, enter into a queriable Oracle database. No CCTV yet, highway patrol video for fatality. | 5 years to Forever | Highway patrol input accident data into accident report database. DOT automatically receives copy of any incident on DOT facility |
New Jersey DOT | Automatic | Queriable database | 8 years | No |
Arizona DOT | Automatic | Queriable database | 3 years | When the police work an incident, we are supposed to get their log number. These are not always made available to us. We usually enter these into the Road Condition report and enter the HCRS# into the documentation. |
Ohio DOT—Columbus | Manual (2) / Automatic | Service patrol fills out paper form, later entered into queriable database—Paradox. DMS message logged manually to compare accuracy of DMS electronic file log (new) | Not sure on the electronic files, permanent for database | No |
Tennessee DOT | Manual | Paper, entered into database | Since start in database (June '99). Paper not kept long term after entered into database | Some—major incidents w/ multiple agencies—debrief w/ police, fire, timeframe |
Phoenix, AZ Fire Dept | Both: All vehicles have geo ID, monitored by clock; this tracks times of arrival, repositioning, and leaving. Manual—pictures; EMS data—handheld computer, downloaded later | Paper, electronic | Paper—3 yrs | Yes—police dispatch, census |
Maryland State Hwy Admin—CHART | Automatic | Oracle database | Started Feb 2000 keeping everything; before—5yrs on-site then paper to warehouse | In future plans: 911 centers: ability for other agencies (police, county) to access software & edit incident reports eventually |
Texas DOT—Austin | Automatic | Sybase | No deletion policy has yet been developed. Quarterly off-load and access through Excel | Not yet—only one incident done so far, but not very detailed. Done to answer questions about response. Ad hoc requests—maintenance information about equipment failures |
Texas DOT—San Antonio | Automatic | Electronic files | Minimum of two years | System tied directly to 911 map—don't use one system to verify the other |
Minnesota DOT—Minneapolis | Automatic | Queriable database—Access (since 2001); prior to '01—paper logs | Since early '90s | Recently had an FHWA intern perform a large analysis comparing police logs to system logs. Do not routinely perform comparisons; done on an as-needed basis and when staff are available. Do produce an annual volume/crash frequency report |
Caltrans—San Diego | Manual | Paper files and electronic files | Less than 14 mo | When needed. |
Southeast Michigan COG—Detroit | Manual & Automatic | Data stored in both paper and electronic formats. SEMCOG requests copies of the database and we query it using MS Access | SEMCOG has only just started to gather this data (over the past 5 years). Have kept all of it so far | Try to cross-reference the MSP 911 data with the Freeway Courtesy Patrol data (checking to see how long abandoned vehicles have been out on the roadway after they have been identified). Also integrate the MSP crash data (UD10 forms/database) with the incident database. Also integrate the incident information with a road attribute file, which includes fields like lane, 85th-percentile speed, posted speed, land use, vehicle classification counts, traffic volume counts, etc. |
Houston, TX Motorist Assistance Patrol | Manual & Automatic | Paper file, electronic files, queriable database—Access | Data generated by MAP is compiled by TTI and returned to TxDOT for storing. Don't know how long they keep it | Yes. TTI compiles information and breaks numbers down to percentages. |
New York DOT | Typed into MIST | Queriable database—Sybase | Current six months active in system (last week of 6 months falls off each week); burn 6 mo. Data every week to CD for backup | Service patrol logs to different system, but if working an incident DOT is entering into MIST, then cross-reference to service patrol record entered. |
Colorado DOT—Lakewood | Automatic—Service patrol calls dispatch, dispatcher enters all info into database. | Oracle queriable database | Indefinitely | No |
TxDOT—Houston | Automatic | Flat files—queriable database | Indefinitely | Not electronically. MAP files collected in same manner but different database |
Illinois DOT—Chicago | Manual | Paper file—shared with DOT traffic, maintenance, and claims departments | 7 years | Cross-reference state police records; ETP service patrol uses fill-in-the-dot data cards and will soon be upgrading; the data are not routinely compared, but the capability is there |
City of Houston, TX Police Dept | Manual (3) Other (4) | Paper. The Access database is used to enter incidents during each shift (two shifts per day). At the end of the shift, the daily activity log is printed. The database only retains the totals for the shift (data on individual incidents not saved in the database—only on the printouts). The database is then used to prepare the monthly reports | Printouts of the daily activity logs are kept for 3 years. | |
North Carolina DOT | Manual (5) | Queriable database | Indefinitely (have been collecting for ~6yrs) | No |
Connecticut DOT | | Paper and electronic | Incident reports are retained for 5 years | No |
1. Accident Forms
2. Freeway service patrol incident log form
3. Accident reporting form filled out by officer in field, but does not go to TranStar
4. Incident data at TranStar is manually entered into an Access database
5. IMAP program—called to TMC, entered into database on local PC, moving to webpage to consolidate information
Incident Management Performance Measures
Table 5 shows the general types of performance measures that are routinely computed by the agencies responding to the survey. Only half of the responding agencies indicated that they routinely compute incident-related performance measures. Not surprisingly, most of the agencies that do compute performance measures reported computing the following:
- Incident frequency,
- Detection time,
- Response time, and
- Clearance time.
Operational Definition of Incident Management Performance Measures
Table 6 shows the operational definitions that each agency is using to compute these performance measures. Interestingly, most agencies define "detection time" as the time that they were notified of the incident (i.e., the time that the incident was reported to their control center). Detection time is not defined as the time between when an incident actually occurred and when the agency was notified of it (whether by emergency responders, operator observation, or a direct report from a citizen).
Nearly all of the respondents indicated that they define "response time" as the elapsed time between when the agency was first notified about an incident and when the first responder appeared on the scene. The primary difference is in whose arrival is counted: emergency responders typically define response time as the time from when an incident was reported to their dispatcher to when their own response vehicles arrive on the scene. Transportation agencies generally measure response time from when the call comes into the TMC (or service patrol dispatcher) to when the first response vehicle arrives on the scene, regardless of which agency the vehicle belongs to (i.e., it could be a fire vehicle, police vehicle, or service patrol vehicle). The problem with defining response time this way is that the transportation agency often has no control over when the emergency service providers are dispatched or the priorities assigned to different types of incidents. In many cases, the response time reported by transportation agencies is actually the time between two unrelated events (i.e., notification of the incident versus the dispatching and arrival of the response vehicles). This is especially true when the traffic management center (TMC) is not the first agency notified of the incident, which is generally the case in most metropolitan areas. Without integrating or comparing records from the dispatching agency, the reported response time may not represent the true response time of the first responder, but merely the interval between unrelated events.
Clearance time is another measure that varies dramatically between freeway management operators and emergency service providers. For the most part, transportation agencies define clearance time as the time between when the first responder arrives on the scene (regardless of which agency they work for) and when the incident is cleared from the roadway. Emergency service providers typically define clearance time as the time between when the first of their units arrives on the scene and when their unit leaves the scene and can be deployed elsewhere.
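The transportation-agency definitions above reduce to simple timestamp differences. The sketch below illustrates this; the incident-record field names and example times are hypothetical, not taken from any agency's schema:

```python
from datetime import datetime

def minutes_between(start: str, end: str) -> float:
    """Elapsed minutes between two 'HH:MM' timestamps on the same day."""
    fmt = "%H:%M"
    return (datetime.strptime(end, fmt) - datetime.strptime(start, fmt)).total_seconds() / 60.0

# Hypothetical incident record; field names are illustrative only.
incident = {
    "notified": "07:42",       # call received at the TMC ("detection" as most agencies define it)
    "first_arrival": "07:55",  # first response vehicle on scene, regardless of agency
    "lanes_open": "08:31",     # roadway cleared / all lanes reopened
}

# Transportation-agency definitions as described in the survey responses:
response_time = minutes_between(incident["notified"], incident["first_arrival"])
clearance_time = minutes_between(incident["first_arrival"], incident["lanes_open"])

print(f"Response time: {response_time:.0f} min")    # notification -> first arrival
print(f"Clearance time: {clearance_time:.0f} min")  # first arrival -> lanes open
```

Note that an emergency service provider computing the same two measures from its own dispatch log would substitute its dispatcher's notification time and its own unit's departure time, which is precisely why the numbers differ across agencies.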
Agency | Do you calculate Performance Measures? | Incident Frequency | Incident Rate | Detection Time | Response Time | Clearance Time | Number of Secondary Incidents | Time to Normal Flow | Incident Delay | Others |
---|---|---|---|---|---|---|---|---|---|---|
Kansas DOT | Yes | (1) | ||||||||
Kansas DOT—Kansas City | No (2) | |||||||||
New Jersey DOT | Yes | |||||||||
Arizona DOT | Yes (3) | (4) | ||||||||
Ohio DOT—Columbus | No (5) | |||||||||
Tennessee DOT | No (6) | |||||||||
City of Phoenix Fire Dept | Yes | (7) | Severity; Nature of Damage; Injuries | |||||||
Maryland State Hwy Admin—CHART | Yes (8) | Delay hours; environmental impacts; frequency by location; # of disabled vehicles assisted | ||||||||
Texas DOT—Austin | No (9) | Error logs—preventative maintenance | ||||||||
Texas DOT—San Antonio | No (10) | |||||||||
Minnesota DOT—Minneapolis | Yes | (11) | ||||||||
Caltrans—San Diego | No (12) | |||||||||
Southeast Michigan COG—Detroit | Yes | Air quality—pollutants (e.g., amounts of VOC, NOx, and CO) | ||||||||
Houston, TX—Motorist Assistance Patrol | Yes (13) | (14) | Types of assists provided (used to stock supplies); location of incidents (by corridor, by segment) | |||||||
New York DOT | No | |||||||||
Colorado DOT—Lakewood | No | |||||||||
Texas DOT—Houston | No (15) | |||||||||
Illinois DOT—Chicago | Yes | Other performance measures such as response time, clearance times, and detection time have been calculated before but not routinely done. Only done periodically for program justification. | ||||||||
City of Houston, TX Police Dept | No (16) | |||||||||
North Carolina DOT | No (17) | |||||||||
Connecticut DOT | Yes | |||||||||
1. Use incident frequency to identify high accident locations for improvements
2. Hope more will be done once TMC is operational
3. These can be gotten by database query. We do not use this data, but the districts use them to rate district-wide response times
4. Believe this is important, but they do not track it as a general rule
5. Do not have the funding for personnel to design, implement, and update performance measures
6. Under evaluation; early stages through contract with University (Vanderbilt)
7. Police do and offer to Fire, don't use
8. University of Maryland prepares yearly report (1997 on web)
9. Too time consuming
10. City-wide incident management project—visually seen 40% reduction in clearance times
11. By type of responder
12. Not an issue before now—can recreate times based on logs
13. Most incidents also depend on arrival of other agencies (i.e., ambulances, other police agencies, and other emergency equipment needed)
14. Data collected but not currently used
15. This is an operations staff, not a research staff. There is not the time or personnel available for this function. High accident locations are identified from the information and consideration given to these areas on a routine basis. TTI puts together an Annual Report for TranStar
16. That information has not been required
17. Problem is what performance measures to look at. In process of identifying for future
Performance Measure | Agency | Operational Definition |
---|---|---|
Incident Frequency | City of Phoenix Fire Dept. | Time based, incident/shift, also calculate week, month, year and compare to last year |
Maryland State Hwy Admin—CHART | How often occurs at a given location (mile post) | |
Connecticut DOT | Any time there is a blockage of highway, an incident is established | |
Incident Rate | City of Phoenix Fire Dept. | # of incidents per month or year; look at each different category and calculate; use to shift response |
Maryland State Hwy Admin—CHART | ADT x # of incidents | |
Detection Time | New Jersey DOT | When DOT finds out about the incident |
Arizona DOT | Delay from the time that an incident occurs until it is reported | |
City of Phoenix Fire Dept. | 1st report to dispatch; if official (Police, city); ask them when they detected. Keep track of who reported incident (official or civilian) | |
Maryland State Hwy Admin—CHART | 1st person sees to calling it in | |
Texas DOT—San Antonio | System parameter (2 minutes)—use 20 sec interval data with rolling average (6 cycles). System usually 1 or so minutes after call | |
Caltrans—San Diego | "Reported Time"—time when report comes into center | 
Houston Motorist Assistance Patrol | Time of notification, also driver estimate of time of occurrence | |
Connecticut DOT | The time the incident is reported to the TOC via surveillance equipment or verified phone calls | |
Response Time | New Jersey DOT | Time for DOT to get there |
Arizona DOT | Starts with live voice reports receiving page and then they are responding. Ends when unit reports they are on-scene. | |
City of Phoenix Fire Dept. | Time elapse between 1st dispatch contact to 1st vehicle on-scene | |
Maryland State Hwy Admin—CHART | Time call received until arrive on scene | |
Texas DOT—San Antonio | System logs time every time a change or update is made to response scenario | |
Minnesota DOT—Minneapolis | Time detected to time responders arrived on scene; camera-based; not perfect—only captured when operator observes responders arrive on scene | 
Caltrans—San Diego | Time when 1st responder arrives on-scene | 
Houston Motorist Assistance Patrol | Dispatch time and time of arrival | |
Connecticut DOT | The time responders arrive on scene. Arrival time and response time are calculated for state police only out of the Bridgeport operations center coverage area. ConnDOT only contacts ITS internal responders such as bridge safety, construction, maintenance, electrical, and service patrol when required; the contact time and arrival time are then kept. Arrival time only is noted for emergency responders such as EMS, wrecker, fire, and environmental protection. DOT does not normally contact these responders initially | 
Clearance Time | New Jersey DOT | Time between detection and incident cleared from scene |
Arizona DOT | When unit reports they are clear or when operator sees all units clear. This is for when the ADOT vehicle leaves the scene. | |
City of Phoenix Fire Dept. | Time fire department declares incident over, usually as driving away from scene | |
Maryland State Hwy Admin—CHART | How long from notification to clear, or until delays clear / all lanes open is what they use | |
Texas DOT—San Antonio | Time 1st vehicle arrives on scene until lanes open | |
Caltrans—San Diego | Time when roadway opened | 
Houston Motorist Assistance Patrol | Time incident ends and clearing of incident from roadway | |
Connecticut DOT | The time the accident or debris is removed from the travel way | |
Number of Secondary Incidents | Arizona DOT | Accidents that occur back in queue |
City of Phoenix Fire Dept. | Count of accidents, injury, fire, hazmat each count as one not a different incident #; 1 incident with multiple parts | |
Maryland State Hwy Admin—CHART | Pinpoint incident is created by delay from previous incident, call by operator | |
Caltrans—San Diego | Don't know how to compute | 
Houston Motorist Assistance Patrol | Time of notification | |
Time to Normal Flow | City of Phoenix Fire Dept. | Set by incident commander. Wait at scene until flow returns to normal for time. Subjective. |
Maryland State Hwy Admin—CHART | Back to operating capacity for time-of-day | |
Houston Motorist Assistance Patrol | When incident clears and blockage has been removed from freeway | |
Incident Delay | Maryland State Hwy Admin—CHART | Length of distance (5 mile delay) Max delay (example: 10 mile backup) |
Houston Motorist Assistance Patrol | Time of duration |
Most agencies agreed that the number of secondary accidents resulting from an incident is a difficult measure to compute. In most cases, it was considered a subjective judgment of the operator. One agency, however, defined a secondary accident as any accident that occurred within a defined radius and time frame of the first incident. Both the distance and time parameters change by time of day to reflect the different levels of congestion that form around incidents.
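That radius-and-time-window definition can be expressed as a simple rule. The parameter values, field layout, and peak-period boundaries below are illustrative assumptions only, not the agency's actual thresholds:

```python
# Hypothetical parameters: the search window widens during the AM peak to
# reflect queues extending farther upstream. Values are illustrative only.
PEAK_HOURS = range(6, 10)                        # 6:00-9:59 treated as AM peak
PEAK_RADIUS_MI, PEAK_WINDOW_MIN = 3.0, 60
OFFPEAK_RADIUS_MI, OFFPEAK_WINDOW_MIN = 1.0, 30

def is_secondary(first, candidate) -> bool:
    """Flag `candidate` as secondary if it occurred within the distance/time
    window of `first`. Each incident is (hour_of_day, minutes_since_midnight, milepost)."""
    hour, t_first, mp_first = first
    _, t_cand, mp_cand = candidate
    if hour in PEAK_HOURS:
        radius, window = PEAK_RADIUS_MI, PEAK_WINDOW_MIN
    else:
        radius, window = OFFPEAK_RADIUS_MI, OFFPEAK_WINDOW_MIN
    return (0 <= t_cand - t_first <= window) and abs(mp_cand - mp_first) <= radius

# A crash 2 miles away, 25 minutes after a 7 AM incident: within the peak window.
print(is_secondary((7, 7 * 60, 100.0), (7, 7 * 60 + 25, 102.0)))  # True
```

The same pair of incidents at midday would not be flagged, because the off-peak radius and window are tighter.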
Maryland defines incident delays in terms of queue distance. They generally use measures such as the length of congestion (e.g., a five-mile delay or a 10-mile backup) to help define incident delays. Queue distance is a parameter that can be observed almost instantaneously via the surveillance cameras, while delay requires measuring the time it takes drivers to pass through the congestion.
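The distinction matters because converting an observed queue length into a delay estimate requires speed assumptions the camera does not provide. A minimal deterministic sketch, using hypothetical queue and free-flow speeds:

```python
def delay_per_vehicle_min(queue_mi: float, queue_mph: float, free_flow_mph: float) -> float:
    """Extra minutes a vehicle spends traversing the queued segment,
    relative to covering the same distance at free-flow speed."""
    return 60.0 * queue_mi * (1.0 / queue_mph - 1.0 / free_flow_mph)

# A 5-mile backup crawling at an assumed 10 mph on a 60-mph freeway:
print(round(delay_per_vehicle_min(5.0, 10.0, 60.0), 1))  # 25.0 extra minutes per vehicle
```

The queue length (5 miles) is directly observable on camera; the 10 mph queue speed is the unobserved quantity that makes delay the harder measure to report.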
Origins of Performance Measures
In Table 7, respondents were asked about the origin of the operational definitions being used to generate the performance measures (i.e., the driving force behind the measures they are currently using). Several of the respondents indicated that the performance measures they are currently generating were developed by FHWA and are being used by FHWA and their local administration to monitor performance over time.
Several other respondents indicated that the measures they are currently using have evolved over time. As the objectives of the control center changed, or as new tasks and capabilities were added, new performance measures were added or old ones were modified to reflect the new objectives of the system.
Interestingly, both of the emergency service providers that replied to the survey indicated that they have been collecting performance measures that are standard for their industry. It appears that these performance measures are used as a resource management tool for evaluating staffing and asset allocations.
In an attempt to gain insight into other potential performance measures, each respondent was asked whether there were performance measures not currently generated by their system that would be desirable or helpful for analyzing the effectiveness of incident response in their area. Table 8 summarizes the responses. For the most part, the agencies' responses fit into two categories: one group of agencies wants to generate more of the traditional performance measures (such as incident frequencies, incident rates, detection time, and response time), while the other group wants to collect performance measures that relate to administrative and institutional issues (such as operator workload, camera utilization by other entities, and web page hits). Most agencies, however, agree that better-quality data need to be entered into their systems to make the performance measures more meaningful.
Agency | How were these operational definitions derived? By whom? What was the process for deriving them? Were other agencies involved? If so who were they and how? |
---|---|
New Jersey DOT | Derived over time, FHWA and management of traffic operations at DOT have asked for it |
Arizona DOT | The software developers were in-house. They actually asked the operators what they wanted. We found out what management wanted, and told the developers how we wanted to amass the data. We kept the screens simple and eliminated the garbage as we found we didn't use or management didn't need what the screen or a button was offering. We also deleted things that would not work (Emergency notification systems). Driven by available funds. |
City of Phoenix Fire Dept. | Labor management committee that deals with performance measures (3 union officers; 3 fire dept. managers; shift commanders, exec. office). 1960's. Devised definitions for measures and guides, reviewed annually |
Maryland State Hwy Admin—CHART | Work w/ FHWA over years, standard definitions |
Texas DOT—Austin | Developed by Traffic Operation Divisions at Headquarters |
Minnesota DOT—Minneapolis | Look at data recorded to see what information can be tracked over time. Looking for trends that can be addressed (e.g. Highway Helpers) |
Southeast Michigan COG—Detroit | By SEMCOG and the Metro Detroit Incident Coordinating Committee |
Houston—Motorist Assistance Patrols | We are a police agency. We follow normal police data gathering according to our Department SOP |
Connecticut DOT | General knowledge from other agencies thru 1-95 Corridor Coalition |
Agency | Are there other performance measures that you are not collecting but think would be beneficial? |
---|---|
New Jersey DOT | Incident frequency, rate, secondary accidents, and incident delay |
Tennessee DOT | Interfacing w/ police records ==> high incident rates, commuter times/speeds |
Maryland State Hwy Admin.—CHART | Balance of operator workload; tow response to scene |
Texas DOT—Austin | Institutional issues ==> camera control (other agencies causing problems); web page hits (how many people looking at cameras) |
Texas DOT—San Antonio | Travel times; partial restoration of capacity (i.e., when lanes were opened) |
Minnesota DOT—Minneapolis | Better quality of information |
Southeast Michigan COG—Detroit | Haven't really given it much thought only because we are focused on making the data better (more accurate). For example, a call may be taken and dispatched but the officer can't locate any incident so instead of clearing the call the record is left with no clear time or any explanation as to why the data is missing. |
Houston—Motorist Assistance Patrols | No |
New York DOT | Would like to collect response time, clearance time, resumption of normal flow, and times individual lanes were open/closed. Got an estimate of $100K to upgrade MIST for these add-ons—not being pursued right now. |
Illinois DOT—Chicago | Detection time—improving *999 and CCTV; Response time—collecting data to calculate response time but not aware of it being used. |
City of Houston, TX Police Department | Clearance time |
Costs of Generating Performance Measures
One objective of this task order was to capture information about the costs associated with collecting, processing, and reporting performance measures for incident management systems around the United States. Almost all of the responding agencies indicated that it was impossible to separate the costs of producing performance measure reports from their typical operating costs. For the most part, agencies consider the cost of collecting data for performance measures and performance measure reports to be part of their normal operations, and the costs associated with producing special performance reports (such as those requested on demand) are absorbed into their normal operating budgets. Table 9 summarizes a few of the responses received when individuals were questioned about costs.
Agency | What would your estimate of the cost be for collecting, processing, and reporting your performance measures? |
---|---|
Arizona DOT | The cost to set up the decision, notification, data collection system that is used for this was part of the AzTech funding. |
Maryland State Hwy. Admin—CHART | Contract with University for performance measures |
Caltrans—San Diego | Not a way to separate costs for this specific function |
Incident Management Performance Reports
The respondents were also surveyed as to the type, frequency, and use of the reports they produce documenting the performance of their incident management systems. These responses can be found in Tables 10 through 13.
Only eight of the responding agencies indicated that they routinely produce reports to monitor the performance of their incident management systems over time. Most of these agencies report their performance measures on a system-wide basis. Five of the agencies also indicated that they routinely produce performance reports by roadway segment and by facility. Many of the agencies reported that their software/data management systems are flexible enough to generate performance measure reports at any level.
Table 11 shows the frequency at which the responding agencies produce performance reports while Table 12 summarizes the uses of the performance reports. The frequency at which agencies produce performance reports varies greatly and seems to be a function of their use. Almost all of the transportation agencies that responded indicated that they produce performance reports on a monthly or quarterly basis. Monthly reports are generally used by the operations staff to track use of resources and include such information as the number and type of incidents, the type of responses (or assistance), the devices and/or resources used to manage the incident, the schedules of staff, and the high incident locations. Mid-level administrative staff generally use quarterly reports to assist in the coordination of incident responses across institutional and/or jurisdictional boundaries.
Both of the fire and police agencies that responded to the survey indicated that they generally produce daily reports of the "incidents" (not just those related to traffic operations) that they work. Watch commanders generally use these reports to assess the workload and readiness of the various units to respond to other types of incidents.
Agency | By Facility | By Segment | System Wide | Other |
---|---|---|---|---|
Kansas DOT—Kansas City | Accident frequency can be on any of these levels | |||
New Jersey DOT | ||||
Arizona DOT | (1) | |||
Ohio DOT—Columbus | ||||
Tennessee DOT | ||||
City of Phoenix, AZ Fire Department | ||||
Maryland State Hwy. Admin—CHART | Upon request | |||
Texas DOT—Austin | Monthly reports on LCU failures; communications errors | |||
Texas DOT—San Antonio | Every time something is changed, the system documents the time; therefore, have complete "history" of response |
Minnesota DOT—Minneapolis | By responder on monthly basis; also produce annual crash/volume report, by location | |||
Caltrans—San Diego | By incident | |||
Southeast Michigan COG—Detroit | ||||
Houston, TX—Motorist Assistance Patrols | | | | 
New York DOT | ||||
Colorado DOT—Lakewood | ||||
Texas DOT—Houston | ||||
Illinois DOT—Chicago | ||||
City of Houston, TX Police Department | ||||
North Carolina DOT | ||||
Connecticut DOT | ||||
1. Think they are generated systemwide, but know they are grouped by Districts and ORGS (small operating units). Districts then examine the reports specific to their area.
Agency | How often are they produced? |
---|---|
New Jersey DOT | Monthly |
Arizona DOT | Quarterly |
City of Phoenix, AZ—Fire Dept. | Daily (Captain gets his last shift & last shift before he arrived) |
Maryland State Hwy. Admin—CHART | Monthly—number of incidents by reg; assists; use of devices (monthly meetings); Annually—big picture by University, legislature, other agencies |
Texas DOT—Austin | Quarterly |
Texas DOT—San Antonio | As Needed basis—have done 2 system wide evaluations; also use on-line survey on homepage to gauge motorist responses (subjective) |
Minnesota DOT—Minneapolis | Monthly and yearly—incidents by type and response; special days (e.g., snow days) |
Caltrans—San Diego | As needed basis—some annual (accidents); monthly—for meeting purposes |
Southeast Michigan COG—Detroit | Monthly (for operators); quarterly (coordinating committee); and annually (program evaluation) |
Houston, TX—Motorist Assistance Patrol | Quarterly |
Colorado DOT—Lakewood | Monthly |
Illinois DOT—Chicago | Annually |
City of Houston, TX Police Dept | Daily; monthly |
Connecticut DOT | As needed basis; monthly |
All of the agencies indicated that they also produce annual reports for their systems. These annual reports generally provide an overall summary of the performance of the system and give a "big picture" view of the effectiveness of the system. High-level administrators typically use these annual reports to provide justification for continued operation or expansion of their incident management programs. These reports are also used to identify high incident or "hot spot" locations.
Several agencies indicated that they would occasionally produce performance measure reports on individual or specific incidents. These reports are generally produced on an "as needed" basis and are used to critique the performance of the response agencies and to address problems with the responses to specific incidents. Generally, transportation agencies use these reports as a mechanism for improving coordination between response agencies.
Agency | How are these measures generally used in your system? |
---|---|
New Jersey DOT | Feds look at it, not really used by DOT though |
City of Phoenix, AZ—Fire Dept. | 1) Response planning; 2) Budget planning; 3) Quality Assurance (10% detailed check); 4) Internal Assessment—by command officers, mostly fire side |
Maryland State Hwy Admin—CHART | To get funding (big picture report); identify "hot spots" |
Texas DOT—Austin | Access queries through Sybase |
Minnesota DOT—Minneapolis | Generally tracking trends; in past month or two started generating reports to track operators; use w/ media for political support |
Caltrans—San Diego | Automatically by the system software |
Southeast Michigan COG—Detroit | They are provided to the Incident Management Coordinating Committee, MDOT, and the FCP operators. They are also provided to the MSP, as requested, for selective enforcement. MDOT uses the information for determining the benefit of the FCP program and to obtain additional funding for expansion. |
Colorado DOT—Lakewood | Statistics, program justification |
Illinois DOT—Chicago | Incident frequency/rate used in justification of service patrol, used to determine locations for safety improvements |
City of Houston, TX Police Dept. | Not sure how they are used |
Connecticut DOT | Can be used to evaluate staffing schedules, determine high accident locations, and evaluate effective response time and performance. |
Arizona DOT | We use them to prove we are achieving our goals |
Texas DOT—San Antonio | Justify giving less money to ITS |
Houston, TX—Motorist Assistance Patrol | To determine success of program and deputy performance ratings. |
Respondents were also asked to indicate whether they thought these performance reports were timely, useful, and accurate. Table 13 summarizes these responses. While most of the respondents generally felt the reports were timely and provided decision-makers with the appropriate level of information, a few questioned the usefulness (particularly from the operators' viewpoint) and the accuracy of the information. Several respondents indicated that they did not know exactly how the higher-level administrators in their agencies actually used the information.
Agency | Timely? | Useful? | Accurate? | Provide the information necessary for effective decision-making? |
---|---|---|---|---|
New Jersey DOT | Yes | Yes (1) | Yes | No (2) |
Arizona DOT | No (3) | Yes | No (4) | Yes |
City of Phoenix, AZ—Fire Dept. | Yes | Yes (5) | Yes | Yes |
Maryland State Hwy Admin—CHART | Yes | Yes | Yes | Yes |
Texas DOT—Austin | No | Yes | Yes | Yes |
Minnesota DOT—Minneapolis | Yes | Yes (6) | No (7) | Yes |
Caltrans—San Diego | Yes | Yes | Yes | |
Southeast Michigan COG—Detroit | Yes | Yes | Yes | Yes |
Houston, TX. Motorist Assistance Patrol | Yes | Yes | Yes | Yes |
Colorado DOT—Lakewood | Yes | Yes | Yes | Yes |
Illinois DOT—Chicago | Yes | Yes | Yes | Yes |
City of Houston, TX Police Dept. | Yes | Not Sure | Yes | Not Sure |
Connecticut DOT | Yes | Yes | Yes | Yes |
1. Somewhat—not enough "meat" to be really useful; just break down number of incidents over and under one hour, by type, monthly average incident duration, etc.
2. Don't know enough to capture enough
3. Quarterly reports are up to 3 months behind today
4. It depends on where you get the data—somehow different people can find different numbers
5. For targeted audience
6. Over time
7. Based on operators' view—not as good as could be
Integration of Incident Records and Information
Agencies were also asked about the kinds of incident information other agencies kept and their efforts to use this other information to supplement data used to develop incident management performance measures. Their responses are summarized in Tables 14 and 15.
Although many agencies are aware of other sources of incident records (such as 911 dispatching logs), relatively few agencies indicated that they routinely integrate response information about incidents with other agencies (such as fire and police). Several agencies mentioned, however, that efforts were underway in their areas to integrate police and fire computer-aided dispatching (CAD) systems with their freeway management systems. These agencies anticipated that integrating 911 CAD dispatching with their systems should greatly enhance response and record-keeping capabilities.
Several agencies indicated that they do combine (or harmonize) information with police and/or emergency response agencies on an "as needed" basis. Generally, this involves taking information from the transportation agency's logs and matching it with information on the police or fire incident report forms. In the few cases where this is done, it generally occurs as part of a debriefing effort between agencies after a major incident or as part of the preparation for litigation. Agencies generally find the exercise fruitful in helping to establish a timeline of response events for a specific incident, which, in turn, allows them to more readily identify problems or bottlenecks in the response process.
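This kind of cross-agency matching amounts to pairing log entries by time and location proximity and then sorting the paired records into a single timeline. The sketch below illustrates the idea; the field names, tolerance values, and sample records are all hypothetical:

```python
# Hypothetical log-matching sketch: pair a TMC incident log with a CAD dispatch
# log by time and milepost proximity, then merge their entries into one
# response timeline. Field names and tolerances are illustrative assumptions.
TIME_TOL_MIN = 15   # logs rarely agree exactly, so allow a matching window
DIST_TOL_MI = 0.5

tmc_log = [{"t_min": 462, "milepost": 101.3, "event": "TMC notified"}]
cad_log = [{"t_min": 455, "milepost": 101.1, "event": "911 call received"},
           {"t_min": 468, "milepost": 101.1, "event": "police on scene"}]

def matches(tmc_rec, cad_rec) -> bool:
    """True if the CAD record plausibly refers to the same incident as the TMC record."""
    return (abs(tmc_rec["t_min"] - cad_rec["t_min"]) <= TIME_TOL_MIN
            and abs(tmc_rec["milepost"] - cad_rec["milepost"]) <= DIST_TOL_MI)

# Build one chronological timeline from all CAD records matching the TMC incident.
timeline = sorted(
    [r for r in cad_log if matches(tmc_log[0], r)] + tmc_log,
    key=lambda r: r["t_min"],
)
for rec in timeline:
    print(rec["t_min"], rec["event"])
```

The merged timeline makes gaps visible at a glance, e.g., how long after the 911 call the TMC was notified, which is exactly the kind of bottleneck the post-incident debriefings look for.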
Issues Involved in Establishing an Incident Management System
Table 15 shows how various agencies responded to questions concerning the issues faced when establishing an incident management system. Common issues cited include the following:
- Bringing agencies together to work in a coordinated and integrated fashion;
- Expanding the system to meet new objectives or added functionality with limited resources;
- Being the "new guy on the block" and having to establish a good working relationship with other response agencies;
- Providing consistent training for all agencies responsible for responding to incidents;
- Working with emergency services to strike a balance between providing a safe work environment for responders and maintaining traffic flow past the incident;
- Maintaining security of the system and confidentiality of data without affecting performance or response;
- Getting accurate information entered into databases without overburdening operators with too many data entry screens;
- Asking operations centers to do too much with too few resources; and
- Involving the private towing industry in the development of the system.
Agency | Do other agencies (such as fire, police, DOT, etc.) keep similar information about incidents in your jurisdiction? |
---|---|
Kansas DOT—Kansas City | State Police, Service Patrol |
New Jersey DOT | Police and fire keep information like number of incidents, but only part of the same information that the DOT collects |
Arizona DOT | No. They cover different aspects of the incident |
Ohio DOT—Columbus | Yes—police, service patrol |
Tennessee DOT | 911 center log—no interaction |
City of Phoenix, AZ—Fire Dept. | Yes—other fire departments in valley (outside jurisdiction) |
Maryland State Hwy Admin—CHART | Police and fire keep accident reports. All police reports go to DOT to look at for traditional statistics of accidents. |
Texas DOT—Austin | Have project to integrate ATMS with CAD system—automatically generate reports—operator will verify incident |
Texas DOT—San Antonio | Police—incident report on call, keep when they arrive on scene and when cleared; Fire—own method of notification, on file at district |
Minnesota DOT—Minneapolis | No. Now have CAD linked to State Patrol |
Caltrans—San Diego | No. Others do, but haven't tried to integrate |
Southeast Michigan COG—Detroit | Yes, I assume so but probably not to the degree SEMCOG does (with all the integrated data). |
Houston, TX.—Motorist Assistance Patrols | Yes, TxDOT |
New York DOT | State police use incident cards. Fire, EMS keeps records of dispatch, arrival, departure times but no traffic incident information. |
Colorado DOT | No |
Texas DOT—Houston | Please contact those agencies. Three law enforcement agencies, City and County Traffic, and METRO, the local transit authority, are also housed at TranStar. They have access to the incident database as well as access to input data. To the best of our knowledge they do not do so. |
Illinois DOT—Chicago | State police, service patrol |
City of Houston, TX Police Dept. | Yes—TxDOT, MAP |
North Carolina DOT | Police reports |
Connecticut DOT | Yes |
Agency | Do you integrate or compare information with other agencies? | If so, When? | If so, How Often? | If so, How ? | What are generally your findings when this occurs? |
---|---|---|---|---|---|
Kansas DOT—Kansas City | No | ||||
New Jersey DOT | Share information with Delaware regional planning organization, DOT planning unit for congestion management program | ||||
Arizona DOT | No. They cover different aspects of the incident | Partnering sessions between DPS and state | Quarterly | Given as a presentation with report as supporting documentation | Does not change the state of how things are handled. |
Ohio DOT—Columbus | Haven't compared yet—requested that information six months ago and are just now receiving data from City of Columbus public safety and police department to compare with service patrol; hope to show reduction in accident rates due to service patrol and TMC | | | | |
City of Phoenix, AZ—Fire Dept. | Yes | January | Annual formally; informally more often (phone) | Across all 26 cities in agreement, written copies to chiefs | |
Maryland State Hwy Admin—CHART | Starting to look at this w/ police and 911 centers | ||||
Texas DOT—Austin | Yes | As needed | As needed | Hardcopy—TMT response to specific incidents | Information similar—similar time stamps, when responders showed up on scene. Records state change in TCD response |
Texas DOT—San Antonio | Hope to integrate with Police CAD system | ||||
Minnesota DOT—Minneapolis | No. Now have CAD link to State Patrol | Accident reports w/ highway patrol—MinnDOT compares to State | As needed | | Generally good. Lots of incidents are not accidents. See crashes that don't have accident reports. Stalls are a big incident source. |
Caltrans—San Diego | Yes | For specific reason—may debrief after major incident; serve in court case | Infrequently, rare | ||
Southeast Michigan COG—Detroit | Yes | Whenever we can | Using GIS | Still being determined. | |
New York DOT | Yes | Can find out from state police (co-located). Time incident came in—can use to enter more accurate detection time than time stamp from MIST when entered (for major incidents) | | | May get CAD system in future, be able to query other agency activities. |
Texas DOT—Houston | Law enforcement does not share information readily with the DOT | ||||
City of Houston, TX Police Dept. | No | ||||
North Carolina DOT | Yes | Varies—regular meeting in areas to critique incident management | Monthly | Meeting of interagency Committee | Depends on area. Don't want to point fingers in area. Good information for improving response. |
Agency | What kinds of issues were faced when setting up the system and how were they resolved? |
---|---|
Kansas DOT—Kansas City | Current system is an incident management manual. The manual is posted on the website (www.kdot1.kfdot.org/public/kdot/kcmetro/kcindex), which also includes press releases, lane closures, etc. Before, there were problems with police/fire unnecessarily blocking lanes (e.g., fire blocking 2 lanes to extinguish a brush fire, police not clearing lanes fast enough). Before, multiple agencies might respond to major incidents with no way to notify the media, because each agency might want to use a different diversion route. Now 30 cities, 12 counties, and 2 states cooperate using the incident manual Juanita developed. She talked to each agency before developing the manual to get input, then again after it was created to explain the need for prompt response and clearance. The manual has planned diversions for specific locations and a list of contacts, and also describes which agencies cover what and when to notify other agencies, including other states and federal agencies. The manual is updated 2 times/year. All agencies receive e-mail notification of manual updates. |
New Jersey DOT | Have problems trying to expand. Feds are behind expansion 100 percent as is the MPO, but design wants to spend money for paving, etc. |
Arizona DOT | We went from a Phoenix-only operation to a statewide center. This created institutional barriers within the state DOT as local employees started to handle statewide system issues. Financial barriers were encountered in the form of communications needs. Operations were found to be non-uniform across the state. Training for the handling of incidents was found to be inconsistent and was addressed through the creation of training standards. |
Ohio DOT—Columbus | It is going to take some time to develop a real collaborative effort, with all of us understanding that we work for the same employer—the taxpayer. City police work really well on the freeway and understand the importance of quick removal of lane-blocking incidents. Have problems with the fire department blocking too many lanes (e.g., blocking three lanes for a one-lane-blocking incident). Had a recent event where multiple units responding to an incident blocked extra lanes on one side of the freeway; an additional fire unit arrived on the other side of the freeway and blocked the inside lane, was not needed, but remained on scene in the vehicle. Police did not make them clear the area. Have heard fire agencies in other areas act similarly; Washington may need to act to bring about change. Need a better communication system between agencies; currently using cell phones. |
Tennessee DOT | They are the "new guy". Initially, had warm welcome at scene. Has greatly improved over years. Quick clearance issues w/ fire dept. Trying to add this to fire training; Memorandum of understanding with TennDOT and local |
City of Phoenix, AZ—Fire Dept. | System is very old, built like a snowball (began in 1945 with chiefs meeting and sharing; 1960 expanded and kept information; 1971 began paramedics; 1977 HAZMAT). At each expansion, obstacles included the City Manager asking why greater funds were needed; labor sees this as extra work added to their job—collecting was a pain, but automation has minimized this. |
Maryland State Hwy Admin—CHART | Hard to get code that is user (operator) friendly from contractor (off-the-shelf)—want to create custom software |
Texas DOT—Austin | How do we use the system—when/how do we pull information from the system |
Texas DOT—San Antonio | Security (keeping the system safe so someone can't corrupt it) and confidentiality (displaying accidents without notifying family; police need more detailed personal information than traffic operators do) |
Caltrans—San Diego | Too much to do; too little resources |
Houston, TX—Motorist Assistance Patrols | Funding—type of vehicles to use, type of services to offer; Funding—created a public/private partnership; Vehicles—Carrying capacity and safety of vehicle; Services—determined type of incidents that might occur while driving. |
Colorado DOT—Lakewood | Getting accurate information to database, increased training; Response/clearance times reduced now through cooperation with police. DOT has provided police units with courtesy patrol radios, so courtesy patrol can contact police directly from the scene if police involvement needed. |
Texas DOT—Houston | When the integrated incident management database was developed, input was requested of all TranStar partner agencies, including Law Enforcement and Transit. There were features requested by Law Enforcement that have never been used because they choose not to get involved in inputting data. However, incorporating these features expanded the database GUI beyond what was needed by TxDOT, causing operators to have to sift through more functions than were required. However, it was deemed that too much was better than too little. |
Illinois DOT—Chicago | The private towing industry complained when the service patrol was starting up; those issues were ironed out over time. There was some opposition to using tax dollars for a service patrol, but we have shown that the peak periods are shorter with the patrol than without. Been in the incident management business for 40 years; none of those guys are left to talk to. |
North Carolina DOT | Turf battles between agencies—face-to-face talks |
Most Important Things To Be Measured in Incident Management Program
As a final question in the survey, respondents were asked what the most important things to be measured in an incident management program were, whether or not they were currently collecting those particular performance measures. Their responses are contained in Table 17.
Almost all of the agencies agreed that monitoring time-related performance measures was important for gauging the success of an incident management program. Important time-related performance measures to be monitored include the following:
- Response time,
- Duration on scene,
- Clearance times, and
- Detection times.
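These time-related measures can all be derived from a handful of timestamps in an incident log. A minimal sketch, assuming a simple record with one timestamp per milestone (the field names and times are illustrative, not from any surveyed agency):

```python
from datetime import datetime

FMT = "%H:%M"

def minutes_between(start, end):
    """Elapsed minutes between two same-day HH:MM timestamps."""
    return (datetime.strptime(end, FMT) - datetime.strptime(start, FMT)).seconds // 60

# Hypothetical incident record; the milestone names mirror the survey's measures.
incident = {
    "occurred": "07:40",   # estimated start of incident
    "detected": "07:42",   # detection (e.g., CCTV, call-in)
    "arrived":  "07:59",   # first responder on scene
    "cleared":  "08:55",   # lanes reopened
}

detection_time = minutes_between(incident["occurred"], incident["detected"])
response_time  = minutes_between(incident["detected"], incident["arrived"])
clearance_time = minutes_between(incident["arrived"],  incident["cleared"])
duration       = minutes_between(incident["occurred"], incident["cleared"])
```

With consistent milestone timestamps, each of the four measures above is a simple difference, which is why agencies with automated time-stamping report far less data-entry burden.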
Many also cited the need for performance measures that relate to the quality of the service being provided, or that quantify the ability of the system to monitor and effect a change in traffic control. Several performance measures that agencies mentioned along these lines include the following:
- The amount of delay caused by incidents in the system;
- The road user costs associated with congestion caused by incidents;
- The reduction in the overall delay caused by incidents;
- The reduction in the total duration of the incident (how long lanes were blocked); and
- The reduction in driving time of the public through incident scenes.
Agency | In your opinion, what are the most important things to be measured, whether or not you are currently collecting? |
---|---|
New Jersey DOT | Delay caused by incidents; road user costs, B/C—how incident duration is reduced by ITS |
Arizona DOT | Notification, detection, response time, on-scene time, clear time, and closing of incident |
Ohio DOT—Columbus | It differs from urban area to urban area. The incident managers need to define their worst enemy, e.g., Hazmat, roadway geometries, weather, etc. and collect data before and after program implemented to show reduction in performance measures for program justification. |
Tennessee DOT | Time of clearance—moved to shoulder or exit; # of response units—make sure there isn't anyone there who doesn't need to be |
City of Phoenix, AZ—Fire Dept. | Time related measures; quality (of performance) related measures; info to tie performance to specific budget expenditures |
Maryland State Hwy Admin.—CHART | The more data you have, the better off you are |
Texas DOT—Austin | Response time; traffic control device changes; when response is provided, who/how many need—right now, we are more interested in did we do something, and not necessarily when we did something; finding information and making sure public has access to it. |
Texas DOT—San Antonio | Incident detection time; power of system that allows you to make changes in system; ability of system to monitor system and recommend changes; quality of information (data)—direct impact on response; good PR program |
Minnesota DOT—Minneapolis | Response time; clearance time—when they arrive, when they are out of lanes, and when total clear; on-site measures to ensure scene safety |
Caltrans—San Diego | What decision-makers are doing; when is significant to people and decision-makers |
Southeast Michigan COG—Detroit | Clear times, time it takes to return to free flow conditions, time and locations of occurrences, location of abandoned vehicles |
Houston, TX. Motorist Assistance Patrols | Services offered, reduction in delays in driving time for the public due to traffic incidents |
New York DOT | Response time; clearance time; resumption to normal flow; times individual lanes opened/closed; secondary accidents—can be reduced if the word gets out quickly about existing incidents |
Texas DOT—Houston | Accident: location, frequency, time of day, surface conditions; Detection: time, method, frequency; Response time; Clearance time; time required to dissipate the queue. Quantitative differences in these areas by type of incident |
Illinois DOT—Chicago | Cause and effect of incident; Incident type vs. congestion factor; Will be upgrading computers and software—new database should improve information data collection and reporting. |
City of Houston, TX Police Dept. | Time incident occurred; location—street and intersection; response time; clearance time; lane closure information |
North Carolina DOT | Incident duration; response by agencies; effectiveness of response |