Construction of e-Permit/VWS Model Sites: Project Summary Report
Laurel, Kentucky and Unicoi, Tennessee
Chapter 6. Site Performance (Task 5)
Following installation and calibration, a site performance review commenced. This chapter summarizes the test results and metrics collected.
Each site utilized the Smart Roadside Inspection System (SRIS) as an automated tool to assist enforcement officers in screening commercial vehicles. The system consists of two subsystems:
- License plate reader.
- USDOT number reader.
The license plate reader and USDOT reader subsystems were evaluated over several days and in various weather conditions at the Laurel County, Kentucky, and Unicoi County, Tennessee sites. As previously identified, the Kentucky site screens commercial motor vehicles (CMV) on a highway, with mobile enforcement monitoring from off-site. The Tennessee site screens CMVs on a ramp, with mobile enforcement present on-site to conduct enforcement activities. Overall system performance was measured using an identification rate, which combines the performance of the license plate reader and USDOT reader and evaluates how often the system captures at least one piece of identifying information on the vehicle correctly.
This initial system performance testing was supplemented by a separate test conducted by Oak Ridge National Laboratory (ORNL).
The remainder of this chapter consists of three sections. The first describes the system features and functionality tested by the project team. The second provides the team's system performance summary at both sites. The third details the subsequent performance test conducted by ORNL, their recommendations, and the team's response to those recommendations.
Features and Functionality Tested
Table 4 below identifies the key site processes/functionality, how the functionality was tested, and whether the functionality was satisfied during the test. It is based on a project team assessment of functionality found in USDOT e-permit/Virtual Weigh Station Architecture, version 1.2, August 18, 2011.1
Table 4. Virtual weigh station functionality assessment.
| e-Permit/VWS Process/Functionality | Description | How Functionality Will Be Satisfied | Compatibility with National ITS Architecture | How Functionality Has Been Satisfied |
|---|---|---|---|---|
| Obtain identifying information | Automatically capture commercial vehicle identifier(s), e.g., license plate numbers. | ALPR and AUNR will capture license plate and USDOT numbers and convert them to digital character strings. | 'Identification information' flow of CVO03 and CVO06. | The ALPR and AUNR have been confirmed to capture license plate and USDOT numbers and convert them to digital character strings. |
| Identify vehicle | Cross-reference identifying information with relevant databases to identify truck and carrier. | License plate/USDOT numbers will be referenced to appropriate databases (e.g., CVIEW), which will return truck and carrier information. | 'Identification information' flow of CVO03 and CVO06. | The license plate/USDOT numbers have been confirmed to reference appropriate databases, returning truck and carrier information. |
| Collect measurement data | Automatically capture commercial vehicle weight and dimensions via on-site sensors. | WIM will measure vehicle weight, length, axle spacings, and class, package the data into a standard message, and forward it to the screening system. | 'CVO weight and presence' flow of CVO03 and CVO06. | The WIM has been confirmed to measure vehicle weight, length, axle spacings, and class, and the data has been confirmed to be packaged and forwarded to the screening system. |
| Correlate vehicle identification and measurement data | Uses data from the Identify vehicle and Collect measurement data processes to create a Vehicle Transaction Record with identification information and weight measurement data. | SRIS will automatically correlate these data and output a transactional record for screening. | 'Identification information' and 'CVO weight and presence' flows of CVO03 and CVO06. | SRIS has been confirmed to automatically correlate the identification and measurement data, outputting a transactional record for screening. |
| Conduct screening | Automatic querying of State back office systems against preset business rules and generation of alerts to enforcement staff. | Screening algorithm will perform queries of State databases and screen using rules set by test participants (TN only—KY will confirm data in the Kentucky Assistive Technology Service (KATS) Network). | 'Roadside electronic screening' and 'Roadside WIM' equipment packages of CVO03 and CVO06. | SRIS has been confirmed to perform queries of State databases and screen using rules set by test participants in TN. In KY, the project team provided access to the vehicle data and KATS performed the screening using rules set by test participants. |
| Alert enforcement | "Push" alerts provided to enforcement staff to notify them of suspected violations. | Users can configure SRIS software to provide visual, audio, or other alarms for trucks flagged for further scrutiny. | 'Information on violators' flow of CVO03. | SRIS software has been confirmed to provide visual, audio, or other alarms for trucks flagged for further scrutiny. |
| Direct CMV action | Processes for notifying drivers of screening results and/or directing them to a location for further assessment. | Kentucky—Screening data will be monitored by enforcement staff in the I-75 weigh station in Laurel. Staff will be dispatched to intercept CMVs on U.S. 25 as required/available. Tennessee—Trucks will pull into the existing facility when it is open and staffed by officers, who can weigh vehicles with portable scales and conduct inspections as needed. | 'CVO pass/pull-in message' flow of CVO03. | Direct CMV action has been confirmed through the use of KATS in the I-75 weigh station, and staff have intercepted CMVs on U.S. 25 as required/available. |
| Capture enforcement action | Process to create a record of any enforcement action taken for a given Vehicle Transaction Record. | Enforcement personnel will update Vehicle Transaction Records as they do now. | N/A. | This has been confirmed by enforcement personnel updating Vehicle Transaction Records as they do now. |
| Update central database | Processes to update the State back office system with the results of enforcement actions. | Updated Vehicle Transaction Records to be uploaded back to the State back office system (e.g., CVIEW). | 'Violation notification', 'daily site activity data', 'citation', and 'CV driver record request' flows of CVO03. | N/A—SRIS is a user of the data rather than a creator of the data. |
| Refresh on-premises database | Obtain a static copy of the State back office system, including CVIEW, to allow for near-real-time screening. (Alternately, a direct connection to the State back office system may be used if the site has high-speed connectivity to that system.) | N/A—Both sites will feature a hard-wired Internet connection into the State system to allow for real-time screening and updating. | 'Violation notifications', 'daily site activity data', 'citation', 'CV driver record request', 'safety status information', 'credential information', 'credential status information', and 'CV driver record' flows of CVO03. | N/A—Both sites will feature a hard-wired Internet connection into the State system to allow for real-time screening and updating. |
Test Overview
Evaluation Criteria
The evaluation criteria began with an analysis of the camera subsystems and then combined those results to report the overall identification rate. The camera subsystem analysis is a two-step process: collecting data, then reviewing the results and annotating whether the reader achieved the correct decode or, if not, why the reader had trouble with that image. A standard annotation scheme was developed by the technology vendor to assist in consistently measuring system performance. Definitions for these annotations are given in table 5 and table 6 for the license plate reader and USDOT reader, respectively. A result can only belong to a single category.
Competing systems often include a "Not Machine Readable" category to capture failed decodes caused by poor image quality, sun- and shadow-related effects, and exotic or very difficult-to-read fonts. We include all events that would normally fall under this category in our performance analysis and assign them as either "Incorrect" or "Not Read" results, because the system is expected to always capture high-quality images suitable for optical character recognition. In addition, the system is expected to read all legal USDOT and license plate fonts, so fonts that were exotic or difficult for a machine (but not a human) to read were included in the analysis. This provides the most accurate representation of the performance of each system.
Table 5. Result definitions for license plate reader.
| Result | Definition | Code |
|---|---|---|
| Correct | All the digits on the license plate matched the decode result perfectly (no added digits, no missed digits, and no incorrect digits). | CO |
| Incorrect | Not all of the digits on the license plate matched the decode result (one or more digits added, missed, and/or incorrect). | IN |
| Not Read | System did not locate a visually verified front license plate. | NR |
| Not a Commercial Vehicle | The vehicle was not a commercial vehicle (FHWA class 1, 2, 3, and 4), e.g., recreational vehicle, car, SUV, van, pickup truck. | NCV |
| No License Plate | The commercial vehicle did not have a front license plate. | NLP |
| Excluded | A damaged, highly bent, very dirty, chipped, or occluded license plate that was the cause of a failed read. This category includes unexpected driver behavior, such as when the vehicle was outside of the lane markers. | EX |
Table 6. Result definitions for U.S. Department of Transportation reader.
| Result | Definition | Code |
|---|---|---|
| Correct | All the numbers in the USDOT number matched the decode result perfectly (no added numbers, no missed numbers, and no incorrect numbers). | CO |
| Incorrect | Not all of the numbers in the USDOT number matched the decode result (one or more numbers added, missed, and/or incorrect). | IN |
| Not Read | System did not locate a visually verified USDOT number. | NR |
| Not a Commercial Vehicle | The vehicle was not a commercial vehicle (FHWA class 1, 2, 3, and 4), e.g., recreational vehicle, car, SUV, van, pickup truck. | NCV |
| No USDOT Number | The commercial vehicle did not have a USDOT number. | NUS |
| Excluded | A damaged, very dirty, chipped, or occluded USDOT number that was the cause of a failed read. USDOT numbers that did not meet the FHWA standards are placed in this category. This category includes unexpected driver behavior, such as when the vehicle was outside of the lane markers. | EX |
After annotating each result, the read rate is computed by the equation in figure 8:
Figure 8. Equation. Read rate.

Read Rate = N_CO / (N_CO + N_IN + N_NR) × 100

Where N_CO, N_IN, and N_NR denote the number of results labeled with the Correct, Incorrect, and Not Read categories, respectively, during the time periods of analysis. Vehicles with a code of NCV, NLP/NUS, or EX are therefore excluded from the computation of read rates. Figure 8 applies to the computation of the read rate for both the license plate reader and the USDOT reader.
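As a concrete illustration, the read-rate computation in figure 8 can be sketched in a few lines of Python; the function and the sample annotation list are illustrative, not part of the deployed SRIS software.

```python
def read_rate(codes):
    """Read rate per figure 8: N_CO / (N_CO + N_IN + N_NR) * 100.

    Vehicles annotated NCV, NLP/NUS, or EX are excluded from the
    denominator, per the definitions in tables 5 and 6.
    """
    n_co = codes.count("CO")
    n_in = codes.count("IN")
    n_nr = codes.count("NR")
    denom = n_co + n_in + n_nr
    return 100.0 * n_co / denom if denom else 0.0

# Illustrative annotations: 3 correct reads out of 5 countable vehicles;
# the NCV and EX vehicles do not count toward the rate.
sample = ["CO", "CO", "IN", "NR", "NCV", "EX", "CO"]
print(read_rate(sample))  # prints 60.0
```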
The identification rate is a measure of the system's ability to correctly decode either the license plate or the USDOT number. It can be computed directly from the license plate reader and USDOT reader results; annotations are assigned per table 7.
Table 7. Result definitions for identification rate.
| Result | Definition | Code |
|---|---|---|
| Correct | Either the license plate was decoded correctly, the USDOT number was decoded correctly, or both were decoded correctly. | CO |
| Not a Commercial Vehicle | The vehicle was not a commercial vehicle (FHWA class 1, 2, 3, and 4), e.g., recreational vehicle, car, SUV, van, pickup truck. | NCV |
| No Identification | The commercial vehicle had neither a front license plate nor a USDOT number (i.e., the commercial vehicle was labeled NLP by the license plate reader and NUS by the USDOT reader). | NID |
| Excluded | The commercial vehicle had a code of EX for the license plate reader and a code of NUS or EX for the USDOT reader, or a code of EX for the USDOT reader and a code of NLP or EX for the license plate reader. | EX |
| Incorrect or Not Read | The commercial vehicle was not CO, NID, or EX. In other words, it had a code of IN or NR for the license plate reader and a non-CO code for the USDOT reader, or a code of IN or NR for the USDOT reader and a non-CO code for the license plate reader. | INNR |
The identification rate is computed according to figure 9:
Figure 9. Equation. Identification rate.

Identification Rate = N_CO / (N_CO + N_INNR) × 100

Where N_CO and N_INNR denote the number of results labeled Correct and Incorrect or Not Read, respectively. Vehicles with a code of NCV, NID, or EX are excluded from the computation of the identification rate.
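The combination rules of table 7 and the identification-rate formula in figure 9 can be sketched together as follows; the function names and the sample code pairs are illustrative.

```python
def identification_code(lpr, usdot):
    """Map per-reader codes (tables 5 and 6) to a table 7 code."""
    if lpr == "CO" or usdot == "CO":
        return "CO"                      # at least one correct decode
    if lpr == "NCV" or usdot == "NCV":
        return "NCV"                     # not a commercial vehicle
    if lpr == "NLP" and usdot == "NUS":
        return "NID"                     # no identifying information present
    if (lpr == "EX" and usdot in ("NUS", "EX")) or \
       (usdot == "EX" and lpr in ("NLP", "EX")):
        return "EX"                      # excluded per table 7
    return "INNR"                        # incorrect or not read

def identification_rate(pairs):
    """Identification rate per figure 9: N_CO / (N_CO + N_INNR) * 100."""
    codes = [identification_code(l, u) for l, u in pairs]
    n_co, n_innr = codes.count("CO"), codes.count("INNR")
    denom = n_co + n_innr
    return 100.0 * n_co / denom if denom else 0.0

# Two vehicles identified, one missed, one excluded as NID.
pairs = [("CO", "NR"), ("IN", "CO"), ("IN", "NR"), ("NLP", "NUS")]
print(round(identification_rate(pairs), 1))  # prints 66.7
```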
Test Results
Overall system performance was found to be quite good. The license plate reader read rate was 77.4 percent at Laurel County and 79.9 percent at Unicoi County, while the USDOT reader read rate varied from 77.7 percent in Laurel County to 82.2 percent in Unicoi County. The identification rates were 89.4 percent for Laurel County, KY, and 92.7 percent for Unicoi County, TN.
The subsystem read rates and the identification rate, as assessed over two days in October 2015, are summarized in table 8. Note that a correct identification is based on a correct decoding of either the USDOT number or the license plate (or both).
Table 8. Summary of overall system performance.
| Site | License Plate Reader—Read Rate | USDOT Reader—Read Rate | Combined Systems—Identification Rate |
|---|---|---|---|
| Laurel County, KY | 77.4% | 77.7% | 89.4% |
| Unicoi County, TN | 79.9% | 82.2% | 92.7% |
Kentucky
Testing at the Laurel County, KY site commenced at 7:00 a.m. on October 18, 2015, and ended at 7:00 a.m. on October 20, 2015. The weather on October 18 was overcast, with rain in the late afternoon and evening. October 19 was bright and sunny, with no rain. A total of 1,291 vehicles with a length greater than 25 feet passed the VWS during this period. (This length threshold was used to filter out most small vehicles and reduce the number of vehicles requiring review.) A summary of performance at the site is shown in table 9 below.
Table 9. Laurel County, Kentucky virtual weigh station performance summary.
| Time of Day | License Plate Reader—Read Rate | USDOT Reader—Read Rate | Combined Systems—Identification Rate |
|---|---|---|---|
| Day | 77.0% (note 1) | 78.8% (note 2) | 89.0% |
| Night | 78.4% | 74.3% | 90.2% |
| Overall | 77.4% | 77.7% | 89.4% |

Note 1: Rain affected the license plate reader in the late afternoon on October 18, 2015.
Note 2: Sunlight through trees behind the camera affected daytime USDOT reader performance on October 19, 2015.
The license plate reader performance at Laurel County was found to be weather-dependent. The camera performed best in sunlight but suffered during dark, overcast periods. In addition, nighttime rain created a reflective surface on the road that confused the camera. The license plate reader achieved an 88.5 percent read rate from 7:00 a.m. to 2:00 p.m. on October 18, 2015, just before the dark weather and rain started, highlighting the variability in performance with respect to external conditions.
The USDOT reader performance was significantly lower at the Laurel County site than at Unicoi County because the sun shining through large trees behind the camera created complicated shadow patterns across passing trucks for about four hours each day. An analysis of the USDOT reader during overcast weather on October 18, 2015, revealed that daytime performance was significantly higher, with a read rate of 83.6 percent. The USDOT reader performance at Laurel is likely highly dependent on the amount of sunlight and the position of the sun at different times of the year. Removing the trees behind the camera would improve the performance of the USDOT reader.
In addition, a small percentage of drivers crossed the center line, which affected both license plate reader and USDOT reader performance. Vehicles that did this were removed from the study.
Tennessee
Data was collected at the Unicoi County, TN site between October 16, 2015 at 2:00 p.m. and October 19, 2015 at 2:00 p.m. local time for a total of 72 hours. A total of 1,268 vehicles were recorded and analyzed during that time.
Site performance at Unicoi is shown in table 10. Overall performance was higher than at the Laurel County, KY site for both the License Plate Reader and USDOT Reader systems, and system performance was much more consistent with respect to weather or time-of-day.
Table 10. Unicoi County, Tennessee virtual weigh station performance summary.
| Time of Day | License Plate Reader—Read Rate | USDOT Reader—Read Rate | Combined Systems—Identification Rate |
|---|---|---|---|
| Day | 79.8% | 84.5% | 94.2% |
| Night | 80.2% | 77.5% | 89.7% |
| Overall | 79.9% | 82.2% | 92.7% |
Oak Ridge National Laboratory Tests
Oak Ridge National Laboratory (ORNL) was tasked by FHWA and FMCSA to conduct an independent evaluation of the reliability and accuracy of the information collected at the Laurel County, KY and Unicoi County, TN sites.
Methodology
ORNL collected data from both sites in January 2016. For each site, the data was divided into two sets, and the information was used to analyze the reliability and accuracy of the parameters collected. (For more detailed definitions of "reliability" and "accuracy," refer to Section 2.1 of "Reliability and Accuracy of Laurel County, Kentucky and Unicoi County, Tennessee Virtual Weigh Stations: Final Report.") Reliability and accuracy are analogous to the read rate and identification rate metrics used by the study team, though the methodology used to determine the scores differed.
The first data set, "All Data," was the entire collection of records downloaded by ORNL from each site. This information was used to assess the reliability of the system at identifying meaningful information for the parameters under consideration (i.e., USDOT number, license plate jurisdiction and number, and some inferences about number of axles and vehicle weight). Because of the large dataset (thousands of observations), the assessment of the reliability of these parameters has strong statistical significance.
Reliability R (in percent) is defined in figure 10 as:
Figure 10. Equation. Reliability.

R = (N_v / N_t) × 100

Where N_t is the total number of observations (i.e., the data sample size) and N_v is the total number of valid observations (i.e., the total number of observations that contain meaningful—true or false—information). Notice that N_v <= N_t always.
Even in cases where no direct or independent measurements were made, system reliability boundaries could be determined. For example, given a set of records in which the system provides the number of axles of the vehicles, records showing vehicles with 0 (zero) or 1 (one) axle can be counted as having unreliable (and inaccurate) information. In the same way, vehicles weighing more than 150,000 lb. as determined by the system can be counted as having unreliable and inaccurate information.
However, when the parameters are within logical values (e.g., number of axles between 2 and 11, or vehicle weight between 25,000 lb. and 80,000 lb.), it is necessary to use external methods to assess the validity of these parameters. This led to the creation of a second data set, "Selected Data."
Starting from the oldest record, the analyst selected one record at random and, using the images collected by the system, determined the outdoor conditions in terms of light (daytime or nighttime) and weather (clear, rainy, etc.). Thirty or more consecutive observations were then selected and added to the second set. After that, another record was selected at random and the outdoor conditions verified. If a subset with these conditions had not yet been selected, thirty or more consecutive observations were chosen and added to the second set. If the outdoor conditions were already included in the second set, the analyst selected a different record at random and applied the same methodology to determine whether to select data for the second subset. The second data set—"Selected Data"—for each site consisted of about two hundred observations, which were used to visually compare the information contained in the images to the information extracted from those images by the system.
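The selection procedure described above can be sketched as follows. The record structure, the classify() helper, and the condition labels are hypothetical; only the selection logic (random starting record, a block of consecutive observations per not-yet-covered outdoor condition) follows the text.

```python
import random

def select_data(records, classify, per_condition=30, seed=0):
    """Build a "Selected Data"-style set: for each outdoor condition seen in
    the data, take a block of consecutive observations starting at a randomly
    chosen record of that condition."""
    rng = random.Random(seed)
    all_conditions = {classify(r) for r in records}
    selected, covered = [], set()
    while covered != all_conditions:
        i = rng.randrange(len(records))
        condition = classify(records[i])   # e.g., ("night", "rain")
        if condition in covered:
            continue                       # condition already sampled; redraw
        covered.add(condition)
        selected.extend(records[i:i + per_condition])
    return selected

# Hypothetical records tagged only with a light condition.
records = [("day", i) for i in range(40)] + [("night", i) for i in range(40)]
subset = select_data(records, classify=lambda r: r[0])
```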
Other parameters, such as vehicle total weight, axle weight, and axle spacing required direct measurements of those parameters. The Tennessee Highway Patrol (THP) randomly inspected selected vehicles that traveled by the system at the Unicoi County site, using portable scales (calibrated regularly and certified to have at most +/-3% error) to weigh each axle of the inspected vehicle, and measuring tapes to determine the axle spacing. (A similar analysis was not conducted at Laurel County, KY due to safety concerns.) Electronic forms were provided by ORNL so the officers could enter the information collected in the field as well as the information provided by the system for the same vehicle.
ORNL used the "Selected Data" set to compare the parameter values against the information visible in the images captured by the system for the USDOT number and license plate information (as well as other parameters, such as the number of axles). Because of this, reliability measures are presented in two ways in some of the results.
Only the "Selected Data" set was used to determine system accuracy. Because in some cases it was not possible to visually corroborate some of the parameters provided by the deployed system using the captured images (e.g., in some cases it was not possible to visually determine the jurisdiction shown on a license plate due to the low quality of the image), two types of accuracy observations were defined:
1. Y*: the number of observations for which it is not possible to corroborate the information provided by the system (e.g., impossible to visually determine the License Plate Jurisdiction). The "benefit of the doubt" is given to the system, and the observation is labeled as accurate.
2. Y: the number of observations for which it is possible to visually corroborate the information provided by the system, and the information is found to be accurate.
The ORNL report used two definitions of accuracy. Absolute accuracy Aa (in percent) is defined as shown in figure 11.
Figure 11. Equation. Absolute accuracy.

A_a = (N_ta / N_t) × 100

Where N_t is the total number of observations (i.e., the data sample size) and N_ta is the total number of accurate observations (i.e., the total number of observations that contain information for which accuracy can be corroborated). Notice that N_ta <= N_v <= N_t always.
Relative accuracy A_r (in percent) is defined as shown in figure 12. Notice that A_a <= A_r.

Figure 12. Equation. Relative accuracy.

A_r = (N_ta / N_v) × 100
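A minimal numeric sketch of figures 10 through 12, checked against the Laurel County license-jurisdiction counts reported later in table 11 (214 observations, 184 valid, Y = 121 corroborated-accurate, Y* = 30 benefit-of-the-doubt); the function names are illustrative.

```python
def reliability(n_valid, n_total):
    """Figure 10: R = (N_v / N_t) * 100."""
    return 100.0 * n_valid / n_total

def absolute_accuracy(n_accurate, n_total):
    """Figure 11: A_a = (N_ta / N_t) * 100."""
    return 100.0 * n_accurate / n_total

def relative_accuracy(n_accurate, n_valid):
    """Figure 12: A_r = (N_ta / N_v) * 100."""
    return 100.0 * n_accurate / n_valid

n_total, n_valid, y, y_star = 214, 184, 121, 30
print(round(reliability(n_valid, n_total), 2))           # prints 85.98
print(round(absolute_accuracy(y + y_star, n_total), 2))  # prints 70.56
print(round(relative_accuracy(y, n_valid), 2))           # prints 65.76
```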
Results
The initial assessment found that the Unicoi County, TN site was working reasonably well but that the Laurel County, KY site was not functioning properly. For example, at the Laurel County site, images of the side of some vehicles were sometimes mixed with images of the license plates of different vehicles. At times, the system was triggering even with passenger cars. The January 2016 data from Unicoi County was analyzed as part of this review. For Laurel County, ORNL discarded the January 2016 data and instead analyzed data from March-April 2016 after the system was modified. (Raw data used in the analysis is found in Appendixes B and C of "Reliability and Accuracy of Laurel County, Kentucky and Unicoi County, Tennessee Virtual Weigh Stations: Final Report.")
Laurel County, Kentucky
To test accuracy, 214 observations (Selected Data) were manually corroborated from a pool of 16,176 observations (All Data) collected during the time from March 19, 2016 to April 18, 2016.
For the license plate jurisdiction measure, accuracy was difficult to determine. In just 121 of 214 cases could the analyst clearly see the jurisdiction and confirm that it matched what the system automatically provided. In an additional 30 cases, the analyst could not definitively determine that the system was wrong, so those records were evaluated as accurate/non-corroborated (Y* = 30). Table 11 below shows the results. This table (like all other tables in this section) contains two measures of accuracy: absolute accuracy and relative accuracy. The former uses all observations as the denominator, while the latter uses only the number of reliable observations. Both measures are useful, but the second gives an idea of accuracy in cases where it is possible to filter out unreliable information.
Table 11. License jurisdiction reliability and accuracy (Laurel County, Kentucky).
| Data Set | Measure | Count | Percentage |
|---|---|---|---|
| Selected Data | Reliability | 184 (214) | 85.98% |
| Selected Data | Absolute Accuracy (Y*+Y) | 151 (214) | 70.56% |
| Selected Data | Relative Accuracy (Y*+Y) | 151 (184) | 82.07% |
| Selected Data | Absolute Accuracy (Y) | 121 (214) | 56.54% |
| Selected Data | Relative Accuracy (Y) | 121 (184) | 65.76% |
| All Data | Reliability | 14,340 (16,176) | 88.65% |
While the reliability (meaningful information could be identified) of this parameter is relatively high, accuracy (correctness) of the information was very low. Of all the observations, between half and slightly over two thirds presented an accurate value for the License Jurisdiction parameter. System accuracy was the worst during clear nights.
The License Plate Number parameter had a reliability of 100 percent for the selected data and 99.4 percent for the entire dataset at the Laurel County site (i.e., the system generated some N/A or Null values for this parameter in the full dataset). The absolute accuracy was 69 percent for the selected data and slightly lower (68 percent) for the entire dataset analyzed.
For the USDOT number parameter, reliability was slightly less than 84 percent, while accuracy was around 70 percent. The system was most reliable during the day in light rain, and the highest accuracy was achieved during clear nights.
ORNL also analyzed the combined reliability and accuracy of the license plate jurisdiction, license plate number, and USDOT number, which are required for the identification of a vehicle. Table 12 shows that the reliability and accuracy of the combination of these three parameters are lower than for any one parameter taken separately. While the reliability was about 70 percent, the accuracy of the combination could be as low as 30 percent (absolute) and 41 percent (relative).
Table 12. License plate information and U.S. Department of Transportation number reliability and accuracy (Laurel County, Kentucky).
| Data Set | Measure | Count | Percentage |
|---|---|---|---|
| Selected Data | Reliability | 157 (214) | 73.36% |
| Selected Data | Absolute Accuracy (Y*+Y) | 79 (214) | 36.92% |
| Selected Data | Relative Accuracy (Y*+Y) | 79 (157) | 50.32% |
| Selected Data | Absolute Accuracy (Y) | 64 (214) | 29.91% |
| Selected Data | Relative Accuracy (Y) | 64 (157) | 40.76% |
| All Data | Reliability | 11,273 (16,176) | 69.69% |
The deployed system had high reliability in assessing the number of axles on a vehicle, about 97 percent. The accuracy was slightly lower (as low as 92.5 percent), with the worst outdoor conditions for both reliability and accuracy being nighttime clear conditions. Comparing the number of axles versus the number of axles with an identified weight (a value other than "0" or "NULL") also showed high reliability (94 percent with Selected Data, 98.6 percent with All Data) and accuracy (96 percent) values.
Unicoi County, Tennessee
For the Unicoi County site, 181 observations (Selected Data) were manually corroborated from a pool of 7,509 observations (All Data) collected from January 4, 2016, to January 31, 2016. The Selected Data comprised 35 daytime-cloudy, 36 daytime-partly cloudy, 40 daytime-sunny, 35 nighttime-clear, and 35 nighttime-rain observations.
For the license jurisdiction analysis, in only 87 of 181 cases could the analyst clearly see the jurisdiction and confirm that it matched what the system automatically provided. In an additional 42 cases, the analyst could not definitively determine that the system was wrong, so those records were evaluated as accurate/non-corroborated (Y* = 42). While the reliability of this parameter is relatively high, the accuracy was very low: of all the observations, between half and slightly over two thirds presented an accurate value for the License Jurisdiction parameter. Rainy nights were the worst-case weather condition for both reliability and accuracy measurements.
The license plate reader performed significantly better at the Unicoi County site than at the Laurel County site. As shown in table 13, the license plate number metric had a reliability of 100 percent for both the selected data and the entire dataset for the Unicoi County site (i.e., the system never generated a N/A or a Null). The accuracy was 77.4 percent (note, the absolute and relative accuracy measures were the same since the reliability was 100 percent). For this parameter, and similarly to License Plate Jurisdiction, the best outdoor conditions were daytime partly cloudy. However, the worst conditions were found during a sunny day, possibly due to reflection or the low number of observations conducted as part of this test.
Table 13. License plate number reliability and accuracy (Unicoi County, Tennessee).
| Data Set | Measure | Count | Percentage |
|---|---|---|---|
| Selected Data | Reliability | 181 (181) | 100.00% |
| Selected Data | Absolute Accuracy (Y) | 140 (181) | 77.35% |
| Selected Data | Relative Accuracy (Y) | 140 (181) | 77.35% |
| All Data | Reliability | 7,509 (7,509) | 100.00% |
The USDOT number parameter was 100 percent reliable but only 71 percent accurate. In many instances this was due to the system triggering (i.e., capturing the image of the side of the vehicle) too early and missing the information. This was particularly acute for vehicles with long cabs.
The reliability and accuracy of the combination of the three identification parameters (license plate jurisdiction, license plate number, and USDOT number) are lower than for any one parameter taken separately, since if just one of the three is not assessed correctly by the system, the combination is deemed unreliable or inaccurate. While the reliability was just below 90 percent, the accuracy of the combination could be as low as 37 percent (absolute) and 42 percent (relative), as shown in table 14. Again, nighttime rainy conditions were the worst for system reliability. For accuracy, however, the worst case was nighttime clear.
Table 14. License plate information and U.S. Department of Transportation number reliability and accuracy (Unicoi County, Tennessee).
| Data Set | Measure | Count | Percentage |
|---|---|---|---|
| Selected Data | Reliability | 161 (181) | 88.95% |
| Selected Data | Absolute Accuracy (Y*+Y) | 88 (181) | 48.62% |
| Selected Data | Relative Accuracy (Y*+Y) | 88 (161) | 54.66% |
| Selected Data | Absolute Accuracy (Y) | 67 (181) | 37.02% |
| Selected Data | Relative Accuracy (Y) | 67 (161) | 41.61% |

Note: Y* = 0.
The system deployed at the Unicoi County site showed a 93 percent reliability level in assessing the number of axles on a vehicle. The accuracy was slightly lower (as low as 91.2 percent), with the worst outdoor conditions for both reliability and accuracy being daytime cloudy conditions.
To analyze the accuracy of the system at determining vehicle weight, the Tennessee Highway Patrol collaborated with ORNL researchers to measure total vehicle weight, as well as axle weight and axle spacing, for randomly selected vehicles that entered the Unicoi County VWS. From June 2016 to September 2016, THP officers manually inspected 149 vehicles that were also inspected by the deployed system. After a first review of the data collected, 69 observations were discarded, either because the electronic forms contained incomplete information or because the information in the system's database was deleted before ORNL could retrieve it (the system automatically deletes information when it is 10 days old).
Most observations had a gross vehicle weight (GVW) error between -5 percent and +10 percent. However, there were several outliers, some of them large (as large as -87 percent). Most of these outliers were negative, indicating that in these cases the system underestimated the gross vehicle weight, although in a couple of cases the error was positive and larger than 10 percent. The conclusion was that, under the assumption that the portable scales used by THP personnel were error free, the system, on average, did not measure weight correctly.
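The GVW error used in this comparison is the system weight minus the ground-truth (portable scale) weight, expressed as a proportion of the ground-truth weight; a negative value means the system underestimated. A sketch of that computation, with illustrative weights (not actual project data; the last pair is chosen to reproduce the -87 percent outlier magnitude mentioned above):

```python
def gvw_error_pct(system_weight, measured_weight):
    """Signed percentage error; negative means the system underestimated."""
    return 100.0 * (system_weight - measured_weight) / measured_weight

# Illustrative (system, portable-scale) weight pairs in pounds.
pairs = [(78_500, 80_000), (81_200, 79_000), (10_400, 80_000)]
errors = [gvw_error_pct(s, m) for s, m in pairs]

# Observations outside the -5% to +10% band are treated as outliers.
outliers = [e for e in errors if not -5.0 <= e <= 10.0]
```

With these inputs the first two pairs fall inside the band and only the third (-87 percent) is flagged.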
Conclusion
For both sites, the reliability of the License Jurisdiction parameter was relatively high. However, its accuracy was very low.
In the case of the License Plate Number parameter, reliability was above 99 percent at both sites (100 percent at Unicoi County and 99 percent at Laurel County, where in some cases the license plate was not detected because it was mounted on the ventilation grille rather than on the front bumper, where the system expected it). The accuracy, however, was much lower: 77.4 percent overall at Unicoi County (with a worst case of 70 percent during sunny days) and 68 percent at Laurel County (with a worst case of 55 percent at night).
The USDOT number parameter was 100 percent reliable at the Unicoi County site, but only 79 percent reliable at Laurel County. The accuracy of this parameter was low at both sites (71 percent and 59 percent, respectively). In many instances this was due to the system triggering (i.e., capturing the image of the side of the vehicle) too early and missing the information.
Some applications of the system studied in this report require matching of the same vehicle at two different sites. The identification of a vehicle would require that the license plate jurisdiction, the license plate number, and the USDOT number all be accurate. ORNL analyzed this condition. As expected, the reliability and accuracy of the combination of these three parameters is lower than that of any one parameter taken separately. The reliability was just below 90 percent at the Unicoi County site and 70 percent at the Laurel County site. The accuracy of the combination was as low as 30 percent at Laurel County, and slightly better at Unicoi County, reaching 37 percent.
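Re-identification requires all three parameters to be read correctly at once, so a single wrong field spoils the match; that AND condition is why the combined accuracy is lower than any individual parameter's. A minimal sketch under assumed per-record fields (the field names are illustrative):

```python
def record_matches(record):
    """A record identifies a vehicle only if the jurisdiction, plate
    number, and USDOT number are all read correctly (logical AND)."""
    return (record["jurisdiction_ok"]
            and record["plate_ok"]
            and record["usdot_ok"])

# Illustrative records: each field is individually right 2 times out
# of 3, yet only one record satisfies all three at once.
records = [
    {"jurisdiction_ok": True,  "plate_ok": True,  "usdot_ok": True},
    {"jurisdiction_ok": True,  "plate_ok": True,  "usdot_ok": False},
    {"jurisdiction_ok": False, "plate_ok": True,  "usdot_ok": True},
]
combined_accuracy = 100.0 * sum(map(record_matches, records)) / len(records)
```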
The system deployed at Unicoi County showed a 93 percent reliability level at assessing the number of axles that a vehicle had. This parameter had a reliability of 97 percent at Laurel County. The accuracy was about 93 percent at both sites.
To analyze the accuracy of the system at determining vehicle weight, THP collaborated with ORNL researchers to measure total vehicle weight, as well as axle weight and axle spacing, for randomly selected vehicles that entered the Unicoi County VWS. The analysis showed that for GVW the average weight error (system weight minus measured weight, as a proportion of the measured weight) was different from zero. That is, the system was biased and overestimated weight. When each axle weight was evaluated individually, axle 1 presented a strong positive bias (its weight was overestimated) compared to the other axles. Because of this bias, in 90 percent of the cases at the vehicle overweight boundary the system labeled the vehicle as overweight when it was not, and in 10 percent of the cases it failed to flag the vehicle as overweight when it was.
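The per-axle bias check amounts to computing the mean relative error for each axle and seeing whether it departs from zero; a consistently positive mean for axle 1 is the overestimation described above. A sketch with illustrative (non-project) numbers:

```python
def mean_relative_error(system_weights, measured_weights):
    """Average of (system - measured) / measured over all observations;
    a nonzero mean indicates a bias rather than random scatter."""
    errors = [(s - m) / m for s, m in zip(system_weights, measured_weights)]
    return sum(errors) / len(errors)

# Illustrative axle-1 readings: the system consistently reads high.
axle1_system   = [12_600, 12_900, 12_300]
axle1_measured = [12_000, 12_000, 12_000]
bias = mean_relative_error(axle1_system, axle1_measured)  # positive => overestimate
```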
Regarding axle spacing, the system always provided an additional measurement as if an additional axle existed (e.g., for a five-axle vehicle, the system provided six spacing distances). When the data was compared with the THP field measurements, the axle 1-2 spacing and axle 3-4 spacing presented average errors (system spacing minus measured spacing, as a proportion of the measured spacing) that were different from zero, thus showing a (positive) bias.
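A vehicle with n axles has n - 1 inter-axle gaps, so the spurious trailing values the system reported can be detected with a simple length check. A sketch of the defensive trim that ORNL's observation suggests (the function and values are illustrative, not the deployed system's code):

```python
def trim_axle_spacings(axle_count, spacings):
    """An n-axle vehicle has n - 1 inter-axle spacings; drop any
    spurious trailing measurements beyond that count."""
    expected = axle_count - 1
    return spacings[:expected]

# Five-axle vehicle, but the system reported six spacing values (feet).
reported = [16.2, 4.3, 31.5, 4.1, 2.0, 7.9]
clean = trim_axle_spacings(5, reported)  # keeps only the first four
```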
In conclusion, the system showed accuracy too low to be used for re-identifying vehicles (i.e., vehicles that are identified at one site with the technology and then identified and matched at another site sometime later). Although the weigh-in-motion component of the system appears to be calibrated within the normal tolerances for these types of devices, axle 1 weight appears to require a different calibration factor than the rest of the axles. This is an indication that the algorithm used to assign axle weight may need to be revised. Also, if the tested system is used to identify overweight vehicles, a considerable number of false alarms can be expected, and in some cases (10 percent in this analysis) vehicles that are overweight will not be identified as such.
ORNL Recommendations and Response
At the completion of the analysis, ORNL made five recommendations to improve operations at the sites. Those recommendations, as well as responses from the project team, are found below.
- Recommendation 1—The image-capturing sub-system triggers too early or does not capture the entire side of a CMV cabin. This was especially pronounced with long cabs, where in many instances the deployed technology is unable to find the USDOT number because the image is not complete. It is recommended that a larger portion of the side of the CMV cabin be captured.
- Response 1—The project team currently implements multiple image capture for our Automated USDOT Reader (AUR), as proposed in the recommendation. When a USDOT number is successfully decoded, the details page displays the image with the decode. If the AUR is not able to decode a USDOT number from the captured images, then the first image from the sequence is displayed. This may have given the researchers a false impression of early triggering. With coordination, the team can enable image capture and make these image sequences available to ORNL for analysis.
- Recommendation 2—In many cases the license plate is not located on the front bumper of a CMV; instead, it is placed somewhere on the grille. The system already has built-in functions that allow it to look for the USDOT number anywhere on the side of the cabin. It is recommended that these functions be used to locate the vehicle license plate, especially if it is not located where it is expected to be.
- Response 2—The cameras supplied currently employ this feature. The project team has realigned the Automated License Plate Reader (ALPR) camera at Laurel County, KY to better capture the vehicles as they travel through the model site.
- Recommendation 3—The algorithm used to identify the license plate jurisdiction does not appear to be consistent and/or precise. It is recommended that this algorithm be revised and improved. It is acknowledged that there is a wide variety of license plate layouts and improving this algorithm may be challenging. However, if the technology is to be used to identify vehicles, then the license plate jurisdiction needs to be determined with a higher accuracy than what the deployed system showed.
- Response 3—The project team agrees that relying on the camera's jurisdiction reading alone is not ideal and is prone to misidentification of the jurisdiction. This is a persistent problem with most license plate jurisdiction algorithms. The team has implemented a more accurate method of determining the jurisdiction by querying the SAFER (Safety and Fitness Electronic Records) database with the license plate number and using the jurisdiction it returns. This provides an error rate of less than 2 percent, compared to a much higher error rate from the ALPR alone.
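The team's approach can be summarized as: trust the ALPR for the plate number, but resolve the jurisdiction by looking the plate up in SAFER. A minimal sketch with a mock lookup table standing in for the SAFER query (the real SAFER interface is not shown here; the dictionary and function below are purely illustrative):

```python
# Mock stand-in for a SAFER query; the deployed system queries the
# FMCSA SAFER database by license plate number.
SAFER_MOCK = {"ABC1234": "TN", "XYZ9876": "KY"}

def resolve_jurisdiction(plate_number, alpr_guess):
    """Prefer the SAFER-derived jurisdiction; fall back to the ALPR
    camera's own jurisdiction guess when the plate is not found."""
    return SAFER_MOCK.get(plate_number, alpr_guess)
```

For example, `resolve_jurisdiction("ABC1234", "VA")` returns the SAFER jurisdiction "TN" rather than the camera's guess.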
- Recommendation 4—Although the weigh-in-motion device appears to be calibrated within the tolerances commonly used for those devices, it seems that axle 1 weight presents problems (i.e., its calibration factor differs from the other axles' calibration factors). It is recommended that the algorithm assigning weight data be revised, especially with regard to axle 1.
- Response 4—Upon completion of the installation, the project team performed an ASTM E1318 WIM calibration. The project team has since performed additional WIM calibrations, including a front axle weight correction, which is commonly used in WIM systems. The team has used the data provided by ORNL to adjust the first axle to improve the WIM performance.
- Recommendation 5—The system identified an additional axle-spacing measurement after the last axle of CMV vehicles. It is recommended that the algorithm assigning axle spacing be revised and adjusted to consider the number of axles identified for a given vehicle.
- Response 5—The project team logged into both Laurel County and Unicoi County sites and was unable to identify the problem being described.
Following the separate and independent ORNL review, the project team was given an opportunity to re-run some of its performance tests to potentially record a higher system performance. However, the project team believes that the ORNL recommendations reflected gaps in definitions and in the understanding of SRIS system operation, and that the ORNL analysis therefore presents differing performance numbers. The project team conducted the initial prescribed performance analysis with results consistent with the site conditions and equipment deployed. The performance numbers officially collected by the team meet the project goals and expectations for the system and have continued to do so in subsequent monitoring. The team is pleased with the results of this report and does not feel that a re-run is necessary.