FHWA’s 2013 Research Report on Digital Billboards (CEVMS) – Seriously Flawed


Three decades after the Federal Highway Administration’s (FHWA) 1980 study of the safety impacts of roadside billboards recommended further research, the agency initiated that research. The results were released on December 30, 2013, in a final report dated more than a year earlier.

The outdoor advertising industry quickly announced that the study results demonstrated no adverse safety linkage between these Commercial Electronic Variable Message Signs (CEVMS) and unsafe driver visual behavior. However, our detailed assessment of the FHWA study has identified numerous errors in the research and concludes that the agency’s reported findings cannot be supported by the study results. This brief overview highlights only some of these concerns.

Publication and Access

On December 30, 2013, FHWA placed three documents on its website: (1) a “Non-peer Review Draft Report,” (2) a “Summary of Peer Review Comments,” and (3) the “peer reviewed” final report itself. Curiously, the final report, dated September 2012, bears no FHWA report number or cover art, whereas the March 2011 “non-peer review draft report” bears report number FHWA-HEP-11-01. Someone searching online for this study is therefore taken to the draft rather than the final report.

A Peer Reviewed Report?

The peer reviewers of the draft raised serious concerns, and changes were made to the final report, presumably as a result. But the final report offers no indication that the peer reviewers were given the opportunity to review those changes, let alone to comment on whether their concerns had been resolved. Indeed, the final (September 2012) report contains serious flaws, including some potentially introduced by the changes made in response to the peer review. The apparent failure to close the loop with the peer reviewers, and the errors introduced after their review, raise the question: Is FHWA justified in its claim that the final report has actually been “peer reviewed”?

Can This Report Serve as Policy Guidance?

Interested parties and stakeholders nationwide and abroad anticipated that the FHWA report would help to resolve the ongoing public debate about whether CEVMS distract drivers’ attention from the driving task to a degree sufficient to cause safety concerns and, hence, justify restrictions on the “time, place, and manner” of their use on our roads. Government agencies throughout the U.S. enacted moratoriums on proposed billboard legislation pending the publication of the FHWA report.

Unfortunately, the FHWA report fails at its principal task. Rather than providing a justifiable direction for Federal policy regarding CEVMS, the report leaves State and local governments in limbo because of decisions, errors, and internal conflicts that call its findings into question.

Equipment Issues

FHWA’s methodology centered on the recording and analysis of eye glances made by volunteers who drove an instrumented vehicle on roads in two cities (Reading, PA and Richmond, VA); roads that contained a mix of digital and traditional billboards, on-premise signs, and roadway sections devoid of billboards (but not, significantly, of other signs). FHWA used a relatively new system for recording drivers’ eye glances. Other researchers using the same system have experienced substantial difficulties with its reliability, even in static, controlled laboratory settings. But FHWA’s use of this equipment in a moving vehicle on open roads was unproven, the researchers were not experienced with it, and the quality of the resulting data suffered. Weeks of “acceptance tests” failed to identify some of the problems that occurred once the actual study began, such as data overload that required the experimenter to “initiate new data files” while the participant driver stopped along the road.

Eye Glance Data Collection

Billboards may contain characters 48” tall or even larger. Using current signage guidelines for letter height and reading distance (1” of letter height per 30’ of viewing distance), a 48” letter could be read by the average driver from a distance of 1,440 ft., more than one-quarter mile. And, as a vehicle gets closer to a billboard, the interested driver can continue to look at it with a head turn until the vehicle is in line with the sign. But the FHWA study eliminated from analysis any eye glances made more than 960 ft. upstream of each billboard, as well as any glances made as a driver got close to a billboard on the right side of the road. Worse, FHWA’s description of these eye-glance cutoff points changed between the draft and final reports, raising a key question: how many eye glances toward billboards, and at what distances, were made but never analyzed or reported? The report is silent on this issue.
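The legibility arithmetic above can be sketched in a few lines of Python (the 1” per 30’ guideline and the 960 ft. cutoff come from the text; the function name is illustrative):

```python
# Signage guideline cited above: roughly 30 feet of viewing distance
# per 1 inch of letter height.
FEET_PER_INCH_OF_LETTER_HEIGHT = 30

def legibility_distance_ft(letter_height_in):
    """Approximate distance (ft) at which a letter of the given height (in) is readable."""
    return letter_height_in * FEET_PER_INCH_OF_LETTER_HEIGHT

reading_distance = legibility_distance_ft(48)  # 48" characters -> 1440 ft
quarter_mile_ft = 5280 / 4                     # 1320 ft
fhwa_cutoff_ft = 960                           # glances farther upstream were dropped

print(reading_distance)                    # 1440
print(reading_distance > quarter_mile_ft)  # True: more than a quarter mile
print(reading_distance - fhwa_cutoff_ft)   # 480 ft of readable approach excluded
```

In other words, a 48” message is readable for roughly 480 ft. of approach before the study’s 960 ft. cutoff even begins counting glances.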

Billboard Luminance

The “brightness” (luminance) of digital billboards is one of the two greatest causes for complaint about their distraction potential. (The other is dwell time – how quickly the message changes – but FHWA did not study this important variable.) Researchers have measured the luminance of traditional and digital billboards in several jurisdictions across at least five States, and their findings are quite consistent. Given that the average nighttime luminance for digital billboards in other studies is 16 times higher than the values measured by FHWA, we must ask whether these differences were due to the agency’s measurement approach (which was quite different from the standard approach, and which puzzled the lighting experts among our peer reviewers), or the fact that the CEVMS chosen for the FHWA study were simply that much less bright than typical digital billboards. To the degree that the billboards studied by FHWA were dramatically less bright than those studied elsewhere, they would have attracted less driver attention, and over shorter distances and times.

Number of Billboards Actually Studied

Throughout the study, including after all the data had been collected, cognizant FHWA staff members gave presentations at various professional meetings. They consistently said that 5-7 CEVMS and 5-7 traditional billboards would be studied on each of two routes in each of the two cities. Thus, the total number of billboards studied was stated to be 40-56. But in the draft report the agency reported data for only 20 CEVMS and 10 traditional billboards, and in the final report the total was reduced to 4 CEVMS and 4 traditional signs in each of the two cities, a total of 16 signs. What caused the 20 CEVMS reported in the draft to shrink to 8 in the final? Why did the agency fall short of its announced number of studied billboards by somewhere between 60 and 71 percent? And why did the agency feel that no explanation was needed for these critical data losses?
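A quick arithmetic check of the shortfall, using the announced range (40-56 signs) and the final count (16 signs) reported above:

```python
# Announced: 5-7 CEVMS plus 5-7 traditional billboards on each of
# two routes in each of two cities -> 40 to 56 signs in total.
announced_low, announced_high = 40, 56

# Final report: 4 CEVMS + 4 traditional signs in each of two cities.
final_total = (4 + 4) * 2  # 16

shortfall_vs_low = 1 - final_total / announced_low    # 0.60
shortfall_vs_high = 1 - final_total / announced_high  # ~0.714

print(f"{shortfall_vs_low:.0%} to {shortfall_vs_high:.0%}")  # 60% to 71%
```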

The draft report’s peer reviewers expressed great concern about the reported durations of eye glances, pointing out that the measured values were too brief to be credible. Although FHWA never said what it did to address these concerns, the final report shows that the eye glance data were reanalyzed, substituting an automated data reduction system for the manual system used originally. If this reanalysis of existing data was all that was done to address the issue (and it is all that was reported), why were the data from 64% of the CEVMS studied in one city, and 55% in the other, then eliminated from analysis? Why were the data for the majority of the CEVMS simply purged from the final report without explanation?

Troubling Discrepancies Without Critical Report Details

The central focus of the FHWA study was on “target” billboards – both digital and traditional. Important “measured variables” included size, location, and setback from the road. Both the draft and final reports documented these characteristics in tabular form. It is of concern, therefore, that key measurements that defined these characteristics changed from the draft to the final report, when it is obvious that the actual billboards did not change. The following brief examples merely illustrate the very large problem.

  • Some billboards appeared in the final report that were not in the draft; others that were present in the draft disappeared in the final.
  • Billboard setback from the road changed dramatically from the draft to the final report.
  • At least two billboards moved from one side of the road to the other between the draft and final reports.
  • The length of “data collection zones” (DCZ) changed significantly. In fact, not a single billboard DCZ measurement shown in the draft agrees with the equivalent distance in the final report.

Since the billboards presumably did not move, and since FHWA reported that no new data was collected after the draft report was completed, either the measurements reported in the draft or those in the final, or both, were incorrect. And given that each of these four billboard attributes can have a direct impact on driver eye glance behavior, the study’s reported findings are indefensible.


A review of the FHWA documents will lead the knowledgeable reader to ask many more questions about the decisions made and the interpretations reached by the report’s authors, and ultimately, the veracity of the final report. This brief review can barely scratch the surface of our concerns. The reader is therefore referred to our full report about the FHWA study and the questions and concerns that it has raised. Our assessment has been independently peer reviewed by 14 international experts in this field, and can be found at https://www.veridiangroup.com/.

The views and opinions expressed in this article are those of the author and do not necessarily reflect the official policy or position of The Eno Center for Transportation.
