Guest Op-Ed: Do We Need to Upgrade from “Informed Consent” to “Informed Risk” with Autonomous Vehicles?

Recent events demonstrate what a fascinating and eye-opening time we are living in, particularly when it comes to the continued integration of technology into our lives: the testimony of Facebook’s CEO on Capitol Hill about how consumer-provided data is used for profit, the ransomware attack that significantly disrupted City of Atlanta services (with economic impacts currently estimated at nearly $3 million), and the first known fatality involving a self-driving vehicle.

New innovations are quickly making their way into every aspect of our lives (and commutes) – from personal digital assistants like “Alexa” and “Siri,” to apps that bring services literally to our fingertips, to the ongoing deployment of self-driving vehicles. But with the benefits of these technologies, most tangibly in the form of convenience, come risks that are often little understood or, more accurately, ignored.

When using technology, it appears ignorance is indeed bliss. One reason is the infamous “Terms of Use.” Length alone deters reading: cutting and pasting the Terms of Use for Uber yields 12 pages of stimulating legalese, and LimeBike’s yields 16, plus separate terms governing privacy and the use of our data as riders. Beyond that, knowing you have little (or, more realistically, no) negotiating power provides little incentive to better understand what you are agreeing to when you quickly click “accept” for access to apps like Facebook or Twitter, which let you communicate with millions of people rather effortlessly, or Lyft and Uber, which provide on-demand mobility.

Normally, when entering into an agreement that involves waiving rights, one would argue that “informed consent” is needed for a binding agreement to be created. But in this day and age, when few actually read the Terms of Use (and even fewer read updates to them), there does not appear to be a meeting of the minds: consumers do not truly understand the conditions that come with the convenience of a technology like on-demand mobility or dockless bikeshare.

The danger is that consumers are consenting to conditions they have neither read nor understood – a potential impediment to the continued adoption of new and larger innovations like self-driving vehicles, which are anticipated to collect large amounts of personal data from a “driver/operator/rider” (we are still working on those definitions). As the ongoing discussion around Facebook’s use of data for profit shows (disturbingly, to the surprise of many members of Congress), failing to clearly disclose risks can lead to boycotts of an innovation and a sentiment of mistrust toward technology companies.

For autonomous vehicles, the issues of transparency and informed consent are playing out through the recent fatality involving an Uber autonomous vehicle, at least two deaths to date in cars with Tesla “Autopilot” engaged, and the most recent self-driving vehicle accident involving Waymo. These incidents call into question companies’ obligation to accurately disclose to consumers the true capabilities of a vehicle touted as “autonomous”: Does the vehicle require a person to be ready to take back control if the autonomous system disengages or fails, or can a person enjoy a fully automated experience, with the vehicle truly able to monitor and handle its complete operation? This distinction will also be an important part of determining future liability for accidents involving autonomous vehicles.

If mistrust grows around companies’ use of data, or if potential claims for an injury or death involving a privately operated self-driving fleet vehicle are found to be “unknowingly” limited, we may experience slowed adoption of new innovations – or, worse yet, technologies that offer potential societal benefits, such as enhanced mobility for underserved communities, may never come to fruition.

A potential solution is a movement by companies toward Terms of Use that clearly and succinctly inform consumers upfront – rather than in the middle of cumbersome terms that few read, even when capitalized – of any rights being waived, how data will be collected and used, and any potential risks of a technology (similar to the requirements around advertising pharmaceutical drugs). A welcome example is the new “Data & Privacy” screen that appears with the latest Apple iPhone update; however, a consumer still needs to click to “[s]ee how your data is managed…”

While many will quickly discount an informed-risk approach to technology as burdensome, cumbersome, and an unneeded impediment to innovation, hitting “refresh” on the current Terms of Use model allows consumers to better understand and weigh the risks against the benefits of using a technology. That promotes more trust, greater long-term consumer adoption, and likely more loyalty (and less legal exposure for data-focused companies) when we have our next “Facebook moment” and when navigating the speedbumps expected with a new, exciting, and evolving technology like self-driving vehicles.

Gregory Rodriguez is of counsel with Best Best & Krieger LLP. Based in the firm’s Washington, D.C. office, he provides strategic information, policy insight and legal assistance to plan for and incorporate emerging transportation technologies like automated vehicles into communities. He can be followed on Twitter @smartertranspo.

The views expressed above are those of the author and do not necessarily reflect the views of the Eno Center for Transportation.
