Self-Driving Cars: Just Be Reasonable

I somehow haven’t found the time to write a blog post about this article, so here’s at least a clip of the super original argument that the author makes. I’ve gone in and emphasized the pieces that I want to make sure you take note of.

But really, go read the article, it’s worth it.

Self-Driving Cars Will Be Ready Before Our Laws Are [IEEE, Nathan A. Greenblatt, January 19, 2016]

“Putting autonomous vehicles on the road isn’t just a matter of fine-tuning the technology.”

You Wouldn’t Want To Be Called “Defective”

The solution to the lawsuit problem is actually pretty simple. To level the playing field between human drivers and computer drivers, we should simply treat them equally. Instead of applying design-defect laws to computer drivers, use ordinary negligence laws. (emphasis added) That is, a computer driver should be held liable only if a human driver who took the same actions in the same circumstances would be held liable. The circumstances include the position and velocity of the vehicles, weather conditions, and so on. The “mind” of the computer driver need not be examined any more than a human’s mind should be. The robo-driver’s private “thoughts” (in the form of computer code) need not be parsed. Only its conduct need be considered. (emphasis added).

That approach follows basic principles of negligence law. As Dobbs’s Law of Torts (2nd ed.) explains: “A bad state of mind is neither necessary nor sufficient to show negligence; conduct is everything. (emphasis added). One who drives at a dangerous speed is negligent even if he is not aware of his speed and is using his best efforts to drive carefully. (emphasis added). Conversely, a person who drives without the slightest care for the safety of others is not negligent unless he drives in some way that is unreasonably risky. State of mind, including knowledge and belief, may motivate or shape conduct, but it is not in itself an actionable tort”—that is, wrongful conduct.

For example, a computer driver that runs a red light and causes an accident would be found liable. Damages imposed on the carmaker (which is responsible for the computer driver’s actions) would be equal to the damages that would be imposed on a human driver. (emphasis added). Litigation costs would be similar, and the high costs of a design-defect suit could be avoided. The carmaker would still have a financial incentive to improve safety. In fact, the manufacturer would have greater incentives than with a human-driven vehicle, because of publicity concerns. Correction of systemic problems could be implemented via a predictable mechanism, such as a mandatory crash-review program with government oversight, without excessive risk to the manufacturer.
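Just for fun, here’s a toy sketch of what “conduct is everything” might look like if you squint at it as code. Nothing below comes from the article — the fields, the thresholds, and the `is_negligent` function are all invented stand-ins for the “reasonable driver” standard — but it captures the idea that you only ever judge observable conduct in the circumstances, never the driver’s internal state (human or source code).

```python
from dataclasses import dataclass

# Hypothetical sketch only: judge liability purely from observable conduct
# in the circumstances, never from the driver's internal state.
# All field names and thresholds here are made up for illustration.

@dataclass
class Conduct:
    speed_mph: float          # observed speed
    speed_limit_mph: float    # posted limit
    ran_red_light: bool       # observable conduct at the intersection
    weather: str              # part of the "circumstances"

def is_negligent(conduct: Conduct) -> bool:
    """Return True if this conduct would be negligent for ANY driver,
    human or computer -- the same standard applied to both."""
    if conduct.ran_red_light:
        return True
    # A reasonable driver slows down in bad weather; the fixed 10 mph
    # margin is an arbitrary stand-in for the "reasonable person" standard.
    margin = -10 if conduct.weather == "icy" else 0
    return conduct.speed_mph > conduct.speed_limit_mph + margin

# The same check applies regardless of who (or what) was driving:
print(is_negligent(Conduct(45, 35, False, "clear")))  # True: speeding
print(is_negligent(Conduct(30, 35, True, "clear")))   # True: ran the light
print(is_negligent(Conduct(30, 35, False, "icy")))    # True: too fast for icy roads
```

Notice what never appears in that check: the perception stack, the planner, the training data. That’s the whole point of swapping design-defect analysis for ordinary negligence.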

Drive It Home

“50 years from now, in a world with no traffic accidents, people will look back and conclude that human drivers were a design defect.” Someone had better make sure that the self-driving cars’ source code includes the Three Laws of Robotics.

And then here’s a piece (also a valid part of the argument) that I’ll toss in because it’s cute/drives home the point almost too much (pun intended):

“Computer drivers can take far more rigorous driver tests than the 20-minute road tests offered by U.S. departments of motor vehicles today. Recorded or virtual information could test a computer driver’s ability to safely drive, say, a million miles before handing over a license.” (emphasis added).
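And since the article only gestures at “recorded or virtual information,” here’s an equally hypothetical back-of-the-envelope version of a million-mile virtual road test. Every number and name below is made up for illustration; a real test would replay recorded or simulated scenarios, not coin flips.

```python
import random

def drive_one_mile(failure_rate_per_mile: float) -> bool:
    """Return True if this simulated mile was driven safely.
    (Toy incident model -- a stand-in for replaying real scenarios.)"""
    return random.random() > failure_rate_per_mile

def virtual_road_test(miles: int = 1_000_000,
                      failure_rate_per_mile: float = 1e-7,
                      max_incidents: int = 1) -> bool:
    """Grant a 'license' only if the computer driver stays under the
    incident budget across the whole virtual test."""
    incidents = sum(not drive_one_mile(failure_rate_per_mile) for _ in range(miles))
    return incidents <= max_incidents

if __name__ == "__main__":
    print("License granted" if virtual_road_test() else "Back to the simulator")
```

The coin-flip model is obviously not the point; the point is that the test budget can be a million miles instead of twenty minutes.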

Debug This!
