Will regulators really hold back self-driving cars?

A prototype of a driverless car is seen in a photograph provided by Google in Mountain View.

Yesterday, Google pivoted on self-driving cars. Yes, I know we don’t normally describe established firms as ‘pivoting’, but the notion applies when the same core idea (in this case, autonomous vehicles) is reapplied to a different customer set. For the self-driving car, the pivot moves from retrofitting existing cars to Google building its own new and slower vehicle, with the customers being not tech-savvy lead users but those who have trouble with cars now: namely, the elderly and people with disabilities that make independent driving difficult or impossible.

The new direction is fascinating for all manner of reasons, but what I want to focus on here are the regulatory issues. There is a theme out there (and by ‘out there’ I mean amongst people commenting on technology and economics on the Internet) that regulators are standing in the way of innovation. This is obvious when it comes to things like Uber, but it is also a constant theme in conversations regarding self-driving cars. In particular, will regulators approve them, and who will sort out the liability issues?

Megan McArdle recommended just killing lots of lawyers to make driverless cars a reality:

Even if truly driverless cars reduce accidents, they will probably increase overall liability, because a legal claim against a car company is worth a lot more than a legal claim against your typical middle-class person with a handful of assets that are protected in bankruptcy and a few thousand in the checking account. You can fix the liability problem by requiring drivers to stay attentive at the wheel at all times (which shifts the liability to them in the event of an accident), but what’s the point of a driverless car that I have to pseudo-drive? Sitting at the wheel of a driverless car and not driving it sounds even more boring than actually piloting my automobile through Washington traffic.

Her proposal was to limit the liability of car companies, but I must admit that seems like a bad idea. The one thing I’d like to see is car companies having the optimal incentives to ensure safety, not something suboptimal. To be sure, potential damages may be well above their socially optimal level now, but it is not obvious where they should be set to get the balance right. Indeed, as McArdle notes, slowing cars down to golf-cart speeds of 25 mph, as Google appears to be doing, does change the picture, since it substantially reduces the probability of accidents. In which case, it seems the liability system is working as intended: it isn’t stopping driverless cars but promoting their development in a healthy direction.

For this reason I have started to wonder whether the ‘regulation kills innovation’ theme is far more nuanced than many have been thinking. Why do we think that regulation will hold back driverless cars rather than actually promote them?

When regulation gets in the way, it is often because there is an incumbent group that might be harmed by new innovation and so uses new or existing regulations to stop it. This is the case with Uber, and many argued it was the case with the Segway too. To be sure, politically entrenched groups can be a barrier, but what counters them is the emergence of groups pushing for the innovation. That’s Uber’s plan: make sure enough people love Uber to politically counter the entrenched groups. The Segway never got that counter-movement because there wasn’t a group that loved it enough.

For the driverless car, Google has moved to recruit a very politically important set of people: those over 65 and their children. If that group loves driverless cars, then politicians are going to be loath to stand in their way.

But there is more to the picture than that. Google believes in data. By putting driverless cars in the most dangerous driving situations first (freeway driving with an urban mix), it was also putting those cars in precisely the situations where the liability issues are most difficult. Focus instead on local driving over short distances at slow speeds, with a group not known for excellent driving skills, and all of a sudden you have a safe use case. Google will be able to trial this in an area (probably by literally giving the cars away) and, in the process, gather data showing that driverless cars are indeed safer in that environment. In so doing, it will arm its new constituency with data.

We could go further. If driverless cars are safer, then insurance companies and others are going to switch to being promoters of those cars. Discounts will be applied to those who have them and, eventually, penalties to those who don’t. Insurance rates will fall in localities that have driverless cars, and a virtuous circle will be created. To be sure, ambulance-chasing lawyers will start to have a problem, although, to be fair, they will now have a driverless car to chase ambulances for them, making it a wash.

Consequently, while we can’t know for sure, it would not surprise me if regulators and politicians turn out to be Google’s friend in the driverless car business. There are certainly enough reasons here to suppose that we shouldn’t just default to the notion that they are the enemy.

4 Replies to “Will regulators really hold back self-driving cars?”

  1. Megan’s point is that differential liability causes inefficiencies and results in net harm compared to a situation of uniform liability.

    Imagine that tomorrow Google invents a self-driving car with an accident rate half the current rate, potentially saving 17,000 people per year. Due to differential liability, Google estimates it will pay 10x as much for each accident as a private driver (say $10M vs $1M).
    In this scenario, actual drivers pay $34B in liability insurance each year, which is just built into the cost of driving. However, if Google were to market this car, it would pay $170B in liability each year.
    Presumably, $170B is far too much: to cover that liability, Google would have to charge drivers far too much.

    The net result is that deaths stay at their current rate (34K/year) rather than dropping to 17K/year. Thus a technological innovation that could save lives is prevented from being sold because of the increased liability associated with self-driving cars, despite their being twice as safe.

    Obviously these numbers are fictional, just to demonstrate the theory (a quick sketch of the arithmetic follows this comment), but the point is very real. We should be encouraging any sort of innovation which results in fewer accidents and deaths. Charging more for liability based on business model makes no sense, as it makes innovation less likely.
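
    A minimal sketch of the arithmetic in this comment, using only its fictional figures (34,000 accidents a year, a $1M claim against a private driver, a $10M claim against a car company, accidents halved by automation); nothing here is real data:

    ```python
    # Illustrative only: the fictional figures from the comment above, not real data.
    accidents_now = 34_000             # accidents per year with human drivers (assumed)
    accidents_av = accidents_now // 2  # self-driving cars assumed twice as safe
    payout_driver = 1_000_000          # assumed legal claim against a private driver ($1M)
    payout_company = 10_000_000        # assumed legal claim against a car company ($10M)

    liability_now = accidents_now * payout_driver  # $34B, spread across many drivers
    liability_av = accidents_av * payout_company   # $170B, borne by the manufacturer

    print(f"Human drivers:     ${liability_now / 1e9:.0f}B per year")
    print(f"Self-driving cars: ${liability_av / 1e9:.0f}B per year")
    # Accidents halve, yet total liability roughly quintuples, which is the
    # distortion the comment attributes to differential liability.
    ```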

  2. Actually, I don’t think I am. She was arguing that, relative to the expected liability of drivers, the expected liability of corporations is higher. She is correct. But what I am saying is that I am not sure the expected liability of corporations, taken on its own, is too high. When it comes to driverless cars, corporations should be liable, and I want them to have good incentives. The argument is that driverless cars aren’t being developed at the moment because drivers are liable but effectively judgment-proof. That could be true, but I don’t want to limit corporate liability to resolve that issue.
