Yesterday, Google pivoted on self-driving cars. Yes, I know we don’t normally describe established firms as ‘pivoting’, but the notion applies when the same core idea (in this case, autonomous vehicles) is reapplied to a different customer set. For the self-driving car, the move was from retrofitting existing cars to Google building its own new and slower vehicle, with the customers being not tech-savvy lead users but those who have trouble with cars now; namely, the elderly and those with disabilities that make independent driving difficult or impossible.
The new direction is fascinating for all manner of reasons, but what I want to focus on here are the regulatory issues. There is a theme out there (and by ‘out there’ I mean amongst people commenting on technology and economics on the Internet) that regulators are standing in the way of innovation. This is obvious when it comes to things like Uber, but it is a constant theme in conversations regarding self-driving cars. In particular, will regulators approve them, and who will sort out the liability issues that come with them?
Megan McArdle recommended just killing lots of lawyers to make driverless cars a reality.
Even if truly driverless cars reduce accidents, they will probably increase overall liability, because a legal claim against a car company is worth a lot more than a legal claim against your typical middle-class person with a handful of assets that are protected in bankruptcy and a few thousand in the checking account. You can fix the liability problem by requiring drivers to stay attentive at the wheel at all times (which shifts the liability to them in the event of an accident), but what’s the point of a driverless car that I have to pseudo-drive? Sitting at the wheel of a driverless car and not driving it sounds even more boring than actually piloting my automobile through Washington traffic.
Her proposal was to limit the liability of car companies, but I must admit that seems like a bad idea. What I’d like to see is car companies facing the optimal incentives to ensure safety, not something suboptimal. To be sure, potential damages may be well above their socially optimal level now, but it is not obvious where they should be set to get the balance right. As McArdle notes, slowing cars down to golf-cart speeds of 25 mph, as Google appears to be doing, changes the picture because it reduces the probability of accidents substantially. In which case, it seems that the liability system is working as intended: it isn’t stopping driverless cars but promoting their development in a healthy direction.
For this reason I have started to wonder whether the ‘regulation kills innovation’ theme is far more nuanced than many have been thinking. Why do we think that regulation will hold back driverless cars rather than actually promote them?
When regulation gets in the way, it often does so because there is an incumbent group who might be harmed by new innovation and so uses new or existing regulations to stop it. This was the case with Uber, and many argued it was the case with the Segway too. To be sure, politically entrenched groups can be a barrier, but what counters them is the emergence of groups pushing for the innovation. That is Uber’s plan: make sure enough people love Uber to politically counter the entrenched groups. The Segway never got that counter-movement because there wasn’t a group that loved it enough.
For the driverless car, Google has moved to recruit a very politically important set of people: those over 65 and their children. If that group loves driverless cars, then politicians are going to be loath to stand in their way.
But there is more to the picture than that. Google believes in data. By putting driverless cars in the most dangerous driving situations first (freeway driving with an urban mix), it was also putting those cars in precisely the situations where the liability issues are most difficult. Focus instead on local driving for short distances at slow speeds with a group not known for their excellent driving skills and, all of a sudden, you have a safe use case. Google will be able to trial this in an area, probably by literally giving the cars away, and, in the process, will gather data showing that driverless cars are indeed safer in that environment. In so doing, it will arm its new constituency with data.
We could go further. If driverless cars are safer, then insurance companies and others are going to switch to being promoters of those cars. Discounts will be applied to those who have them and, eventually, penalties to those who don’t. Insurance rates will fall in localities that have driverless cars, and a virtuous circle will be created. To be sure, ambulance-chasing lawyers will start to have a problem, although, to be fair, they will now have a driverless car to chase ambulances for them; making it a wash.
Consequently, while we can’t know for sure, it would not surprise me if regulators and politicians turn out to be Google’s friends in the driverless car business. There are certainly enough reasons here to suppose that we shouldn’t simply default to the notion that they are the enemy.