Reading12: We should be scrutinizing policy, not the tech

Self-driving cars would be very convenient. Wouldn’t it be great to hop into an autonomous car and surf the web, play a game, read a book, or do any number of other things besides staring at the road on the way to your destination? More to the point, there is a lot of money to be made in providing this service, which may be the more honest reason it’s happening. Ultimately, though, this is all beside the point. Should we make autonomous cars?

The discussion around whether we should develop or allow autonomous cars ultimately comes down to safety. Over 94 percent of the tens of thousands of annual road fatalities are caused by driver error. Could autonomous vehicles prevent most, if not all, of these deaths? After all, they shouldn’t get tired, or distracted, or angry at other drivers.

I tend to agree. Beyond thinking that autonomous cars would be extremely convenient and very cool, I think they could make the roads much safer. I do believe we can reach a point where autonomous cars are considerably safer than human drivers, and at that point widespread adoption would be a real improvement in road safety.

However, we’re certainly not there yet. I wouldn’t feel comfortable riding in a fully autonomous car at anything approaching highway speed; not because I don’t think autonomous cars can be safe, but because I’m not convinced they are safe enough yet.

There is a great deal more work to do on autonomous cars. How far should we go with them? The crash in Tempe, Arizona, in which an Uber autonomous car struck and killed a pedestrian, calls a lot of this into question. Should we test on public roads? Should we stop this altogether? Should a computer be allowed to make what can become life-and-death decisions?

I think that in light of the Tempe crash, we should be questioning the prudence of Uber’s design and testing decisions rather than the algorithmic capability of the car.

I want to preface this by saying that I am not trying to rationalize away the loss of life. Any loss of human life is a tragedy. But the capability of the autonomous car is not where our scrutiny belongs. Yes, it was dark, and the woman was crossing the road at a place where pedestrians would not be expected. But – and these next two facts are, I think, the crucial ones – (a) the vehicle’s system attempted to initiate an emergency brake prior to the impact, and (b) Uber had disabled this capability in favor of a smoother ride.

The vehicle attempted to initiate an emergency brake 1.3 seconds before impact. Traveling at 39 miles per hour, the car covers about 74 feet in that time. I can’t say with confidence that the vehicle could have stopped fully in that distance (the published 70-to-0 mph stopping distance for that car is roughly 185 to 190 feet), but the crash would almost certainly have been far less than fatal. The autonomous system correctly detected that it needed to perform an emergency stop. And with a pedestrian appearing out of the darkness, would we expect a human to have done better? In the released video, the woman appears suddenly out of the dark; identifying the need to brake 1.3 seconds out is better than I’d expect of most human drivers.
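To put rough numbers on that, here is a quick back-of-the-envelope sketch. The deceleration and latency values are my own assumptions for illustration, not figures from the investigation:

```python
# Rough check on the 1.3-second window. Assumes constant deceleration on dry
# pavement; the ~0.7 g (7 m/s^2) braking figure and the latency values are assumptions.
MPH_TO_MPS = 0.44704
M_TO_FT = 3.28084

def impact_speed_mph(v0_mph, warning_s, decel_mps2, latency_s=0.0):
    """Speed at the pedestrian's position if braking begins latency_s into the warning window."""
    v0 = v0_mph * MPH_TO_MPS
    total = v0 * warning_s              # distance to the pedestrian at constant speed
    braking = total - v0 * latency_s    # distance left once braking actually starts
    v_sq = v0 ** 2 - 2 * decel_mps2 * braking
    return max(v_sq, 0.0) ** 0.5 / MPH_TO_MPS

print(39 * MPH_TO_MPS * 1.3 * M_TO_FT)      # ~74.3 ft covered in 1.3 s at 39 mph
print(impact_speed_mph(39, 1.3, 7.0))       # 0.0 -- hard braking stops just short of impact
print(impact_speed_mph(39, 1.3, 7.0, 0.5))  # ~23 mph even with half a second of actuation delay
```

Under those assumptions the car either stops entirely or hits at well under half its original speed, which is why I think “much less than fatal” is a fair characterization.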

If the autonomous system had been allowed to carry out the stop, the woman likely would not have died. But Uber “had disabled the Volvo’s emergency braking mechanism, hoping to avoid a herky-jerky ride.” Uber’s position was that it was the human operator’s responsibility to intervene – the same operator who was also expected to be relaying data and working on other tasks during the ride.

This is grossly irresponsible. It should be obvious that you can’t rely on a human driver to take over for an otherwise autonomous vehicle in emergency situations. If a human is not actively driving, even the best-intentioned operator will get distracted, sleepy, or simply lack the focus for the necessary split-second reactions. So why disable the autonomous emergency brake? Even with a full-time emergency observer in the seat, why disable it? Another layer of redundancy could never hurt, and I don’t buy that avoiding a “herky-jerky” ride is a good enough reason to remove one.

I don’t know exactly how this should be regulated, but companies like Uber need a stronger focus on safety for everyone involved; things like disabling emergency braking should be unthinkable. More robust safety features and continued work on these cars will make the road safer for everyone, pedestrians and drivers alike. Further, I don’t think there’s as much of a trolley problem concern as many like to posit.

These “trolley problems” almost never happen on the road. And when they do, they are most likely the result of earlier irresponsible driving that, in theory, an autonomous car would avoid. If you have to choose between hitting another car at highway speed and hitting pedestrians, couldn’t that situation have been avoided by following less closely or not speeding? There are systems – speed limits, road signage, etc. – designed to keep the roads safe and prevent these kinds of dangerous situations. To me, the more compelling challenge for autonomous cars is that they rely on infrastructure that may not always be there: road markings and signage can be absent, damaged, or obscured by weather, and, at least so far, computers have a hard time improvising.

I’ll cut this off here – I have more thoughts on the trolley problem/ethics of autonomous cars (mostly about why people are focusing on the wrong thing) – but this is getting long. I can spell it out more in the in-class discussion.

In summary, I believe that autonomous cars are a promising way to make travel more convenient and safer for everyone; we’re just not there yet. We need to be more responsible in our testing and think long and hard about how we roll out the technology, but we shouldn’t let poor safety decisions lead us to give up on it.