On my way to the hospital yesterday morning, I came across a stop light that was not working. It was blinking red at all four stops, and cars were paused, waiting their turn. It was worse than a stop sign because there was a dedicated left turn lane. People hesitated, waved others through, edged out to let other drivers know they thought it was their turn. We all managed, with a combination of gestures, car maneuvers, and eye contact, to communicate our way through an unexpected obstacle. So why mention this? Because I think that technologists are underestimating how social a problem driving actually is.
I am reading AI Snake Oil (review coming in a couple of weeks) and this attitude jumped out at me. The authors think that self-driving is an eventually solvable problem in a way that certain prediction problems are not. They believe that such "social problems", problems where the interactions of human beings are at their core, cannot be solved by AI since there are too many nuances, too many edge cases, too many moral issues with shaping people's lives via flawed algorithms. And I agree with that. One of the reasons I write this newsletter is that so many people take Silicon Valley bullshit at face value in areas that are too important to leave to, well, bullshit artists. But I also think that tech people underestimate the degree to which driving is a social activity.
Everybody who has driven for any appreciable amount of time has run into the situation I outlined above. There are innumerable similar circumstances. Who should go when a UPS truck is blocking the lane? How do you know when it's okay to pass a wobbling bicyclist on a winding road? How do you maneuver when the road has water on it? Who merges in what order when there is an emergency lane closure? The rules of the road break down and humans have to pick up the slack. They have to communicate, to become social creatures again.
Driving fits the authors' definition of a social problem as well. It is morally dubious to leave the health and safety of other drivers to the flawed algorithms of self-driving cars. And we have already seen that there are many, many times when nuance and edge cases dominate the driving experience. I can imagine a world in which that is not the case, but it would require remodeling our entire road system to be friendly to self-driving cars. And even then, accidents and acts of God would still produce situations where the rules break down. Frankly, if we are going to rebuild our roads to serve machines, buses, light rail, trolleys, and trains would likely be a more efficient choice in most places.
I have a lot of problems with self-driving cars. I do not believe that we have had a serious conversation about the fact that their training is happening amongst millions of people who have not consented to be part of the experiment and who have never had a clear explanation of the risks. I am concerned about how the economics of self-driving cars would affect disabled people. Many people need help getting in and out of cars and/or handling their luggage. Who is going to do that if we fire all the drivers and replace them with automated cabs? But my largest concern is that too many of the people involved seem, at least publicly, not to understand that driving is not a rules-based endeavor. It is a form of communication, or, at least, it can be. The times when it is are the most dangerous, most social, times behind the wheel.
Too often, technologists think of problems as technology problems. Most of the time, they are not. They are political and social problems. Trying to reduce them to ones and zeros just leads to mistakes and poorly designed "solutions" that can do more harm than good. Bluntly, anyone who believes that a technology issue touching human beings is not a social problem probably should not be trusted anywhere near that problem.
The idea that things like self-driving cars are solvable problems because they are not deeply social is bullshit. And we deserve better than bullshit from people who are experimenting in human spaces.