The Trolley Problem and Self-Driving Vehicles – Article by B.J. Murphy
B.J. Murphy
One of the most popular discussions in the field of technology today is that of self-driving vehicles. It’s a topic that inspires both optimism and fear, from the elimination of car-related fatalities to the elimination of millions of jobs. I usually stand on the optimistic side of the argument, but I also understand the fear.
After all:
- According to the Bureau of Labor Statistics (BLS), there were nearly 1.8 million heavy-truck and tractor-trailer drivers in 2014, with employment projected to grow about 5% over the 2014–2024 decade, putting the figure on track for roughly 1.9 million.
- There were around 1.33 million delivery truck drivers in 2014, with projected growth of about 4% over the same decade, or roughly 1.4 million.
- There were around 233,700 taxi drivers and chauffeurs in 2014, with projected growth of about 13% over the same decade, or roughly 265,000.
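Summing those projections gives a sense of the scale. Here is that arithmetic as a minimal Python sketch; the job labels and the decision to apply each decade projection as a single one-time increase are my own simplifications, not BLS methodology:

```python
# Back-of-the-envelope tally of the driving jobs discussed above,
# starting from the 2014 BLS baselines. Each decade growth projection
# is applied as a simple one-time increase (an illustrative shortcut).

drivers_2014 = {
    "heavy truck / tractor-trailer": (1_800_000, 0.05),
    "delivery truck":                (1_330_000, 0.04),
    "taxi / chauffeur":              (  233_700, 0.13),
}

total = 0
for job, (count, decade_growth) in drivers_2014.items():
    projected = count * (1 + decade_growth)
    total += projected
    print(f"{job}: ~{projected:,.0f}")

print(f"Total: ~{total:,.0f}")  # roughly 3.5 million driving jobs
```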
In other words, with the full mobilization of self-driving vehicles, we’re looking at somewhere on the order of 3.5 to 4 million jobs being automated in the coming years, thus no longer requiring human labor. This particular risk, however, isn’t what I’m currently focused on. The main focus of this article is what is known as the “trolley problem” – a thought experiment in ethics that has since been rehashed into a criticism of self-driving vehicles.
Let me first explain what the “trolley problem” entails, and then I’ll explain how it’s being used today. The “trolley problem” goes like this:
There is a runaway trolley barreling down the railway tracks. Ahead, on the tracks, there are five people tied up and unable to move. The trolley is headed straight for them. You are standing some distance off in the train yard, next to a lever. If you pull this lever, the trolley will switch to a different set of tracks. However, you notice that there is one person on the side track. You have two options:
- Do nothing, and the trolley kills the five people on the main track.
- Pull the lever, diverting the trolley onto the side track where it will kill one person.
Which is the most ethical choice?
You’ve likely heard this question in various incarnations, e.g. the so-called “psychopath problem,” whereby you must choose between pushing a fat man off a bridge, using his weight to stop a trolley barreling towards a group of children, or letting the fat man live, with the children dying as a result of your inaction.
Today, however, the latest incarnation of the “trolley problem” is targeting self-driving vehicles. The hypothetical scenario goes something like this:
You’re riding inside a self-driving vehicle on a busy road or highway. Ahead, someone is trying to walk across the road. The self-driving vehicle becomes aware of the person, but it is left with only two possible decisions, and its choice will ultimately determine who lives and who dies:
- Do nothing, in which case the vehicle kills the person crossing the road but keeps you (the passenger) safe; or
- Take measures to avoid the person crossing the road, killing you (the passenger) instead.
Should a self-driving vehicle be given the power to make such an ethical choice?
This hypothetical scenario is usually offered as a baseline argument for why we shouldn’t trust self-driving vehicles, no matter what benefits arise from their proliferation. The problem I have with this scenario, and thus with using a philosophical argument as simplistic as the “trolley problem,” is that it completely ignores the complexity of what a self-driving transportation system actually entails in terms of both safety and efficiency. If anything, self-driving vehicles are the perfect solution to the “trolley problem.”
The only way this hypothetical scenario would make sense is if only a limited number of the vehicles operating on our roads were equipped with full self-driving capabilities. Hence the problem with using such an argument to justify limiting our use of these vehicles. In truth, the self-driving industry isn’t aiming for limited access; quite the contrary! The end goal is to establish full self-driving capabilities in every single vehicle operating on the road.
Why is this important? Because it would maximize safety and dramatically decrease the likelihood of any vehicle ending up in any sort of accident. A good example of how this would work is a group of magnets placed on a table with like poles facing one another: no matter how hard or how quickly you try to push the magnets together, every magnet rearranges itself to accommodate the changing position and direction of the magnet you’re moving.
Of course, a person might then argue: well, if the table is a symbolic road, what happens when one of the rearranging magnets falls off the table (ergo, off a cliff or into a ditch)? The problem with this question is that it ignores the fact that magnets aren’t intelligent. They’re not programmed to ensure maximum safety; self-driving vehicles would be.
Let’s try a better example instead: rather than magnets, think of electrons. According to quantum physics – specifically the Pauli exclusion principle – no two identical electrons can occupy the same quantum state. In other words, whenever an electron from one object comes close to an electron from a separate object (or even two electrons within a single object), the two electrons never touch one another; they are effectively repelled and pushed apart in different directions. Every electron in every material object, including our own bodies, adheres to this principle. As a result, order (as opposed to chaos), although complex, is successfully established.
Which brings us back to self-driving vehicles. In this hypothetical scenario, every single vehicle operating on the road is equipped with full self-driving capabilities – radar, mapping, full autonomy – and connected via the Internet of Things. Let’s say a group of people attempts to walk across a busy highway. In response, a select few vehicles will not only detect the group walking across the highway; they’ll also detect every other object within their vicinity. As those vehicles steer to a safe distance, or slow down to allow the group to cross safely, every other vehicle will respond accordingly, compensating for each neighbor’s new course of action. Just like electrons, they’ll rearrange themselves in an intelligent manner to maximize both safety and efficiency.
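To make that coordination concrete, here is a toy sketch of the idea in Python. It is not any real vehicle-to-vehicle protocol; the sensor range, the headway rule, and the instant speed changes are all invented simplifications for illustration:

```python
# Toy simulation of cooperative avoidance on a one-lane road.
# All names and rules here are illustrative, not a real V2V protocol.

SENSOR_RANGE = 50.0   # how far ahead a vehicle "sees" (meters)
SAFE_HEADWAY = 10.0   # minimum gap to the vehicle ahead (meters)
DT = 0.5              # simulation time step (seconds)

class Vehicle:
    def __init__(self, name, position, speed):
        self.name, self.position, self.speed = name, position, speed

    def plan(self, leader, pedestrian_pos):
        """Pick a target speed from what the sensors and network report."""
        target = 25.0  # default cruising speed (m/s)
        # Rule 1: a pedestrian ahead within sensor range -> stop for them.
        if 0 < pedestrian_pos - self.position < SENSOR_RANGE:
            target = 0.0
        # Rule 2: keep a safe headway behind the broadcast position of
        # the vehicle ahead (this is the "cooperative" part: each car
        # reacts to its neighbor's reaction, not just its own sensors).
        if leader and leader.position - self.position < SAFE_HEADWAY:
            target = min(target, leader.speed)
        return target

def step(vehicles, pedestrian_pos):
    # Update the front-most vehicle first, so followers see fresh plans.
    vehicles.sort(key=lambda v: -v.position)
    for i, v in enumerate(vehicles):
        leader = vehicles[i - 1] if i > 0 else None
        v.speed = v.plan(leader, pedestrian_pos)
        v.position += v.speed * DT

cars = [Vehicle("A", 0.0, 25.0), Vehicle("B", 15.0, 25.0), Vehicle("C", 30.0, 25.0)]
for _ in range(10):
    step(cars, pedestrian_pos=70.0)
for v in cars:
    print(f"{v.name}: pos={v.position:.1f} m, speed={v.speed:.1f} m/s")
```

The point of the sketch is the second rule: each vehicle reacts not only to what its own sensors see but to the updated plans of the vehicles around it, which is what lets the whole group make room without any single dramatic swerve.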
As a result, no one gets harmed and no vehicle finds itself in an accident. This is why we shouldn’t limit the number of self-driving vehicles operating on the road. To ensure both maximum safety and efficiency, federal regulators should help build a pathway towards equipping every vehicle on our roads with full self-driving capabilities. In doing so, the “trolley problem” would no longer be an actual problem.
***
B.J. Murphy is the Director of Social Media of the U.S. Transhumanist Party.
4 thoughts on “The Trolley Problem and Self-Driving Vehicles – Article by B.J. Murphy”
Indeed, when more cars become self-driving, safety will increase dramatically. Not only can the cars individually respond much faster than humans can, but they will also be connected with each other. This will result in nearby cars knowing each other’s routes for as long as they are in proximity of one another, and thus optimizing the layout of the fleet of cars on the same stretch of road.
At this moment it is difficult for many people to imagine that cars will have the capacity to drive and make decisions.
I think the big question will be: if an accident happens once self-driving cars are considered fit for the market, who would be responsible? Can we blame the manufacturer, or is the “driver” (who’s really a passenger) to blame? Could a network operator be to blame if the network goes down and an accident happens as a result?
Surprisingly, Volvo was the first car manufacturer to openly state that they’ll take responsibility for any accident that is caused by any one of their driverless vehicles.
http://www.ibtimes.co.uk/volvo-we-will-be-responsible-accidents-caused-by-our-driverless-cars-1523260
I’ve seen driverless cars being tested in my area, and I can attest to their efficiency as long as they aren’t surrounded by distracted human drivers. The primary flaw of driverless cars so far is that they follow the law too strictly instead of driving flexibly, as the surrounding humans tend to do. Frankly, I hadn’t thought of multiple driverless cars collaborating simultaneously to avoid threats; that seems like a highly efficient and prudent idea.
At the same time, replacing most, if not all, cars on any given road would be a costly endeavor and would require exerting pressure on the most influential representatives of the automobile industry. That would require not only technological and economic change, but political change as well.
Incidentally, MIT has created an online interactive game/quiz which tests the user’s reactions to the problems described in this article. The user selects one of two options in a given variant of the Trolley Problem in order to provide hypothetical guidance to a driverless car if it had to make such a decision. It’s called Moral Machine, for those who are interested: http://moralmachine.mit.edu/
“At the same time, replacing most, if not all, cars on any given road would be a costly endeavor”
In the short term, yes, it would be quite costly. In the long term, however, individual state governments (and individual car owners) would save a lot of money, considering the dramatic decrease in the number of car-related accidents. The long-term benefits would more than pay for the short-term costs.