You remember HAL, the fictional character/computer in 2001: A Space Odyssey. She/He/It controlled the systems of the Discovery One spacecraft. Well, HAL is coming soon to an automobile near you.
The recent crash and death of a Tesla Model S owner who was using the Autopilot system has raised a lot of issues and alarms about what is happening with Autonomous Vehicles (AVs). We have been moving gradually toward smart cars for some time. Collision warning systems, blind-spot monitoring and sophisticated parking assistance are now included in many models.
But now things are about to move to the next level, literally. Level 2, as it is called, means a car that can take control of most functions for a few seconds at a time, but where the driver must be ready to resume command immediately. Level 3 is a vehicle that can drive on its own in specific circumstances, like a highway, but where human drivers have to “be available for occasional control within a comfortable transition time”. And then there is Level 4: take a nap, it doesn’t matter, the driving is handled by HAL.
The problem is we are now in that Twilight Zone where autopilots can take control, but we are not quite sure where the technology’s “blind spots” are. Virginia Tech researchers have found that it takes drivers of Level 3 cars an average of 17 seconds to respond to “take over” requests. A car moving at 65 mph will travel 1,621 feet in that time, more than a quarter of a mile, before the human takes back control. A lot can happen in 540 yards. Proponents of AVs argue that the technology is much safer than humans at the controls. Traffic accidents cause about 33,000 deaths a year in the U.S. today; some analysts estimate this will fall by half with AVs.
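The arithmetic behind that figure is worth making explicit, since it drives the whole concern. A quick sketch (the speed and the 17-second takeover time come from the example above; the rest is just unit conversion):

```python
# Distance a car covers during a Level 3 "take over" handoff.
MPH_TO_FPS = 5280 / 3600      # miles/hour -> feet/second (~1.467)

speed_mph = 65                # highway speed from the example above
takeover_time_s = 17          # average driver response time (Virginia Tech)

distance_ft = speed_mph * MPH_TO_FPS * takeover_time_s
print(f"{distance_ft:.0f} feet ({distance_ft / 3:.0f} yards)")
# -> 1621 feet (540 yards), well past a quarter mile (1,320 feet)
```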
For us the interesting issue revolves around ethics and morality. How do you program a machine to make ethical decisions? For instance, your Level 4 AV is suddenly about to hit a motorcyclist, a collision that will almost certainly kill the rider. The alternative is to swerve to the right and run into a stone wall. Swerving to the right will cause much more harm to you, the passenger/driver, than hitting the motorcyclist would. How do you program the machine? To spare you or to spare others?
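To make the dilemma concrete, here is a deliberately naive sketch of a “minimize expected harm” rule. Everything in it, the probabilities, the severity scores, the scenario names, is invented for illustration; the point is that picking those numbers is the ethical decision, and the code cannot dodge it:

```python
# Hypothetical sketch of a crude utilitarian decision rule.
# All probabilities and severity weights below are made up
# for illustration; choosing them IS the ethical choice.

def expected_harm(option):
    """Sum of probability-weighted harm over an option's outcomes."""
    return sum(p * severity for p, severity in option["outcomes"])

options = [
    {"name": "stay the course (hit motorcyclist)",
     "outcomes": [(0.9, 100)]},    # likely fatality for the rider
    {"name": "swerve right (hit stone wall)",
     "outcomes": [(0.9, 60)]},     # severe injury to the passenger
]

choice = min(options, key=expected_harm)
print("The machine chooses:", choice["name"])
# A purely utilitarian rule sacrifices its own passenger here;
# weight the passenger's harm more heavily and the answer flips.
```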
This is a version of the “trolley problem” first introduced by Philippa Foot in 1967. In this Catch-22, a trolley is travelling out of control down the track at high speed. It will almost certainly hit and kill five people working on the line. You have the ability to pull a switch and send the trolley onto a different line where only a single person is working, though she too will be killed. What do you do? And what do you do if the person on the alternate track is a relative of yours? (Glad I wasn’t a Psychology major who had to tackle this on an exam.)
The computer HAL in 2001: A Space Odyssey (the movie was released in 1968) probably seemed pretty far off and very Sci-Fi at the time. But Artificial Intelligence has gotten a lot better over the years, and Autonomous Vehicles are now literally around the corner. But the ethical questions raised have not gotten any easier.