As Google and others make headway into a world where self-driving cars render human reaction unnecessary, it’s dawning on experts that the ethics involved in programming these cars to avoid crashes are complicated. Whereas drivers can adapt to split-second decisions, cars equipped with pre-programmed software may not.
This grey area is being studied by a cadre of academics and technologists, including University of Southern California professor Jeffrey Miller, who is developing software for self-driving cars, according to the Associated Press.
Google is leading the charge in developing the cars, testing them around the world. However, the Mountain View company is focused only on the “most common driving scenarios” to generally avoid crashes and hasn’t studied the nuances involved in crashes, according to Ron Medford, the director of safety for Google’s self-driving car project.
The moral implications of the driverless cars’ programming are a highly philosophical issue so far. Among major automakers, only BMW has convened a group to study the implications. The company’s Silicon Valley office is looking at the intersection of technology, ethics, social impact and the law, the AP reported.
The issue has yet to reach most state governments, and it’s far from the radar of the federal government. Only four states have passed legislation about self-driving cars on public roads.
“This is one of the most profoundly serious decisions we can make,” Patrick Lin, an ethics professor at Cal Poly San Luis Obispo, told AP. “Program a machine that can foreseeably lead to someone’s death. When we make programming decisions, we expect those to be as right as we can be.”