Ethical decisions and dilemmas of self-driving cars

Researchers are trying to program self-driving cars to make split-second decisions that raise real ethical questions.

As a philosopher, I am perhaps the last person you would expect to have a hand in designing your next car, but that’s exactly what one expert on self-driving vehicles has in mind.

Chris Gerdes, a professor at Stanford University, leads a research lab that is experimenting with sophisticated hardware and software for automated driving. Together with Patrick Lin, a professor of philosophy at Cal Poly, he is also exploring the ethical dilemmas that may arise when self-driving vehicles are deployed in the real world.

Co-creation between philosophers and engineers

Gerdes and Lin organized a workshop at Stanford earlier this year that brought together philosophers and engineers to discuss the issue. They implemented different ethical settings in the software that controls automated vehicles and then tested the code in simulations and even in real vehicles. Such settings might, for example, tell a car to prioritize avoiding humans over avoiding parked vehicles, or not to swerve for squirrels.
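The workshop’s code isn’t published here, but one way to picture an “ethical setting” is as a small priority table that a motion planner consults when ranking possible maneuvers. The Python sketch below is purely illustrative: the obstacle classes, cost values, and functions are my own assumptions, not the software actually used in the Stanford experiments.

```python
# Illustrative sketch only: a crude "ethical settings" table for a motion planner.
# The obstacle classes, weights, and decision rule are assumptions, not the
# actual software used in the Stanford experiments.

# Higher cost = more important to avoid.
HARM_COST = {
    "human": 1_000_000,   # always prioritize avoiding people
    "vehicle": 1_000,     # parked or moving cars
    "squirrel": 0,        # per the example: do not swerve for squirrels
}

def maneuver_cost(obstacles_hit):
    """Total cost of a candidate maneuver, given the obstacle classes it would hit."""
    return sum(HARM_COST.get(kind, 1) for kind in obstacles_hit)

def choose_maneuver(candidates):
    """Pick the candidate maneuver with the lowest total harm cost."""
    return min(candidates, key=lambda m: maneuver_cost(m["hits"]))

# Example: braking in the lane would only hit the squirrel; swerving would hit a parked car.
options = [
    {"name": "brake_in_lane", "hits": ["squirrel"]},
    {"name": "swerve_left", "hits": ["vehicle"]},
]
print(choose_maneuver(options)["name"])  # -> brake_in_lane
```

Encoded this way, the “ethical setting” is simply the choice of weights: change the numbers and the planner prefers a different maneuver, which is precisely what makes such settings contentious.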

Fully self-driving vehicles are still at the research stage, but automated driving technology is rapidly creeping into vehicles. Over the next couple of years, a number of carmakers plan to release vehicles capable of steering, accelerating, and braking for themselves on highways for extended periods. Some cars already feature sensors that can detect pedestrians or cyclists, and warn drivers if it seems they might hit someone.

So far, self-driving cars have been involved in very few accidents. Google’s automated cars have covered nearly a million miles of road with just a few rear-enders, and these vehicles typically deal with uncertain situations by simply stopping.

As the technology advances, however, and cars become capable of interpreting more complex scenes, automated driving systems may need to make split-second decisions that raise real ethical questions.

At a recent industry event, Gerdes gave an example of one such scenario: a child suddenly dashing into the road, forcing the self-driving car to choose between hitting the child or swerving into an oncoming van.

Seen through human eyes, one of these obstacles clearly has far more value than the other. What is the car’s responsibility?

It might even be ethically preferable to put the passengers of the self-driving car at risk. If it would avoid the child, if it would save the child’s life, could we injure the occupant of the vehicle? These are very tough decisions that those who design control algorithms for automated vehicles face every day.
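To make that trade-off concrete, one could imagine the planner comparing the expected harm of each option, including harm to the car’s own occupants. Again, this is only a sketch under assumed numbers; the probabilities and severity scores below are invented for illustration and are not real crash data or anyone’s actual algorithm.

```python
# Illustrative only: weighing risk to the car's own occupants against risk to others.
# All probabilities and severities are invented numbers for the sake of the example.

def expected_harm(outcomes):
    """Sum of probability-weighted harm over everyone a maneuver puts at risk."""
    return sum(probability * severity for probability, severity in outcomes)

# The scenario from the text: a child suddenly runs into the road.
stay_and_hit_child = expected_harm([(0.9, 100)])   # near-certain, severe harm to the child
swerve_into_van = expected_harm([(0.3, 40),        # moderate risk to the car's occupant
                                 (0.3, 40)])       # similar risk to the van's occupants

print("swerve" if swerve_into_van < stay_and_hit_child else "stay")  # -> swerve
```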

At the event, researchers, automotive engineers, and automotive executives were urged to start thinking through the ethical implications of the technology they are developing. They will not be able to simply buy an off-the-shelf ethics module and plug it into a self-driving car. Other experts agree that there will be an important ethical dimension to the development of automated driving technology.

Ethical dilemmas

When you ask a car to make a decision, you face an ethical dilemma. You might see something in your path and decide to change lanes, only to find that something else is already in that lane. That, too, is an ethical dilemma.

The CityMobil2 project is testing automated transit vehicles in various Italian cities. These vehicles are far simpler than the cars being developed by Google and many carmakers: they simply follow a route and brake if something gets in the way. This may make the technology easier to launch, because here there is no ethical problem to solve.
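For contrast, the much simpler behaviour described here could be captured in a few lines: stay on the route, and brake for anything ahead. The distance threshold and sensor interface in this sketch are assumptions for illustration, not the actual CityMobil2 control software.

```python
# Minimal sketch of the far simpler policy described for the CityMobil2 shuttles:
# follow a fixed route and brake whenever anything is detected ahead.
# The distance threshold and sensor interface are assumptions for illustration.

SAFE_GAP_M = 5.0  # assumed minimum clear distance ahead, in metres

def control_step(distance_to_nearest_obstacle_m):
    """Decide the action for one control cycle."""
    if distance_to_nearest_obstacle_m < SAFE_GAP_M:
        return "brake"         # something is in the way: just stop
    return "follow_route"      # otherwise keep tracking the predefined route

print(control_step(2.0))   # -> brake
print(control_step(20.0))  # -> follow_route
```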

Others believe the situation is a little more complicated. Bryant Walker-Smith, an assistant professor at the University of South Carolina who studies the legal and social implications of self-driving vehicles, thinks that plenty of ethical decisions are already made in automotive engineering. Ethics, philosophy, law: all of these assumptions underpin many such decisions. Take airbags, for example: inherent in that technology is the assumption that it will save a lot of lives and only kill a few.

Given the number of fatal traffic accidents that involve human error today, it could be considered unethical to introduce self-driving technology too slowly. The biggest ethical question is how quickly we move: we have a technology that could potentially save a lot of people, but that will be imperfect and will still kill some.

 

