The Ethics of Self-Driving Vehicles: Accountability and Trolley Problem in America 

Introducing the Predicament

Self-driving vehicles are forms of transportation that use sensors and environmental data to navigate routes without a human driver. According to an article by Jon Walker (2019), manager at the Rocky Mountain Institute (RMI), Ford expects to manufacture a self-driving car by 2021. With the ongoing technology revolution, self-driving cars are becoming less of a fantasy and more of a reality. The merits of self-driving cars have been vehemently debated from the perspectives of both engineers and philosophers. According to an article by Chris Isidore (2018), a CNN technology reporter, more than 90% of traffic fatalities are caused by human error. Many engineers argue that removing humans from the steering wheel would significantly lower the number of traffic fatalities. However, the introduction of such cars remains contested, and ethical concerns about them have arisen; many question the decision-making process of the vehicle in difficult situations. In this paper, we will examine the morality of self-driving cars, establish engineers' reasoning behind the importance of self-driving cars, and compare this to philosophers' research concerning the moral values of self-driving cars, relating specifically to the trolley problem and liability.

Engineering Perspective

While many may argue that the points philosophers make are quite important, engineers believe that their designs' benefits outweigh the moral conflicts. According to Daniel Howard (2013), of the department of city planning at the University of California, Berkeley, self-driving cars are more convenient than conventional ones. Howard asked eighty students what they considered the top three most attractive features of self-driving cars; more than twenty named "convenience" as their top choice. This idea of convenience is also prevalent in a study by Peter Burns (1999), of the Department of Civil and Environmental Engineering and Earth Sciences, on how mobility changes as people age. He had participants fill out a 160-item multiple-choice questionnaire and found that participants aged 35-45 drove approximately 200 miles per week, whereas participants 75 years and older drove less than 100 miles per week (1999). These statistics suggest that people drive less frequently with increasing age. Engineers like Peter Burns (1999) and Daniel Howard (2013) believe that self-driving cars will bridge this mobility gap for the elderly. According to J.F. Coughlin (2014), of the Engineering Systems Division at MIT, the elderly drive less because of visual impairments; research suggests that 85-95% of sensory cues when driving are visual. Through the process of aging, one's eyesight slowly worsens, making it difficult to drive (2014). Coughlin addresses another function affected by age: muscular strength. As one grows older, the probability of arthritis increases; he notes that in the US, more than half of the population over 75 years old has arthritis (2014). Arthritis severely limits one's ability to maneuver large vehicles. With self-driving cars, elderly people could get around more conveniently.

The Ethical Concerns of Self-Driving Cars

A vast majority of research shows that self-driving cars are a new focus of the old trolley problem. In situations where a deadly collision is inevitable, problems arise concerning the decision-making of self-driving cars. Richard Mancuso (2016), a professor at Claremont McKenna College, addressed the trolley problem in the context of self-driving cars in his paper. One scenario of an inevitable collision occurs when crates fall onto the road: the car could (a) drive into the crates and kill the passenger; (b) swerve right and kill a motorcyclist; or (c) swerve left and kill multiple passengers in the car (2016). Instead of a human taking control and making these life-or-death decisions, the car will. The owner of the self-driving car would have no say, because the car would decide for itself in the given scenario. An article by Jean-Francois Bonnefon (2015), a Research Director at the French Centre National de la Recherche Scientifique, addresses the utilitarian approach: choosing the option with the fewest fatalities (which could mean killing the passenger of the vehicle in certain scenarios). By essentially programming the car to kill the passenger whenever doing so minimizes the number of deaths, manufacturers might discourage people from buying that car. While some may decide that sacrificing themselves in such a situation would be the ideal option, others would differ. Bonnefon conducted a study of majority opinion in these difficult situations. He found that most people thought the utilitarian approach was good because it minimized the number of deaths, but not when the owner had to sacrifice their own life (2015). In the aforementioned article, Richard Mancuso (2016) analyzes the basis on which self-driving cars should make their decisions: if there is a 50% chance of a cyclist dying in one maneuver, but a 100% chance of the owner dying in the other, should the self-driving car perform this probability analysis to essentially decide who is killed?
Across these studies, philosophers have debated how self-driving cars should make these life-or-death decisions. Their views on what should happen in these difficult scenarios differ, and no single view can be made universal by an engineer of self-driving cars.
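To make the probability analysis Mancuso raises concrete, the sketch below implements a purely utilitarian rule that selects the maneuver with the lowest expected number of fatalities. This is a hypothetical illustration, not any manufacturer's actual algorithm; the option names and probabilities are invented for the example.

```python
# Hypothetical sketch of a purely utilitarian decision rule: choose the
# maneuver that minimizes expected fatalities. All options and
# probabilities below are illustrative, not real vehicle data.

def expected_fatalities(option):
    """Sum of (probability of death x people at risk) for one maneuver."""
    return sum(p * n for p, n in option["risks"])

def choose_maneuver(options):
    """Return the option with the lowest expected number of fatalities."""
    return min(options, key=expected_fatalities)

# Mancuso-style scenario: swerving gives one cyclist a 50% chance of
# dying; staying the course kills the one owner with certainty.
options = [
    {"name": "swerve toward cyclist", "risks": [(0.5, 1)]},    # 0.5 expected deaths
    {"name": "stay course (owner dies)", "risks": [(1.0, 1)]},  # 1.0 expected deaths
]

best = choose_maneuver(options)
print(best["name"])  # prints "swerve toward cyclist"
```

Note that this is exactly the kind of calculation philosophers object to: the rule mechanically trades a certain death for a probable one, with no input from the people involved.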

Philosophers also examine the morality of self-driving cars in relation to accountability. In instances where death is inescapable, the next moral step is deciding who should be subject to the criminal punishment associated with that death. Many philosophers question whether a robot can be charged with a criminal punishment like a person. This ultimately brings us back to a fundamental question: what defines a human being? According to John Locke (1689), "such rules to be believed in by those who can't conceive how anything except a free agent can be capable of a law. Upon that ground, those who can't reconcile morality with mechanism… must necessarily reject all principles of virtue". Although this was written centuries ago by an author with no knowledge whatsoever of a world with self-driving cars, his definition of a human still applies. By Locke's definition, self-driving cars are not human, for they have no morals, and therefore cannot be charged with a criminal punishment. Alexander Hevelke (2015), of the Department of Philosophy at Ludwig-Maximilians-Universität München, discusses the most feasible approach to this accountability problem: holding the manufacturer accountable for errors present in the vehicle. The manufacturer likely knew, or should have known, of the problem before selling the defective model. However, if manufacturers are constantly liable for all errors, this would impede further attempts to improve their product. There has to be a way to hold someone accountable for accidents without discouraging beneficial changes to the model. This raises a moral question: should we determine liability in a way that ensures the promotion of self-driving cars? While many argue that manufacturers should be liable, this would adversely impact further modifications to the self-driving car model.

Solving the Predicament

By customizing the decision-making algorithms of self-driving cars, manufacturers could both increase sales and address the moral concerns expressed by philosophers. While many may argue that self-driving cars are advantageous because they are more convenient, many others believe that safety must come first. To address this safety problem, Andrea Renda (2018), Senior Research Fellow at CEPS, proposed a solution: engineers should implement state-of-the-art adaptive technologies that help self-driving cars make decisions in different environments. With these adaptive technologies, self-driving cars can make appropriate decisions for their specific surroundings; for instance, the car should be more aware of and responsive to pedestrians in a city environment than in a rural one. By specializing the car to make decisions based on location, engineers would address many of the moral concerns raised by philosophers.
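One way Renda's adaptive-technology proposal might be operationalized is with environment-dependent decision parameters, so that the same planner behaves more cautiously around pedestrians in a city than on a rural road. The sketch below is a minimal illustration of that idea only; the profile names, weights, and speed limits are invented, not drawn from Renda's paper or any real system.

```python
# Hypothetical sketch of location-aware decision parameters. The planner
# would consult the active profile when weighing maneuvers; all numbers
# here are invented for illustration.

PROFILES = {
    "urban": {"pedestrian_weight": 3.0, "max_speed_mph": 25},
    "rural": {"pedestrian_weight": 1.5, "max_speed_mph": 55},
}

def decision_profile(environment):
    """Return the parameter set for the given environment, falling back
    to the more cautious urban profile when the environment is unknown."""
    return PROFILES.get(environment, PROFILES["urban"])

profile = decision_profile("urban")
print(profile["pedestrian_weight"])  # prints 3.0
```

Defaulting to the cautious profile when the environment cannot be classified mirrors the safety-first priority the essay describes.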

Cite this page

The Ethics of Self-Driving Vehicles: Accountability and Trolley Problem in America. (2022, Feb 07). Retrieved from

https://graduateway.com/the-ethics-of-self-driving-vehicles-accountability-and-trolley-problem-in-america/
