Whom Should a Self-driving Car Protect in a Car Accident

Imagine a trolley speeding down a track towards five people who are lying on the rails a few kilometres ahead. They will die unless a lever is pulled, diverting the trolley onto a side track where a single man is lying. What would you do? This well-known ethical thought experiment, the ‘trolley problem’, captures the fundamental dilemma of weighing five lives against one from the point of view of an external observer. But what if the trolley had its own brain and could decide for itself?

This paper focuses on the research question: “How should a self-driving car be programmed to behave in the case of an unavoidable accident, in which a decision must be made between causing different kinds of harm to third parties or to itself?”

Imagine the following situation: a self-driving car is approaching a pedestrian crossing that a man is about to step onto. The vehicle detects him and smoothly stops to let him pass. At the same time, a driver who is not paying attention to the road is approaching in an ordinary car behind the autonomous vehicle. The autonomous vehicle now plays the decisive role in a possibly fatal situation: having noticed the careless driver behind, it can either stay still and absorb the impact, or steer to the side of the road, letting the oncoming car continue through the crossing and thereby causing the death of the pedestrian. What should the autonomous car do?

This modernized version of the trolley problem is what manufacturers of self-driving cars are trying to answer, so far without much success. The moral implications of such a decision are so intricate that engineers and philosophers must work together to marry philosophy with engineering practice before anything close to a unanimous decision can be reached.

Researchers at the Massachusetts Institute of Technology (MIT, 2018) tried to address this dilemma by performing the largest ethical survey ever conducted, gathering answers from more than two million people in over 230 countries and territories. The study presented thirteen different road-crossing scenarios involving a self-driving car and various combinations of people and animals, and participants were asked to indicate what they would do if they were the autonomous vehicle. The results of this thought experiment showed, among other things, that people prefer saving groups over single individuals, people over pets, and the young over the elderly. However, the outcomes also depend in part on the respondents’ country of origin, which raises a further problem for programming self-driving cars in different parts of the world. For instance, Colombian respondents tended to spare people of higher social status, while Finnish respondents showed no preference between homeless people and executives (Forbes, 2018). If a self-driving software developer in Colombia encodes different priorities from one in Finland, the result would be inconsistent behaviour across borders, and autonomous vehicles might never become an everyday reality. Hence, it is vital to agree, if possible, on a set of rules for how such cars shall be programmed.

Today it is recognized that establishing which decisions and behaviours are morally permitted, prohibited, or obligatory in emergency situations is a hard philosophical and ethical problem. This paper provides guidelines for the automaker on how to proceed when programming the car. In the situation analysed, the programmer is not acting on instinct, because she or he is not directly involved in the event. Furthermore, we assume that the car makes no mistakes; this is not necessarily true, but it is a necessary assumption for the analysis to proceed. A potential customer who is interested in buying the car will expect it to be safe, even safer than a conventional car, and to act so as to protect him or herself, the driver. In other words, the car company cannot produce a car that is willing to endanger the life of the driver, or even to sacrifice him or her in order to save, according to its calculations, other human lives for the sake of some overall value maximization. Hardly anybody would buy such a car, no matter how small the chance of that eventuality.

There are not only economic reasons to support this point of view. The field of ethics provides several approaches to this issue (Goodall, 2014). Firstly, utilitarianism is a form of consequentialism in which an action is considered good if it produces the maximum net cumulative benefit, or utility (Goodall, 2014). In the collision situation described above, a utilitarian vehicle would determine that a rear-end crash is preferable to the death of a pedestrian, even if it implies serious harm to the driver. Hence, according to this view, the car should stay in its place and absorb the impact. This outcome would be inadmissible to most, and the approach faces serious criticism. The main objection is the issue of incommensurability, which can be read as an epistemic or a conceptual problem (Santoni de Sio, 2017). On the epistemic reading, even if there were reliable objective standards for deciding which life is more valuable, the individuals involved in such situations may lack the information or capacity to make a proper and valid evaluation, and may be unable to predict the long-term consequences of their actions. On the conceptual reading, it is impossible to compare the value of different lives at all: since that value is determined by subjective, context-dependent evaluations, there are no objective standards for making such comparisons.
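
To make the utilitarian decision rule concrete, here is a minimal sketch in Python. Every number in it is a hypothetical placeholder: the harm scores, the crash probabilities, and the action names are our own illustrative assumptions, not values proposed by Goodall (2014) or by any manufacturer.

```python
# Minimal sketch of a utilitarian collision policy.
# Harm scores and probabilities are hypothetical placeholders.

ACTIONS = {
    # action: list of (probability, total_harm) outcomes
    "stay_and_absorb_impact": [(0.9, 30), (0.1, 80)],  # driver injured, pedestrian spared
    "swerve_aside": [(1.0, 100)],                      # pedestrian killed
}

def expected_harm(outcomes):
    """Expected cumulative harm over an action's outcome distribution."""
    return sum(p * harm for p, harm in outcomes)

def utilitarian_choice(actions):
    """Pick the action that minimizes expected total harm, i.e. maximizes net utility."""
    return min(actions, key=lambda a: expected_harm(actions[a]))

print(utilitarian_choice(ACTIONS))  # -> 'stay_and_absorb_impact'
```

The incommensurability objection bites exactly here: the sketch only works once someone has already put numbers on lives and injuries, and that valuation step is what both the epistemic and the conceptual readings call into question.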

Any attempt to assign values to different people must also reckon with the Universal Declaration of Human Rights (United Nations, 1948). Article 2 of this milestone document states that “everyone is entitled to all the rights and freedoms set forth in this Declaration, without distinction of any kind, such as race, colour, sex, language, religion, political or other opinion, national or social origin, property, birth or other status […]” (United Nations, 1948). All people are equal, and assigning different values to different people would violate this right. Assigning an economic value to people is clearly problematic, and the same holds for defining a scale of values based on the severity of the injury the car predicts will result from the accident, from a minor injury to death. As noted by Keeling (2018), “people disagree about whether utilitarianism, contractualism, or another set of principles, correctly describes which harms are morally permitted in cases where harm cannot be avoided”. On the same wavelength, Santoni de Sio (2017) believes that this disagreement will prevent us from reaching a definitive solution to the moral design problem of self-driving cars. Since every such valuation runs into the dilemma of whether or not to respect human rights, an approach based on utilitarianism is not recommended when programming a self-driving car.

An alternative approach is descriptive ethics, the study of individuals’ or groups’ beliefs about morality. Unlike a normative approach, this view treats society’s expressed moral beliefs as a distribution. A vehicle programmed this way would respond to the distracted-driver collision probabilistically, according to the range of society’s preferences, and the MIT survey mentioned above could be used to estimate that distribution. Although this approach may be easier to defend because it reflects society’s expressed beliefs, it may permit behaviours that, while socially accepted, are still morally wrong. In particular, the dilemma of whether or not to comply with the Universal Declaration of Human Rights arises once again.
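
How such a policy might look in code is sketched below; the preference shares are invented for illustration and are not the actual MIT survey results. The action is sampled in proportion to surveyed societal preferences, which is one natural way of treating the expressed beliefs as a distribution.

```python
import random

# Hypothetical shares of respondents preferring each response;
# illustrative numbers, not the actual MIT survey results.
PREFERENCE_SHARES = {
    "stay_and_absorb_impact": 0.7,
    "swerve_aside": 0.3,
}

def descriptive_policy(shares, rng=random):
    """Sample an action with probability proportional to societal preference."""
    actions = list(shares)
    weights = [shares[a] for a in actions]
    return rng.choices(actions, weights=weights, k=1)[0]

print(descriptive_policy(PREFERENCE_SHARES))
```

A deterministic variant would simply always pick the majority preference; either way, the policy inherits whatever moral blind spots the surveyed population has.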

Thirdly, deontology is a moral theory based on adherence to a set of rules, duties, or rights (Goodall, 2014). According to this approach, in the example under analysis the automated vehicle has a duty to act in accordance with a universal law to which all the subjects involved could agree. The self-driving car must therefore weigh its right to protect its occupant from the distracted driver against the rule of acting in accordance with, and because of, the categorical imperative. Bauhn argues that it is not clear how to compare these, and therefore the amount of risk to which a passenger in an automated vehicle may be exposed in order to protect another person cannot be determined (Goodall, 2014). Even if this threshold is presumably lower than the risk faced by the pedestrian, the precise level has never been defined. For these reasons, the deontological approach cannot determine which decision should be made.
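
To see concretely why rule-adherence stalls in this scenario, consider the sketch below. The duty names, the actions they forbid, and the whole encoding are our own illustrative assumptions, not drawn from Bauhn or Goodall (2014); the point is that once duties conflict, the filter leaves nothing behind.

```python
# Minimal sketch of a deontological filter: any action violating a duty
# is excluded. Duties and forbidden actions are hypothetical.
DUTIES = {
    "do_not_harm_pedestrians": {"swerve_aside"},
    "protect_the_passenger": {"stay_and_absorb_impact"},
}

def permissible_actions(actions, duties):
    """Return the actions that violate no duty."""
    forbidden = set().union(*duties.values())
    return [a for a in actions if a not in forbidden]

options = permissible_actions(["stay_and_absorb_impact", "swerve_aside"], DUTIES)
print(options)  # -> []: every option violates some duty, so the theory gives no verdict
```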

Finally, the doctrine of necessity concerns emergency cases in which human agents have intentionally caused damage to life or property in order to avoid some other, greater loss. Under this doctrine, “behaviours that are prima facie prohibited by criminal law may be permitted under exceptional circumstances” (Santoni de Sio, 2017). In our case, the relevant human agent can be taken to be the software developer, given that the car acts according to how it has been programmed.

There are two different ways in which necessity can be used as a defence: as a justification or as an excuse (Santoni de Sio, 2017). A justification applies when an action, although prohibited, is performed in exceptional circumstances that eliminate its wrongness; an excuse applies when the wrongful action was done under conditions that eliminate culpability, as in cases of non-culpable ignorance of relevant circumstances. If necessity were only an excuse, grounded in the weakness of human will and motivation, it could not be applied to any programmed behaviour of an autonomous vehicle, since a programmed response is deliberate rather than impulsive. The example under consideration can therefore only be defended as a justification: the automated vehicle moves from its position in order to protect the driver, avoiding an impact that might cause serious harm.

Allowing harm is not the same as doing harm. If there were no difference, “there should be no objection to bombing innocent civilians where doing so will minimize the overall number of deaths in war” (Woollard, 2016). While consequentialists reject this distinction, anti-consequentialists substantially agree with it (Woollard, 2016). Since a utilitarian view is hardly applicable here, as previously shown, allowing harm is preferable to doing it. By moving aside, the autonomous car does not itself strike the pedestrian; it merely allows the harm set in motion by the careless driver to occur. Hence, deviating from its trajectory and not saving the pedestrian is morally justifiable.

Taking everything into due consideration, we believe that the software developer should program the self-driving car to preserve the life of its driver, regardless of the particularities of the event. This conclusion is supported, first, by the economic reasons given above. Ethically speaking, we have shown that neither utilitarian nor deontological argumentation can define a way to act, especially considering the variety of possible cases and situations. We have then shown that deviating from a straight trajectory, in order to avoid an impact and potential harm to the driver, is a morally acceptable solution under the doctrine of necessity, as supported by the conclusions of Santoni de Sio (2017). Finally, the distinction between allowing harm and doing harm has been presented in support of our view.

Drawing on Keeling’s (2018) article in Ethical Theory and Moral Practice, we have identified a particular set of situations in which it is permissible not to follow the previous guidelines: those in which a Restricted Pareto Principle (RPP) can be applied. The RPP applies in situations in which “there exist no alternative allocation of harm in which all affected parties are at least as well-off, and some affected party is strictly better off” (Keeling, 2018). However, such situations are highly unlikely, and extreme caution is required, because difficult and potentially unreliable value judgements are needed to establish when a party counts as at least as well-off.
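
Keeling’s condition can at least be checked mechanically once harms are quantified. The sketch below tests whether a harm allocation is Pareto-dominated; the well-being scores are hypothetical, and assigning such numbers is precisely the fragile valuation step warned about above.

```python
# Sketch of the Pareto test behind the Restricted Pareto Principle (RPP).
# Each allocation maps affected parties to hypothetical well-being scores.

def dominates(alt, cur):
    """True if 'alt' leaves every party at least as well-off and some party strictly better off."""
    return (all(alt[p] >= cur[p] for p in cur)
            and any(alt[p] > cur[p] for p in cur))

def is_pareto_efficient(candidate, alternatives):
    """True if no alternative allocation Pareto-dominates the candidate."""
    return not any(dominates(alt, candidate) for alt in alternatives)

stay   = {"driver": 40, "pedestrian": 100}   # stay and absorb the impact
swerve = {"driver": 100, "pedestrian": 0}    # swerve aside
print(is_pareto_efficient(stay, [swerve]))   # -> True: neither option dominates the other
```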

While our conclusion contrasts with the results of the MIT (2018) survey mentioned above, we believe it is the most reasonable view to pursue. We have shown that the survey’s results partly depend on the respondents’ country of origin and might derive from misleading questions. In particular, we believe that people might change their minds as the scenario, or their own position within it, changes, thus undermining the validity of the survey’s results.

Bibliography

  1. Forbes. (2018). Retrieved from https://www.forbes.com/sites/noelsharkey/2018/11/08/should-a-self-driving-car-kill-its-passengers/#42268e6612ec
  2. Goodall, N. J. (2014). Vehicle Automation and the Duty to Act.
  3. Keeling, G. (2018). Legal Necessity, Pareto Efficiency & Justified Killing in Autonomous Vehicle Collisions. Ethical Theory and Moral Practice.
  4. MIT. (2018). Retrieved from http://news.mit.edu/2018/how-autonomous-vehicles-programmed-1024
  5. Santoni de Sio, F. (2017). Killing by Autonomous Vehicles and the Legal Doctrine of Necessity. Ethical Theory and Moral Practice.
  6. United Nations. (1948). Universal Declaration of Human Rights. Retrieved from http://www.un.org/en/universal-declaration-human-rights/
  7. Woollard, F. (2016). Doing vs. Allowing Harm.
