The Ethics of Preprogrammed Car Accidents

Problem Statement

Self-driving cars promise a much safer and more comfortable traffic environment, yet in the process they raise many questions and dilemmas. As long as unpredictable factors remain, such as non-autonomous vehicles, pedestrians, or mechanical and electrical failures, self-driving cars will be vulnerable to crashes and other incidents. In some scenarios, accidents will be (almost) unavoidable. This raises important ethical questions. For instance, should self-driving cars be programmed to minimize fatal accidents and injuries? Or should they protect their driver at all costs? In short, what ethics should guide the decision-making of self-driving cars?

Problem Analysis

Many parties are involved in the decision-making process around self-driving cars, and especially around their crash management. Various interests in self-driving cars exist, and each party has its own view on what these cars should look like, whether to match its own ideals or to produce the most profit for its particular organization.


The first group, and simultaneously the one with the largest interest, is the industry itself. Manufacturers want their cars to sell, and could take various strategic positions in the debate. One could argue that a car should always take care of its driver in terms of safety. At first, a car built on this principle would sell very well. After all, who would buy a car that does not prioritize your safety? However, it would mean the car does everything to keep you safe, disregarding bystanders and other traffic. This could put manufacturers in a bad light, as their cars would cause casualties among innocent people as a service to their drivers.

On the other side are the users, or buyers. They face problems somewhat similar to those of the manufacturers. Their own safety is what they value most, but they do not want to be responsible for the casualties this choice could cause. It is also hard, from a driver's point of view, to decide on a particular crash algorithm: you bring a vehicle onto the road, and you have to decide which people to injure or kill in the event of a crash. In such cases, the guilt would most certainly be felt by the driver.

Buyers can also be rental companies, who buy the cars and rent them out to various users. Their case is similar to the industry's: they want their cars to be used, and thus want to satisfy customers and be able to guarantee their safety.

Another party is the government. Its role in legislation makes it an important and influential voice. Having created the traffic laws as they are, it will have to adapt to the future traffic situation, of which self-driving cars will be an important part. In general, its goal will be to protect society as a whole and to minimize the total amount of harm done. The question lies in how to weigh certain casualties, and how to determine whether one death is better than ten major injuries. The general rule of traffic law is that road users must not obstruct or endanger fellow road users. In this spirit, the car should sacrifice its driver first, as that car is the cause of the collision.

Options for Action

There are various options to choose from regarding crash algorithms. We chose to discuss a shortlist of four potential strategies and to evaluate them in terms of ethics.

The first option is to minimize the harm done in a crash. The system would analyze various maneuvers to determine the injuries or deaths each would cause. It would first aim for the fewest deaths, and then for the fewest injuries. Objectively, from society's perspective, this means the least impact on society as a whole.

However, it would not consider any additional factors and would run over a child rather than a gang of criminals.
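The lexicographic preference described above (fewest deaths first, then fewest injuries as a tie-breaker) can be sketched in a few lines of Python. All names and numbers here are illustrative assumptions, not an actual crash system:

```python
from dataclasses import dataclass

@dataclass
class Maneuver:
    # Hypothetical candidate maneuver with the harm the system predicts for it.
    name: str
    expected_deaths: int
    expected_injuries: int

def least_harm(maneuvers):
    # Lexicographic choice: minimize expected deaths first,
    # then expected injuries as a tie-breaker.
    return min(maneuvers, key=lambda m: (m.expected_deaths, m.expected_injuries))

options = [
    Maneuver("brake", 1, 3),
    Maneuver("swerve_left", 0, 5),
    Maneuver("swerve_right", 0, 2),
]
print(least_harm(options).name)  # swerve_right
```

Such a function is exactly as blind as the text describes: it compares only counts, never who is harmed.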

Another strategy might be to prioritize the passengers' safety: the car would neglect all factors outside of the car and choose the course that is safest for the driver. This could mean braking on the spot and taking the impact from behind, or steering off the road and onto the walkway. In this case, the threat to others could be even bigger, for example to the driver in the car behind or to pedestrians on the walkway.

Some parties might argue for a third option: when a crash is imminent and it is possible to harm only the owner of the car, without harming other people or other living beings, that would be the right course of action, as it is not the other parties' problem that the car is crashing.

A last option would be to keep crashes random, imitating human reflexes, and thus the regular crash behaviour. This crash algorithm would avoid the moral dilemma by not having a programmed set of rules on which to operate. The car would mimic the seeming randomness that human drivers experience mid-crash.

Some rules might also have to be set to overrule the crash algorithms, or to be taken into account in any situation. For example, those who cause an accident should be the ones who bear its consequences: if someone breaks the law by crossing the road illegally, their safety should have the lowest priority. Also, human life should be valued more than animal life. One could even say that young people are more valuable than the elderly.
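Overruling rules of this kind amount to weighting harm rather than merely counting it. A minimal sketch, assuming hypothetical weights and field names (none of which come from a real system):

```python
def harm_weight(subject):
    # Illustrative weights reflecting the rules above: humans outweigh
    # animals, and rule-breakers get the lowest priority among humans.
    # The exact numbers are assumptions chosen for the example.
    if not subject["is_human"]:
        return 0.1
    if subject["broke_traffic_law"]:
        return 0.5
    return 1.0

def weighted_harm(victims):
    # Total weighted harm for one candidate maneuver.
    return sum(harm_weight(v) for v in victims)

# A jaywalker counts for less than a law-abiding pedestrian:
print(weighted_harm([{"is_human": True, "broke_traffic_law": True}]))   # 0.5
print(weighted_harm([{"is_human": True, "broke_traffic_law": False}]))  # 1.0
```

A crash algorithm could then minimize `weighted_harm` instead of raw casualty counts, which is precisely where the ethical controversy of such rules becomes a concrete design decision.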

Ethical Evaluation

Intuition

We intuitively believe it would be best to try to minimize harm while taking into consideration whether the people involved adhered to the traffic laws. To us, it seems most beneficial to get the number of fatal traffic accidents as low as possible. As mentioned before, from the objective perspective of society as a whole this is the best solution, as it has the least impact.

Someone breaking the traffic rules, however, should be penalized compared to those who do follow them. The minimal severity of such an offence is a point of discussion: how severe does a traffic offence have to be before you are penalized?

As a second option, prioritizing outsider safety has our preference. Since the owner of the car is actively involved in the decision to use a self-driving car, while outsiders are not, it seems fairer to us to impose the risks of the self-driving car on its passengers.

As the third option, we believe the car should save its passengers. The passengers bought a product, and that product should serve its users best. While seemingly selfish, this option makes people more eager to buy a self-driving car and therefore increases self-driving car adoption; as a result of this adoption, car crashes decrease.

We intuitively consider randomizing the car's actions to mimic human behaviour the worst option. This scenario does avoid the ethical questions raised by predetermined crash algorithms, but the recklessness of this approach is morally inappropriate: we feel that new technologies that can save lives should be embraced, not disregarded because they raise certain ethical dilemmas.

Utilitarianism

Utilitarianism is an ethical theory that strives to maximize overall happiness. (White, 2015)

If we apply this ethical theory to the self-driving car crash, we would want a crash algorithm that considers its different crash options and then makes a calculated decision on which one would cause the least unhappiness. Since all actions have consequences, sometimes very unforeseen ones, it is important that the crash algorithm take in as much information as (ethically) possible. In this way, as broad a picture of the scenario as possible can be formed, on which the decision will be based.

We therefore first investigate which information is actually available to the crash algorithm.

The first layer, non-personal observation, is defined as information that can be observed directly by the car’s sensors, with personal traits of the people involved excluded. Examples would be the overall harm that is done (in other words, the presumed fatality/injury of the different scenarios) or the adherence to traffic rules by the different people involved.

The second layer, personal observation, is defined as exclusively personal information that can be observed directly by the car’s sensors. Examples would be the approximate age of the people involved or the sex of said people.

The third layer, personal database information, is defined as information that could be acquired from a database if the people involved are identified. One could, for example, think of China’s social credit system currently in development. (Persson, 2015) Such a system could inform the crash algorithm with an immense amount of personal data, for example, the wealth, profession, social circle, or even ethical preference of the people involved.
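The three layers can be pictured as progressively more invasive data structures. The field names below are our own illustrative assumptions, chosen only to make the layering concrete:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class NonPersonalObservation:
    # Layer 1: direct sensor data, with personal traits excluded.
    presumed_fatalities: int
    presumed_injuries: int
    adheres_to_traffic_rules: bool

@dataclass
class PersonalObservation:
    # Layer 2: personal traits readable by the car's sensors.
    approximate_age: Optional[int] = None
    sex: Optional[str] = None

@dataclass
class DatabaseRecord:
    # Layer 3: identity-linked information from an external database.
    wealth: Optional[float] = None
    profession: Optional[str] = None
```

On the rule utilitarian view discussed next, only the first of these structures would be admissible input to the crash algorithm; layers 2 and 3 exist but must be ignored.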

If we desire the happiest outcome of a certain crash scenario, it would be wise to consider all available data, since this would lead to a more calculated decision. This is how an act utilitarian would generally act: act utilitarianism searches for the best possible result within each individual act. (Walter, 2011)

Rule utilitarianism, however, holds that 'the rightness or wrongness of a particular action is a function of the correctness of the rule of which it is an instance'. (Garner, 1967) In other words, not individual actions but the rules in place must lead to the greatest good. We therefore find a counter-argument to the act utilitarian practice of using as much data as possible, since the personal observation and personal database information layers discriminate between the people involved. Implementing rules that discriminate goes against the rule utilitarian belief that the rules should lead to the greatest good.

Interestingly, Bentham and Mill seem divided on the topic. While Mill is a rule utilitarian, he does distinguish between qualities of pleasure (Mill, 1861). The extra information that layers 2 and 3 provide could offer better insight into the quality of the pleasure the people involved enjoy, but the discriminatory nature of these layers would contradict rule utilitarianism. Bentham, by contrast, is an act utilitarian who would not distinguish between pleasures, even though layers 2 and 3 would allow him to do so.

If we compare the downsides of both utilitarian sub-theories, we see that act utilitarianism would lead to discriminatory rules being set in place, while rule utilitarianism would lead to slightly less informed, and therefore generally less happy, outcomes. Considering that self-driving car crashes would be rare, the downside of the rule utilitarian approach shrinks, and it becomes questionable whether having discriminatory rules in place for these rare occasions is a good trade-off. Furthermore, while the extra information of layers 2 and 3 can lead to a better decision in the moment, there is still a lot of uncertainty due to the immense number of butterfly-effect consequences.

We therefore find rule utilitarianism a more fitting ethical theory than act utilitarianism when considering car crashes. The utilitarian goal would, in our opinion, be best served by rules that do not discriminate, while still maximizing happiness as much as possible.

Kantian Theory

Kantian ethics is a deontology-based ethical theory built on universal moral rules or maxims, as stated by the German philosopher Immanuel Kant. According to Kantian theory, all decisions should be made according to a set of principles or maxims on which everyone agrees, people should act only on those maxims, and within those maxims it is not acceptable to use a person as a mere instrument (Gurney, 2016).

An example: an autonomous car with brake failure is driving toward a pedestrian crossing next to a concrete barrier, and there is no way nobody gets hurt. Either the pedestrians at the crossing or the driver will be harmed. According to Kantian ethics, a rule has to be made on where to crash that satisfies everyone and does not use people as objects or mere means.

  • Case 1 – If the car crashes into the concrete barrier, the driver might be injured or die.
  • Case 2 – If the car crashes into the pedestrians, they might be injured or die.

Both cases contradict Kantian ethics: no matter what, there is always a possibility that someone gets hurt, and Kant stated that intentionally killing or injuring a person is morally wrong. This makes applying Kantian ethics to a situation like this incredibly difficult; it would only work if there were an option that did not use people as objects or mere means, such as damaging the car or another object without inflicting harm on any person.

In his short story 'Runaround' (Asimov, 1950), Isaac Asimov introduced the Three Laws of Robotics, which are essentially universal rules for robots and a way to ensure that robots act in a way that is safe for humans.

The upside of this ethical theory is that whatever rule is set is the same for everyone, without exception. This ensures everyone is treated equally and nobody receives special privileges: every time the problem occurs, the same action is taken, and nobody is given an unjustified advantage. Also, Kantian ethics never allows killing a person, and would thus always choose to destroy an object, or at worst harm a person, if nothing else is possible.

The downside is that it will be very hard to create a universal law that takes into account all the goals and values of everyone who might be involved in a crash. Killing a person always violates that person's values and goals, and killing one person to save more lives (say, four people in the car) uses that single person as a mere means, thus violating the second formulation: it is not the person's intention to die when walking across the pedestrian crossing. Furthermore, all rules that an autonomous car should follow have to be programmed by a human programmer, and a programmer can never anticipate every situation that will ever occur. Artificial intelligence might become better at anticipating and avoiding these situations, but crashes will always happen in one way or another.

To conclude, Kantian ethics has many strong points, but its standards are so strict that they are (almost) impossible to achieve. Kantian ethics could be used as a general guideline, but a system based solely on this theory is not possible.

Reflection

A point of criticism we received was the lack of a clear distinction between the various groups of interest; in this case, the difference between users and drivers was unclear.

Looking at utilitarianism as well as Kantian theory, we see that both lead us to conclude that harm should be minimized without discrimination. While the ethical reasoning behind them differs slightly, the outcome of both is the same: both theories strive to avoid death first and foremost. Comparing this outcome to the intuitive one, we again agree that saving lives is most important. The ethical theories, however, do not directly account for people breaking traffic rules.

If we raise the ethical dilemma of one person recklessly jumping onto the street, resulting in either his death or the injury of dozens of people, we consider it unethical to harm all those people. Kantian theory would not allow killing a person, so rule utilitarianism fits our intuition best. Rule utilitarianism is therefore our preferred choice.

References

  1. White, Stuart (2015). 'Social Minimum'. In Zalta, Edward N. (ed.), The Stanford Encyclopedia of Philosophy (Winter 2015 ed.). Metaphysics Research Lab, Stanford University. Retrieved 3 October 2018.
  2. Persson, Michael; Vlaskamp, Marije; Obbema, Fokke (2015). 'China rates its own citizens – including online behaviour'. De Volkskrant. Retrieved from: https://www.volkskrant.nl/nieuws-achtergrond/china-rates-its-own-citizens-including-online-behaviour~b4c0ae0e/
  3. Mill, John Stuart (1861). Utilitarianism.
  4. Garner, Richard T.; Rosen, Bernard (1967). Moral Philosophy: A Systematic Introduction to Normative Ethics and Meta-ethics. New York: Macmillan. p. 70. ISBN 0-02-340580-5.
  5. Gurney, Jeffrey K. (2016). 'Crashing into the Unknown: An Examination of Crash-Optimization Algorithms Through the Two Lanes of Ethics and Law'. University of South Carolina School of Law.
  6. Asimov, Isaac (1950). I, Robot. Garden City, N.Y.: Doubleday.
