Human Intuition or Ethics Should Shape Moral Codes of Self-Driving Cars


Eventually – though not yet – self-driving cars will be better drivers than we are. When that time comes, a machine behind the wheel will drive more safely than any human can. It will not drink, it will not get distracted, and it will have better judgment thanks to its ability to network with other vehicles. Soon after, the gap between self-driving cars and human drivers will be so great that we will start debating whether it should remain legal for humans to drive at all. The fierce debates we now have about automated cars will shift their attention to human drivers; even if driving stays legal, you will be asked whether it is moral for you to drive, because the probability of causing an accident will be far greater if you drive yourself instead of handing the controls to a machine. Yet despite all their advantages, self-driving cars will face dilemmas that humans never had to deal with.

We humans never really have to think about the best course of action in an accident scenario, simply because, seconds away from a life-threatening crash, we do not have enough time to analyze potential actions; we rely on our reflexes, and perhaps a bit of luck, to reach the right decision. Besides, human drivers can be forgiven for making bad decisions in a split second and under immense pressure. Manufacturers of automated cars do not have the same luxury; they do have the time to get it right, and therefore bear more responsibility for bad outcomes. Yet even with all the time and resources, finding the right thing to do can be obscure and challenging. The decision between right and wrong depends on the ethical framework one uses to approach the problem. But humans are intrinsically different, and their value systems are not ordered in the same way; as a result, people often use different ethical frameworks to approach the same situation. If different ethical frameworks lead to different courses of action, the machines we design will inevitably be stuck in ethical dilemmas. In such cases, could we rely on the ethical judgment of machines alone, or will there still be a need for human intuition behind the wheel?


Trolley Problem

A good example that reveals the complexity of determining a moral code to dictate the right course of action for self-driving cars is the famous Trolley Problem. This thought experiment describes a train that is about to hit five people standing on the tracks. You are watching the incident, and next to you is a lever that can switch the direction of the train. If you pull the lever, the train will be diverted onto a side track and you will save the lives of the five people. The dilemma, though, is that there is also a single person standing on that side track. Would you pull the lever, killing the one person on the side track, to save the lives of the other five? The right course of action in this scenario is by no means obvious, because the right decision depends on the ethical approach you take.

One approach, utilitarian ethics, argues that whatever action saves the most lives or generates the greatest common good should be chosen, even if achieving it harms a smaller number of people. From a utilitarian perspective, one can simply look at the numbers and argue that it is better for five people to live than just one; the morally right act, the one resulting in the least harm, is to pull the lever. Deontological ethics, on the other hand, proposes that the morality of an action should be evaluated based only on whether the action itself is right or wrong, disregarding its consequences. Since the concern is the action itself, this approach focuses on the moral distinction between killing someone and letting someone die. Pulling the lever is a direct act that kills a person; had we done nothing, we would simply be letting people die. It is a subtle but significant distinction: it seems a lot worse to kill someone than to allow someone to die – especially if you bear no responsibility for the situation – because of the way human intuition works. A deontologist would therefore reject pulling the lever, arguing that it is morally wrong to perform any action that intentionally harms someone.
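
To make the contrast concrete, here is a minimal, purely illustrative sketch of how the two frameworks could be formalized as decision rules for the trolley choice. The Option class, its fields, and the helper functions are invented for this example; no real vehicle uses this code.

```python
from dataclasses import dataclass

@dataclass
class Option:
    name: str
    deaths: int                  # expected deaths if this option is chosen
    requires_direct_harm: bool   # does the agent actively redirect harm onto someone?

do_nothing = Option("let the train continue", deaths=5, requires_direct_harm=False)
pull_lever = Option("pull the lever", deaths=1, requires_direct_harm=True)

def utilitarian_choice(options):
    # Judge only outcomes: pick whatever minimizes expected deaths.
    return min(options, key=lambda o: o.deaths)

def deontological_choice(options):
    # Judge the act itself: refuse any option that requires directly harming
    # someone, defaulting to inaction if no permissible option remains.
    permissible = [o for o in options if not o.requires_direct_harm]
    return permissible[0] if permissible else do_nothing

print(utilitarian_choice([do_nothing, pull_lever]).name)    # -> pull the lever
print(deontological_choice([do_nothing, pull_lever]).name)  # -> let the train continue
```

The same inputs yield opposite recommendations, which is the whole point of the dilemma: the frameworks disagree not about the facts but about what counts in the evaluation.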

To see how this physiological study could be adapted to autonomous vehicles consider the following scenario. Suppose, a car is fast moving in an empty road, while suddenly one of the cars from upcoming traffic loses the control and drifts to your lane. There is not enough time to halt your car, the crash is inevitable. Your car can either go straight hitting the car along with its four passengers or swerve to left to crash into the motorcycle in the side lane. In situations like this, self-driving cars will need to make a moral assessment: killing all four passengers by doing nothing or killing the motorcyclist to save the lives of four. Another way to look at this problem is arguing multiple injuries are better than a single death. By designing crash optimization algorithms that would deliberately and systematically target larger vehicles as they would absorbs collisions easier, it could be possible to trade lives with injuries. [3] Then, this ethical approach would choose crashing into upcoming car, instead of the motorcyclist who is likely to die in an accident. But think about the future implications of this approach, algorithms would intentionally target motorcyclists with helmets over those without. Would it be ethical to penalize someone just because they have extra protection or simply because they have larger, thus safer cars? As said previously, we expect self-driving cars to come up with perfect answers in all situation, but how should they act if none of the actions are perfect – simply because all actions are equally right? This will be the vital question that engineers will have to answer in the near future.
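
The paragraph above mentions crash-optimization algorithms that weigh injuries against deaths. The sketch below illustrates, under heavily simplified assumptions, how such a purely utilitarian cost function might look; the CrashTarget class, the probability figures, and the weights are all hypothetical and are not taken from any real manufacturer's system.

```python
from dataclasses import dataclass

@dataclass
class CrashTarget:
    description: str
    occupants: int
    fatality_prob: float   # assumed probability an occupant dies on impact
    injury_prob: float     # assumed probability of a non-fatal injury

DEATH_WEIGHT = 100.0   # one expected death is counted as far worse than one injury
INJURY_WEIGHT = 1.0

def expected_harm(target: CrashTarget) -> float:
    # Purely utilitarian cost: weighted sum of expected deaths and injuries.
    deaths = target.occupants * target.fatality_prob
    injuries = target.occupants * target.injury_prob
    return DEATH_WEIGHT * deaths + INJURY_WEIGHT * injuries

oncoming_car = CrashTarget("oncoming car, four passengers", 4,
                           fatality_prob=0.05, injury_prob=0.6)
motorcyclist = CrashTarget("motorcyclist with helmet", 1,
                           fatality_prob=0.8, injury_prob=0.2)

# The optimizer picks the target with the lower expected harm.
print(min([oncoming_car, motorcyclist], key=expected_harm).description)
```

With these illustrative numbers the optimizer steers into the larger, better-protected vehicle, which is exactly the outcome the paragraph above questions: protection becomes a reason to be targeted.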

Would You Kill the Fat Man?

In no-win situations like the trolley problem, the moral act depends on the ethical framework one uses. A more accurate answer to the dilemmas above, however, requires a broader understanding of the factors that contribute to moral decision-making. In the Trolley Problem above, for the sake of simplicity, the potential approaches were grouped under two major schools of thought. But positioning utilitarian and deontological ethics as polar opposites on a spectrum, where people must choose one or the other – or somewhere in between – has a major shortcoming. It assumes that ethics is the only factor shaping the moral code of humans, which is a limited understanding and therefore proves inaccurate in identifying the appropriate moral act in real-life situations.

If ethical frameworks alone shaped moral codes, people would always adhere to the same moral beliefs and act accordingly in every situation. Yet people often approach similar situations with different views of the right action under slightly different circumstances. Consider a variation of the Trolley Problem called the Fat Man Problem. Imagine that you are once again watching the same train approaching the five people standing on the rails, unaware of the danger. This time, you are not near the switch lever, so it is not possible to redirect the train. Instead, you notice a very fat man near the rails and realize that if you push him onto the tracks, the impact of the collision might derail the train. Knowing that by doing so you could save five people, would you kill the fat man? Things get interesting, and complicated, at this point: most people who said they would pull the lever to kill the lone person in the previous Trolley experiment now say they would not kill the fat man. In numerical terms, however, the two scenarios are identical; you have the same option of saving five lives by forgoing one. Why, then, the sudden change of mind? The same people who took the utilitarian approach in the Trolley experiment, prioritizing the outcome, suddenly become more concerned with the action itself when it comes to the Fat Man experiment. When asked about this shift, people find it hard to give a logical explanation. This inexplicable change of mind reveals that there are other factors besides ethics that shape humans' moral codes. In this scenario, the thought of assaulting an innocent man with the direct intention of killing him was too much for most people.

Comparing the two thought experiments in terms of the required moral act shows that there is a moral distinction between intending someone's death and merely foreseeing it. In the Trolley experiment, people do not intend the death of the person on the side track; rather, they hope he will find a way to escape before the train reaches him. In the Fat Man scenario, people have the direct intention of killing the fat man; in fact, they need the train to hit him for the five people to live. To human intuition, intending someone's death seems far worse than merely foreseeing it. It is a subtle distinction, but significant enough that our intuition forces us to abandon the utilitarian approach we used in the first scenario and choose the deontological approach instead.

In light of this finding, a more thoughtful account of moral codes should also consider such moral distinctions, and human intuition is surprisingly adept at drawing them. Human intuition and ethical frameworks should therefore work hand in hand, the former complementing the latter. As engineers program a moral code into self-driving cars, they should consider human intuition alongside ethical guidelines. Even when different ethical frameworks make different acts sound equally right or plausible, people may have a preference as to which ethical approach is more suitable for the given circumstances. In situations where ethical frameworks contradict each other, human intuition could be the machine's moral guide. Just because machines have superior intellect does not mean we should completely erase the human mind from the driver's seat.

Ethics, Law and Need for Human Intuition

Another area that requires further consideration of human intuition is the wide field of discrepancies between ethics and law. In an ideal world, ethics and law would be aligned, but in the real world they often diverge; in some cases, good judgment can even compel us to act illegally. Jaywalking is illegal, but it would be far-fetched to call it an immoral act. What about a health emergency? A human driver might intuitively decide to exceed the posted speed limit if doing so could save a life. But automated cars, designed to follow rules and regulations, might refuse to follow the same intuition or reasoning and drive faster. Would it still be moral to follow the law if it means losing a life? Or wouldn't an emergency be a legitimate reason to stretch the law?

Similarly, if a tree branch falls on the road, a human driver can use judgment and drift into the left lane to drive around it. Automated cars, though, are programmed to obey the law and prevent crashes, so they cannot cross the double yellow line and must come to a full stop when facing an obstacle. Not only would this create more traffic jams, but if the sudden stop is unexpected, a human driver following behind might be caught off guard and crash. Any human driver, so used to simply driving around minor obstacles, would not expect the car in front to come to a complete stop. This discrepancy between human thinking and the way automated cars are programmed to think is a challenge that needs further consideration.

Currently, there is no established legal framework for autonomous cars, and this could be an opportunity to proactively acknowledge the discrepancies between ethics and law and to consider ethics, in the sense of moral judgment, as we design legal procedures. Programming a car to rigidly follow the law would be dangerous and foolishly inefficient. Therefore, besides the ethical guidelines that draw the boundaries for automated cars, we will also need human intuition to decide when it is morally right to stretch the law.
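
As a rough illustration of what "stretching the law" could mean in code, the sketch below treats the posted speed limit as the default and allows a bounded exception during a recognized emergency. The function name, the emergency flag, and the 20 percent margin are all invented assumptions for this example, not a proposal for real legislation or a real control system.

```python
def target_speed(posted_limit_kph: float,
                 medical_emergency: bool,
                 road_is_clear: bool) -> float:
    """Return the speed the vehicle should aim for, in km/h (illustrative only)."""
    if medical_emergency and road_is_clear:
        # Stretch, but do not abandon, the law: allow a limited margin above
        # the posted limit, mimicking what a reasonable human driver might do.
        return posted_limit_kph * 1.2
    # Default behaviour: obey the posted limit exactly.
    return posted_limit_kph

print(target_speed(50, medical_emergency=False, road_is_clear=True))  # 50.0
print(target_speed(50, medical_emergency=True,  road_is_clear=True))  # 60.0
```

The interesting design question is not the arithmetic but who decides when the exception applies and how wide the margin should be, which is precisely where human intuition and judgment enter.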

Secondly, since for the foreseeable future humans will share the roads with automated cars, machines and humans should share a common moral sense, or at least some form of human intuition, to ensure mutual understanding. If a car lacks the basic intuition of humans, it will only pose more challenges and create a more complex driving environment for human drivers. While the prevalent view among car manufacturers is that the human mind is vulnerable to distraction and intrinsically poor at analyzing the road ahead, striving to completely eliminate every trace of the human behind the wheel might create problems of its own. Human reasoning and intuition, and their role in shaping people's moral codes, may be a crucial part of any driving experience. Specifically, intuition and reasoning are important for deciding when it is moral to break the law and when it is not, something ethical frameworks alone cannot settle. An important step toward a common understanding between machines and humans is to program automated cars to think as much like a human as possible, which requires giving self-driving cars something that imitates human intuition. In the end, automated cars are designed to work with and for humans, and for the successful coexistence of machines and humans it is crucial that computers take the human perspective into account when making decisions, not just ethical frameworks.

Conclusion

Simply adding human intuition, or some form of human reasoning, to the code of automated cars will not solve all our problems, but it might be the first step toward a wider discussion of the moral acts required when automated cars face no-win situations. If anyone thinks the whole point of automated cars is to eliminate this kind of no-win accident, they are missing the point. Automation technology might be powerful and superior to humans, but incidents will inevitably happen for reasons beyond its control. If a tree branch suddenly falls in front of a car, however quickly the car notices and reacts, it is left with limited options. If it cannot brake in time, it might hit the branch; if it brakes too hard, it might imperil the lives of the passengers.

Drifting into the side lanes, it might hit other cars. Also, while the mass media often discusses automated cars as if they will suddenly replace human drivers, in reality it will take years, if it happens at all, before all people hand the task of driving over to machines. In the meantime, human drivers will have to operate alongside machines they do not understand and with which they share no common intuition; all this discrepancy will exacerbate the likelihood of accidents. We do not like thinking about uncomfortable and difficult choices involving ethical dilemmas, because there is no right answer. But eventually programmers will have to make a choice, and when they do, they should recognize that ethics by numbers alone seems naïve and incomplete; but so does an ethical approach that values only the act itself. If we want our ethical guidelines to suit the humans involved and to offer more acceptable responses to no-win situations, engineers should incorporate human intuition into the moral codes. Human intuition not only provides a means of finding a balance between two opposing ethical approaches; it also offers a middle way through the discrepancy between law and moral codes, and so contributes to the overall success of these systems in our lives. As the technology underlying fully self-driving cars fast approaches, keeping some form of the human mind and human intuition in the driver's seat might be the solution we need.
