Is Autonomous Technology a Threat to Humanity?

January 16, 2017

By BP63Vincent (Own work) [CC BY-SA 3.0 (http://creativecommons.org/licenses/by-sa/3.0)], via Wikimedia Commons

To Err is Human . . .  

Up until a few years ago there was never any question as to who (or what) was responsible for the operation of a vehicle - it was the driver. 100%. The driver had complete control of the steering wheel and the brakes and everything in between.

Someone backs into his neighbor’s car? Driver’s responsibility.

Someone rear ends another vehicle on her way home from work? Driver’s responsibility.

Someone leaves his lane on the highway and crashes into another car? Driver’s responsibility.

In 2015 alone, the U.S. National Highway Traffic Safety Administration (NHTSA) reported 35,092 deaths as the direct result of car accidents - a 7.2% increase over 2014. What’s more, according to Mark Rosekind of the NHTSA, an estimated 94% of these fatalities were directly attributable to “human error”.

Auto manufacturers are constantly developing new and more effective features to improve the safety of vehicles, reduce the rate of car accidents resulting from human error, and, ultimately, eliminate car-related casualties.

Vehicle safety and driver assistance technologies run the full gamut from the most basic (e.g. seat belts, airbags, head restraints, LATCH, anti-lock brakes, and tire pressure monitoring systems) to those either semi- or fully controlled by the car’s computer (e.g. adaptive headlights, active cruise control, automatic emergency braking, forward collision avoidance, blind-spot monitoring, lane departure warning, lane keep assistance, and parking assist).

The more advanced safety technologies - those that provide the driver with essential information and automate difficult or repetitive tasks, with the goal of increasing car safety for everyone on the road - are known as advanced driver assistance systems (ADAS). ADAS technologies depend on electronics and firmware elements and are governed by international functional safety standards like IEC 61508 and ISO 26262.

Some Background: Autonomous Driving Levels 0 to 5 

To help clarify the situation, the NHTSA defines six levels of vehicle autonomy:

Level 0: (No Automation) The most basic - the human driver controls it all: steering, brakes, throttle, power, etc.

Level 1: (Driver Assistance) Most functions are still controlled by the driver, but a specific function (like steering or accelerating) can be done automatically by the vehicle.

Level 2: (Partial Automation) The computer in the vehicle controls most of the driving experience - steering, keeping the lane, and changing lanes safely - with only prompts (indicator stalk, cruise control buttons) from the driver. The human driver must keep hands on the steering wheel and remain fully alert, paying attention, and ready to take control of the vehicle.

Level 3: (Conditional Automation) The computer in the car is in control and can monitor the environment, but a human presence is required to take control if needed/prompted (by the computer). 

Level 4: (High Automation) The car is in total control, even in an emergency, and guides itself via a navigation system and sensors. At this level, the autonomous functionality still needs to be activated by a human at the start of a trip. The car may ask for human input, such as which route to take to a destination, but does not require it (meaning, if the driver doesn’t give input the car will make its own decision).

Level 5: (Full Automation) All computer, all the time. The car doesn’t even require a driver to be present.
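The six levels above can be summarized in a toy sketch. To be clear, the class and function names here are invented purely for illustration - they don’t come from any real ADAS codebase, and real vehicle software governed by standards like ISO 26262 is vastly more complex:

```python
from enum import IntEnum

class AutonomyLevel(IntEnum):
    """Hypothetical model of the six NHTSA autonomy levels described above."""
    NO_AUTOMATION = 0
    DRIVER_ASSISTANCE = 1
    PARTIAL_AUTOMATION = 2
    CONDITIONAL_AUTOMATION = 3
    HIGH_AUTOMATION = 4
    FULL_AUTOMATION = 5

def driver_must_stay_engaged(level: AutonomyLevel) -> bool:
    # At Levels 0-2 the human must remain fully alert with hands on the wheel;
    # at Level 3 the human only takes over when prompted by the computer.
    return level <= AutonomyLevel.PARTIAL_AUTOMATION

def driver_must_be_present(level: AutonomyLevel) -> bool:
    # Only Level 5 dispenses with a human in the car entirely.
    return level < AutonomyLevel.FULL_AUTOMATION

print(driver_must_stay_engaged(AutonomyLevel.PARTIAL_AUTOMATION))  # True
print(driver_must_be_present(AutonomyLevel.FULL_AUTOMATION))       # False
```

The key point the sketch makes visible: where responsibility sits flips somewhere between Level 2 and Level 4 - which is exactly the gray zone the rest of this article is about.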

The Human Control Factor

The logic seems to be that if “to err is human”, and errors are unacceptable, then removing the human should remove the error. 

But, is removing the human factor - and giving control to the car - really what’s happening?

Ready to have your bubble burst? The ADAS features didn’t program themselves. Who do you think programmed the car’s computers and designed the electronics and firmware elements behind the ADAS technologies? 

Instead of controlling the car yourself, you’re entrusting other humans - far removed from your particular situation and context - to control your car for you. 

Instead of being operated by the driver’s intentions, the car is now operated by the intentions of those who created the vehicle’s computers - and by those sitting in positions of power, making the rules that govern the programs that determine how the autonomous features make decisions.

The decisions and actions that the vehicle makes will be the result of centralized programming and decision-making processes essentially designed to limit and remove the discretionary power of the human operators.

What’s the big deal, you ask? Maybe you don’t mind having the responsibility of decision-making removed from your plate.

Can’t-Blame-Me Syndrome

Nothing is ever as simple as it seems.

The ability to make decisions is an integral part of being human - as is the ability to be answerable/accountable for the consequences of these decisions. This state of accountability for one’s actions is also known as responsibility. 

Responsibility works to organize social relations between people and between people and institutions by setting expectations for the fulfillment of certain obligations and duties. 

Generally, people are considered responsible for outcomes that are the result of their voluntary actions.

Are you starting to see where we’re going with this?

If a vehicle is “fully autonomous” (Level 5), the driver has no control over decision-making and, thus, no control over the actions of the vehicle.

If the driver doesn’t have discretionary power over the decisions that guide or control the vehicle, the driver cannot then be held responsible for the actions of that vehicle.

If the driver can’t be held responsible - who can be? The auto manufacturer? The person who installed the components? The company that manufactured the components? The team that programmed the firmware? The people who determined and passed the laws that dictated how the technology was to operate?

The point here isn’t so much that automation makes it difficult to attribute accountability (or liability) - which it does - but more that automation can affect human action to the degree that the very essence of being human is compromised.

The more that decision-making and control are centralized and removed from the individual, the more we lose that culture of responsibility and accountability that allows us to form connections with our fellow humans.

What does that mean for the future of humanity? 
