Should machines augment human autonomy, rather than eliminate it?
The British Army has begun testing a new generation of “Autonomous Warriors” (AKA killer robots) on Salisbury Plain in the south of England. Billed as the biggest military robot exercise in UK history, it includes a battle between “autonomous warriors” (also called LAWS – lethal autonomous weapon systems) in a tough simulated operational environment, designed to test “long-range and precision targeting” and to “bring more guns to the fight without having to use more men”.
The exercise, known as the “Autonomous Warrior Exercise” (AWE), follows on from a previous “Unmanned Warrior” initiative and involves over 70 robotic systems and drones. Based on the government briefing and news reports, the hyperbole of the title “Autonomous Warrior” appears somewhat misleading, since many of the robots appear to be remote-controlled rather than truly autonomous in deciding who or what to kill. Indeed, the footage below shows robots helping and assisting rather than killing. As the initiative has its own Twitter feed, perhaps AWE is as much about PR, showcasing military prowess and securing funding for military AI, as anything else.
For most people, Lethal Autonomous Weapon Systems (LAWS) may seem (thankfully) rather disconnected from their everyday experience of digital wellbeing.
But there is a link, even if no killer robots or drones are out to hunt you down right now (but see the video below for more). The most influential model of human wellbeing, self-determination theory, holds that human autonomy is fundamental to wellbeing. The more technology undermines human autonomy, the worse the predicted impact on wellbeing.
So whilst you and your personal autonomy might not be at risk of being terminated by an autonomous warrior anytime soon, every time you cede autonomy to technology you may be giving up the very thing that makes you happy and human. This has implications for automation well beyond LAWS (and autonomous vehicles).
Beyond our research into AI ethics, and beyond military tech, do all of us working in technology have a responsibility to develop solutions that enhance rather than eliminate human autonomy?
And if you are concerned about the use of lethal autonomous weapon systems, check out autonomousweapons.org.