Aug 2, 2022
Should war robots have “license to kill”?
Posted by Raphael Ramos in categories: drones, ethics, robotics/AI
War is changing. As drones replace snipers, we must consider the ethics of autonomous weapons making life or death decisions.
By Natasha Vita-More.
As of 2019, has the idea of the technological singularity changed since the late 1990s?
As a theoretical concept, it has become more widely recognized; as a potential threat, it is written and talked about extensively. The field of narrow AI is growing, machine learning has found a home in academia, and entrepreneurs are investing in AI’s growth, so tech leaders have come to the table and voiced their concerns, most notably Bill Gates, Elon Musk, and the late Stephen Hawking. Existential risk has taken a central position in discussions about AI, and machine ethicists are building their arguments toward a consensus that near-future robots will force us to rethink the exponential advances in robotics and computer science. Here it is crucial for leaders in philosophy and ethics to address what an ethical machine means and what the true goal of machine ethics is.
Can the sum of knowledge and experience we’ve accumulated over a lifetime live on after we die? The concept of “mind-uploading” is a modern version of an age-old human dream. Transhumanism hopes to not only enhance human capacities but even transcend human limitations such as bodily death.
The main character of Oscar Wilde’s famous novel The Picture of Dorian Gray wishes for eternal youth. And his wish is fulfilled: Dorian Gray remains young and exquisitely beautiful, whereas his portrait grows old, bearing the burden of aging, human shortcomings and imperfections. As we know, the story ends badly for Dorian.
In our time, scientific discoveries and new technologies promise to bring us closer to his dream. And no deal with the Devil is needed: once we understand how to manipulate the building blocks of life, as well as the material foundations of our consciousness, emotions and character traits, so the story goes, we will be able to broaden human nature and overcome its inherent limitations, such as aging, suffering, and cognitive, emotional and moral shortcomings.
There may be some very compelling tools and platforms that promise fair and balanced AI, but tools and platforms alone won’t deliver ethical AI solutions, says Reid Blackman, who provides avenues to overcome thorny AI ethics issues in his upcoming book, Ethical Machines: Your Concise Guide to Totally Unbiased, Transparent and Respectful AI (Harvard Business Review Press). He provides ethics advice to developers working with AI because, in his own words, “tools are efficiently and effectively wielded when their users are equipped with the requisite knowledge, concepts, and training.” To that end, Blackman provides some of the insights development and IT teams need to have to deliver ethical AI.
Don’t worry about dredging up your Philosophy 101 class notes
Considering prevailing ethical and moral theories and applying them to AI work “is a terrible way to build ethically sound AI,” Blackman says. Instead, work collaboratively with teams on practical approaches. “What matters for the case at hand is what [your team members] think is an ethical risk that needs to be mitigated and then you can get to work collaboratively identifying and executing on risk-mitigation strategies.”
Axon has paused work on a project to build drones equipped with its Tasers. A majority of its artificial intelligence ethics board quit after the plan was announced last week.
Nine of the 12 members said in a resignation letter that, just a few weeks ago, the board voted 8–4 to recommend that Axon not move forward with a pilot study of a Taser-equipped drone concept. “In that limited conception, the Taser-equipped drone was to be used only in situations in which it might avoid a police officer using a firearm, thereby potentially saving a life,” the nine board members wrote. They noted that Axon might decline to follow the recommendation, and said they were working on a report about the measures the company should have in place if it did move forward.
The nine said they were blindsided by the company’s announcement last Thursday, nine days after 19 elementary school students and two teachers were killed in a mass shooting in Uvalde, Texas, that it was starting development of such a drone, with the stated aim of “incapacitating an active shooter in less than 60 seconds.” Axon said it “asked the board to re-engage and consider issuing further guidance and feedback on this capability.”
At DeepMind, we’re embarking on one of the greatest adventures in scientific history. Our mission is to solve intelligence, to advance science and benefit humanity.
To make this possible, we bring together scientists, designers, engineers, ethicists, and more, to research and build safe artificial intelligence systems that can help transform society for the better.
Adriano V. Autino presents his book “A greater world is possible!”
Mon, May 16 at 2 PM CDT.
Machine intelligence and artificial intelligence: how they may impact the future of humanity. A discussion with award-winning science fiction author Robert J. Sawyer.
The exponential growth in computing power, machine intelligence and artificial intelligence suggests that within a few decades intelligent machines will be more capable than we are. How will they interact with humanity, and what are the risks?
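To make the “few decades” intuition concrete, here is a minimal back-of-the-envelope sketch in Python. The two-year doubling period is an illustrative assumption, not a figure from the discussion:

# Back-of-the-envelope: compounding capability growth.
# ASSUMPTION: capability doubles every 2 years (illustrative only).
DOUBLING_PERIOD_YEARS = 2

def capability_multiplier(years: float) -> float:
    """Growth factor after `years` of sustained doubling."""
    return 2 ** (years / DOUBLING_PERIOD_YEARS)

for horizon in (10, 20, 30):
    print(f"{horizon} years -> ~{capability_multiplier(horizon):,.0f}x")
# Prints: 10 years -> ~32x, 20 years -> ~1,024x, 30 years -> ~32,768x

Under that assumption, thirty years of sustained doubling multiplies capability by roughly 32,000; this kind of compounding is what lies behind claims that machines could overtake human capability within decades.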
AI brings many benefits, but as with any rapidly advancing technology, it needs ethical frameworks that protect society, in particular children and young people.