The US Army has been forced to clarify its intentions for killer robots after unveiling a new program to build AI-powered targeting systems last month.
The controversy surrounds the Advanced Targeting and Lethality Automated System (ATLAS). Created by the Department of Defense, it is a program to develop:
Autonomous target acquisition technology, that will be integrated with fire control technology, aimed at providing ground combat vehicles with the capability to acquire, identify, and engage targets at least 3X faster than the current manual process.
That text comes from the US Army, which has announced an industry day taking place next week to brief industry and academia on its progress so far, and to source new expertise.
To translate, ATLAS is a project to make ground robots that are capable of finding and shooting at targets more quickly than people can. This raises the spectre of lethal AI once again.
Ethicists and scientists are already hotly debating this issue. Some 2,400 scientists and other AI experts, including Elon Musk and DeepMind CEO Demis Hassabis, signed a pledge under the banner of the Boston-based Future of Life Institute protesting the development of lethal autonomous weapons.
The Army clearly realizes the controversial nature of the project, because it updated the industry day document last week to include new language:
All development and use of autonomous and semi-autonomous functions in weapon systems, including manned and unmanned platforms, remain subject to the guidelines in the Department of Defense (DoD) Directive 3000.09, which was updated in 2017.
Nothing in this notice should be understood to represent a change in DoD policy towards autonomy in weapon systems. All uses of machine learning and artificial intelligence in this program will be evaluated to ensure that they are consistent with DoD legal and ethical standards.
Directive 3000.09 is a 2012 DoD document outlining the policy associated with developing autonomous weapons. It says:
Semi-autonomous weapon systems that are onboard or integrated with unmanned platforms must be designed such that, in the event of degraded or lost communications, the system does not autonomously select and engage individual targets or specific target groups that have not been previously selected by an authorized human operator.
However, the policy also allows higher-ups to approve autonomous weapon systems that fall outside this scope under some conditions.
According to specialist publication Defense One, the US DoD is already fielding broader ethical guidelines for the adoption of AI across various military functions.
Meanwhile, tensions are high around the technology industry's engagement with the military. Google faced an employee revolt after signing on to a Pentagon AI project, Project Maven, to help automate the analysis of video and image footage. The company has since announced that it won't renew the Maven contract when it expires this year, and it also declined to bid on the DoD's massive JEDI cloud computing contract, arguing that the work might not align with the ethical AI principles it introduced last year.
Microsoft, on the other hand, continues to engage the DoD, announcing last October that it will sell the military AI technology in spite of protests from its own employees.