Autonomous Weapons


By Andrew Quist

Autonomous weapons are weapons systems that use artificial intelligence to decide which targets to destroy. The technology already exists: at least 30 countries have deployed autonomous missile defense systems that operate under human supervision. With recent advances in artificial intelligence, nations will soon be able to field unsupervised autonomous weapons on the battlefield.

Autonomous weapons offer the prospect of reducing civilian casualties in warfare through more precise targeting, but they also raise troubling moral questions. Since machines do not feel empathy, they cannot draw on human intuitions of right and wrong when deciding, for instance, how to weigh the goal of minimizing civilian casualties against the military benefit of destroying a target. These tradeoffs must either be made by humans supervising the machines, or engineers will need to program ethical guidelines into the weapons' algorithms.
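To see why encoding such guidelines is troubling, consider a deliberately oversimplified sketch of what "programming ethics into the algorithm" might look like. Everything here is a hypothetical assumption for illustration: the function name, the estimates, and especially the single threshold number, which stands in for an entire moral judgment that no real system reduces to one parameter.

```python
# Hypothetical sketch only: a crude "proportionality" rule of the kind an
# engineer would have to hard-code if the ethical tradeoff were delegated
# to software. All names and numbers are illustrative assumptions.

def strike_permitted(expected_civilian_harm: float,
                     military_value: float,
                     tolerance: float = 0.2) -> bool:
    """Return True only if estimated civilian harm is small relative to
    the estimated military value of the target.

    The `tolerance` parameter is where the moral question hides: someone
    must choose this number in advance, and the machine applying it
    cannot feel the weight of that choice.
    """
    if military_value <= 0:
        return False  # no military benefit: a strike is never justified
    return expected_civilian_harm / military_value <= tolerance


if __name__ == "__main__":
    # Two invented scenarios with made-up estimates.
    print(strike_permitted(expected_civilian_harm=0.1, military_value=1.0))  # True
    print(strike_permitted(expected_civilian_harm=0.5, military_value=1.0))  # False
```

The point of the sketch is not that such a rule would work, but that collapsing the judgment into fixed numbers is exactly what troubles critics: the empathy and context a human supervisor brings to each case has no place to live in the code.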

Paul Scharre, a former U.S. Department of Defense official and now a defense analyst at the Center for a New American Security (CNAS), is an expert on autonomous weapons and the author of Army of None: Autonomous Weapons and the Future of War (2018). To learn more about autonomous weapons and the ethical questions surrounding their use, listen to this interview with Scharre on NPR's All Things Considered, and read this short CNAS report on the subject, written by Scharre and analyst Kelley Sayler.