Artificial Intelligence and the Future of Warfare

Both military and commercial robots will in the future incorporate ‘artificial intelligence’ (AI) that could make them capable of undertaking tasks and missions on their own. In the military context, this gives rise to a debate over whether such robots should be allowed to execute these missions on their own, especially if there is a possibility that any human life could be at stake.

To better understand the issues at stake, this paper presents a framework explaining the current state of the art in AI, the strengths and weaknesses of the technology, and what the future is likely to hold. The framework demonstrates that while computers and AI can be superior to humans in some skill- and rule-based tasks, in situations that require judgment and knowledge, and in the presence of significant uncertainty, humans remain superior to computers.

In the complex discussion of whether and how the development of autonomous weapons should be controlled, the rapidly expanding commercial market for both air and ground autonomous systems must be given full consideration. Banning an autonomous technology for military use may not be practical, given that derivative or superior technologies could well be available in the commercial sector.

A metaphorical arms race is in progress in the commercial sphere of autonomous systems development, and this shift in R&D effort and expenditure from military to commercial settings is problematic. Military autonomous systems development has been slow and incremental at best, and pales in comparison with the advances made in commercial autonomous systems such as drones, and especially in driverless cars.

In a hotly competitive market for highly skilled roboticists and related engineers, the aerospace and defence sector, whose funding is far outmatched by that of the commercial automotive and information and communication sectors, is less appealing to the most able personnel. As a result, the global defence industry is falling behind its commercial counterparts in terms of technology innovation, and the gap is only widening as the best and brightest engineers move to the commercial sphere.

With regard to the role of AI in the future of warfare, the present large disparity between commercial and military R&D spending on autonomous systems could have a cascading effect on the types and quality of autonomy that are eventually incorporated into military systems. One critical issue in this regard is whether defence companies will have the capacity to develop and test safe and controllable autonomous systems, especially those that fire weapons.

Fielding nascent technologies without comprehensive testing could put both military personnel and civilians at undue risk. However, the rapid development of commercial autonomous systems could normalize the acceptance of autonomous systems for the military and the public, and this could encourage state militaries to fund the development of such systems at a level that better matches investment in manned systems.

By Mary ‘Missy’ L. Cummings
Director, Humans and Autonomy Laboratory, Duke University