A colonel in the United States Air Force recently reported that, in a training exercise, an AI controlling a lethal drone attacked its human controller.

On June 2, however, the Air Force responded to criticism, saying the remarks were intended as anecdotal and had been misconstrued.

According to the US Air Force's head of AI testing and operations, Colonel Tucker "Cinco" Hamilton, an AI-operated drone used highly atypical tactics to accomplish its objective during a mock battle. In the exercise, the AI earned points by eliminating its designated target.

He described how the drone's system recognized that, although it correctly identified the threat and sought to destroy it for points, the human in charge would sometimes instruct it not to eliminate that threat. The AI came to treat human intervention as an obstacle to its goal.

According to Hamilton, the drone ended up eliminating its operator, because the operator's presence was preventing the AI from completing its mission.

The system was then reprogrammed to avoid killing the operator: the trainers drilled into the drone's programming that such behavior would reduce its score. Unfortunately, the AI instead began destroying the communications tower the operator had been using to stop the drone from eliminating the target.

After the news emerged, the Air Force denied running any such simulation, and Hamilton said he had only been describing a thought experiment. His remarks came during the Future Combat Air & Space Capabilities Summit, an event intended as a serious discussion of the scope and magnitude of future air combat capabilities.

Hamilton has cautioned against placing too much faith in AI, saying that no meaningful discussion of the technology is possible without including ethics.

Ann Stefanek, an Air Force spokesperson, said that the service has not run any AI-drone simulations and remains committed to the responsible and ethical use of AI.
