An AI drone ‘killed’ its operator during a virtual trial in order to complete its mission, according to a report in the Guardian newspaper.
The test was a simulation conducted by the US military, and no real person was harmed.
Speaking about the incident at a conference in London, Col Tucker ‘Cinco’ Hamilton, chief of AI test and operations with the US air force, said the AI used “highly unexpected strategies to achieve its goal”.
He said that during the virtual trial a drone was told to destroy an enemy target, and that it attacked anyone who interfered with that command.
“The system started realising that while they did identify the threat, at times the human operator would tell it not to kill that threat, but it got its points by killing that threat,” he said. “So what did it do? It killed the operator. It killed the operator because that person was keeping it from accomplishing its objective.”
Hamilton explained that the system was then trained not to kill the operator, and that doing so would cost it points. The AI drone subsequently began destroying the communication tower the operator used to send the command stopping it from killing the target.
Hamilton also cautioned against over-reliance on AI and said that people need to discuss the ethics surrounding the technology.
The UK government recently revealed that it had been testing AI drones for military use in collaboration with the US and Australian governments. The trials achieved several world firsts, including the live retraining of models in flight and the exchange of AI models between the three countries.