The Australian Defence Force has invested more than $5 million in researching the possibilities of artificially intelligent weaponry in an effort to design ethical killing machines.

Key points:

  • The ADF invests millions into researching how drones and other weapons could make decisions on the battlefield
  • Despite the challenges, the lead researcher says it could make wars more ethical
  • It will be the largest ever investment in AI ethics, according to the UNSW

If armed forces used lethal AI weapons, the decision to kill would fundamentally shift from the hands of soldiers into those of designers and engineers.

According to UNSW Canberra, a partner in the six-year project, it is the largest-ever investment in AI ethics.

Lead researcher Dr Jai Galliot said the project would investigate the current values of the people who, in the future, could be deciding when a machine kills.

He said it would be up to designers to guide the military on using killer AI.

“These technical designers do need to realise that in some scenarios these weapons will be deployed and the sense of ethics and legality is going to come from them,” he said.

Armed forces already use limited “automatic” weaponry such as the Phalanx gun on Australian Navy destroyers, which automatically detects and fires upon incoming hostile objects.