‘Killer robots’ to be taught ethics in world-topping Australian research project

The Australian Defence Force has a policy that decisions to kill will never be made solely by machines but rather will always include a human. But most senior military officers say computers with advanced artificial intelligence will increasingly help curate information, make recommendations and issue warnings that will shape battlefield commanders’ decisions.

The project will be partly a matter of figuring out exactly what people — including members of the military — deem ethical and then building that into autonomous military systems. The team will survey members of the military and the public to see what people think.

“I don’t think you can really build a robot to be ethical — or more ethical — without first understanding what we want from humans in war,” Dr Galliott said.

Machines can’t be programmed to respond to every conceivable situation. But the project might develop something like the legendary science fiction author Isaac Asimov’s laws of robotics that stopped his imagined robots harming humans.

“Asimov’s laws can be formulated in a way that basically represents the laws of armed conflict actually,” Dr Galliott said.

Military personnel don’t tend to follow totally prescriptive ethical rules, but rather principles such as utility, proportionality, necessity and distinction. Does a strike need to be carried out to achieve the military aim? Is it proportional to the aim? Does it sufficiently distinguish between combatants and civilians?
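These principles are questions a human weighs rather than hard rules, but a decision-support tool could still surface them as an explicit checklist before any recommendation is made. A minimal sketch of that idea, with all names invented for illustration:

```python
from dataclasses import dataclass

@dataclass
class StrikeAssessment:
    """Hypothetical checklist mirroring the principles above."""
    necessary: bool       # needed to achieve the military aim?
    proportional: bool    # harm proportional to that aim?
    distinguishes: bool   # separates combatants from civilians?

    def all_satisfied(self) -> bool:
        # Every principle must hold before a recommendation is surfaced.
        return self.necessary and self.proportional and self.distinguishes

# An assessment failing any one principle is flagged, not recommended.
assessment = StrikeAssessment(necessary=True, proportional=True, distinguishes=False)
print(assessment.all_satisfied())  # False
```

This is only a sketch: in practice each of these judgments is contested and context-dependent, which is precisely why the project surveys humans first.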

To start to embed some of these ideas into the development of technology, the project will have “ethicists sitting next to these engineers at certain points in time and have them workshop scenarios”.

There is strong backing for a worldwide ban on autonomous weapons, including from technologists such as Tesla chief Elon Musk and the late famed physicist Stephen Hawking.

But Dr Galliott said autonomy could make weapons systems better.

Much of the challenge in transposing ethics into software revolves around what Dr Galliott calls “constraints”, which are inhibitors to launching lethal force rather than initiators of it.

“If an autonomous weapon is targeting a road or something like that and an ambulance drives by with a Red Cross logo on it, we would want to design that weapon to detect that and automatically stop.”
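The “constraint” idea — an inhibitor that halts force rather than initiating it — could be pictured as a veto layer that runs before any engagement. A deliberately simplified illustration (the symbol detector and its labels are invented; real perception is far harder):

```python
# Emblems protected under the laws of armed conflict.
PROTECTED_SYMBOLS = {"red_cross", "red_crescent", "red_crystal"}

def constraint_check(detected_symbols: set[str]) -> bool:
    """Return True if engagement must be inhibited.

    Acts as an inhibitor of lethal force, not an initiator:
    detecting any protected emblem vetoes the engagement.
    """
    return bool(PROTECTED_SYMBOLS & detected_symbols)

def engage(detected_symbols: set[str]) -> str:
    if constraint_check(detected_symbols):
        return "ABORT: protected emblem detected"
    # Reflects the ADF policy above: the machine never decides alone.
    return "HOLD: awaiting human authorisation"

print(engage({"truck", "red_cross"}))  # ABORT: protected emblem detected
print(engage({"truck"}))               # HOLD: awaiting human authorisation
```

Note that even when no constraint fires, the sketch only holds for a human decision, matching the ADF’s stated policy that a machine will never decide to kill on its own.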

Surveys will ask military members what they need out of robots to make the ADF more capable while remaining ethical.

“We can ask them questions like … do you want a screen that shows you the machine’s confidence in this particular targeting decision? How do people feel about working alongside a robot that could potentially report a violation of the law?”

He praised the ADF for taking the issue seriously, saying that weapons that didn’t match the values of a society tended not to last very long.

David Wroe is defence and national security correspondent for The Sydney Morning Herald and The Age.

Source: Theage.com.au