Korean university boycotted over ‘killer robots’ program
More than 50 leading AI and robotics researchers have said they will boycott a prominent Korean research institute over its plans to develop AI-powered weapons.
In February, KAIST (the Korean Advanced Institute of Science and Technology) announced that it was launching a joint research project with defense company Hanwha Systems to develop AI technologies for lethal autonomous weapons that search for and eliminate targets without human control.
Professor Toby Walsh of the University of New South Wales, who organised the boycott, said: “We can see prototypes of autonomous weapons under development today by many nations including the US, China, Russia, and the UK. We are locked into an arms race that no one wants to happen. KAIST’s actions will only accelerate this arms race. We cannot tolerate this.”
However, the university has responded to the planned boycott by denying that it plans to build autonomous weapons: “As an academic institution, we value human rights and ethical standards to a very high degree. KAIST will not conduct any research activities counter to human dignity, including autonomous weapons lacking meaningful human control,” said its president Sung-Chul Shin.
Walsh hit back: “There are plenty of great things you can do with AI that save lives, including in a military context, but to openly declare the goal is to develop autonomous weapons and have a partner like this sparks huge concern. This is a very respected university partnering with a very ethically dubious partner that continues to violate international norms.”
Hanwha makes cluster munitions, which are banned in 120 countries under an international treaty that South Korea, Russia, and China have not signed. It is Korea’s main defence contractor and one of the country’s 10 largest companies.
South Korea already has autonomous turrets in place along the border with North Korea, but due to self-imposed ethical restrictions – or maybe just the desire to avoid kicking off an accidental war – they require human authorisation before they can fire.
UK firm BAE already has a drone that can technically act entirely autonomously, including in the use of lethal force. This and other examples have led some commentators to say that Pandora’s box is already open, because “telling an international arms trade that they can’t make killer robots is like telling soft-drinks manufacturers that they can’t make orangeade.”
Leading AI and robotics experts, including Elon Musk, have been trying to halt the development of lethal autonomous weapons for several years now via stopkillerrobots.org, and the UN has hosted talks on killer robots. But is it already too late? Would banning lethal autonomous weapons actually stop them? Boys will be boys, so maybe it’s about as likely as Americans meekly handing over their guns.