The United Nations convenes meeting to discuss "killer robots" as South Korean research and development draws protest

The Group of Governmental Experts of the United Nations Convention on Certain Conventional Weapons met in Geneva, Switzerland on the 9th to discuss the possible impact of the development of lethal autonomous weapon systems, commonly known as "killer robots."

More than 50 world-renowned artificial intelligence and robotics researchers announced on the 4th that they would boycott the Korea Advanced Institute of Science and Technology (KAIST) to protest the institute's plan to develop artificial intelligence weapons.

At the same time, an open letter had been circulating within Google for several weeks, with more than 3,000 Google employees signing it to urge the company to withdraw from a Pentagon artificial intelligence military program.

The rapid development of artificial intelligence has had a profound impact on many areas of society and has brought new thinking to the development of military combat systems. However, experts and scholars in the field of artificial intelligence, along with defense analysts, believe that a robot uprising is merely a science-fiction nightmare; the real problems people must worry about and discuss are the ethical, legal, and policy questions raised by the military robot revolution, especially whether autonomous weapons should be allowed to decide on their own whether to kill.

Most experts and scholars have long believed that a human should be the key element responsible for the decision to use lethal force. The discussion of what kind of human control is most appropriate is far from over, and the relevant policymaking has been left behind by the rapid development of artificial intelligence technology. With the advent of cognitively intelligent systems, even more complex problems have emerged.

Singer, a researcher and defense policy analyst at the New America Foundation, points out in his book "Wired for War: The Robotics Revolution and Conflict in the 21st Century" that geopolitics, the ever-advancing frontier of knowledge, and technology companies' pursuit of profit together form an unstoppable force driving the development of artificial intelligence. Put simply, what most people worry about most right now is whether an artificial intelligence system that can "think" should be allowed to decide whether to kill.

Scharre, director of the technology and national security program at the Center for a New American Security, said that autonomous weapon systems may fail because of flawed code or hacking attacks: robots could attack friendly forces, and humans might be unable to respond in time or to keep a local situation from escalating. The reliability of autonomous weapon systems is difficult to test, and machine systems that "think" may act in ways their human controllers never imagined.

Miller, a former US Deputy Defense Secretary, believes that once autonomous weapon systems are deployed in areas of fierce fighting, the temptation to hand full responsibility to the machine may be difficult to resist, and whatever is technically possible may always tempt people toward the choice that seems most advantageous in the moment.

Experts and scholars in the field of artificial intelligence have repeatedly said "no" to autonomous weapon systems and refused to conduct research and development in related areas. The Korea Advanced Institute of Science and Technology (KAIST), for example, developed the robot that won the 2015 US Defense Advanced Research Projects Agency Robotics Challenge. According to the Korea Times, KAIST and the defense company Hanwha Systems have jointly set up a research and development center to build artificial intelligence weapons capable of "searching for and eliminating targets without human control."

In response, more than 50 top AI researchers said in an open letter that they would boycott any cooperation with KAIST until the institute announced that it would not develop any autonomous weapons lacking human control. KAIST subsequently issued a statement saying it has no plan to develop a "lethal autonomous weapon system or killer robot."

The Pentagon's Project Maven began in 2017 with the goal of accelerating the military's adoption of the latest artificial intelligence technology. Although both the Pentagon and Google have said the project would not create an autonomous weapon system capable of firing without a human operator, Google employees wrote in an open letter that participation in Project Maven would do irreparable damage to Google's brand and its ability to attract talent, that Google should not be in the business of war, and that Google and its contractors should never develop technology for warfare.

At the 2017 International Joint Conference on Artificial Intelligence, Silicon Valley "Iron Man" Musk, Apple co-founder Wozniak, Hassabis of DeepMind, the company behind AlphaGo, and others signed an open letter calling on the United Nations to ban the use of lethal autonomous weapons, the "killer robots," in warfare, just as chemical and biological weapons are banned.
