
The Newspapers

Gathering and spreading news from various Russian newspapers


Sunday, November 12, 2017

Scientists foresee the imminent appearance of killer robots


Research into military applications of artificial intelligence should be restricted: more than a thousand scientists and inventors have addressed this appeal to the world. Their chief worry is the prospect of autonomous systems empowered to decide to kill without human intervention, a feature that fundamentally distinguishes them from all other weapon systems.

More than a thousand scientists, engineers and entrepreneurs from around the world have signed a letter calling for a ban on autonomous weapon systems with artificial intelligence. Among them are the renowned British theoretical physicist and astrophysicist Stephen Hawking, the American inventor and entrepreneur Elon Musk, Apple co-founder Steve Wozniak, the CEO of Google DeepMind Demis Hassabis, and the linguist Noam Chomsky.

“When a pilot or a commando is ordered to destroy a target, do the pilot and the commando also act as an intelligent weapon?”

The letter was made public at the international conference on artificial intelligence under way in Buenos Aires.

“The development of artificial intelligence technology has reached a point where such systems could be deployed on weapons platforms within the next few years. The stakes are high: autonomous weapons represent the third revolution in warfare after the invention of gunpowder and nuclear weapons,” the document says.

The authors of the letter do not call for a blanket ban on developing artificial intelligence technologies for the defense industry; in their view, however, these technologies must not be made autonomous or vested with independent decision-making.

“If the leading military powers continue to develop weapon systems with artificial intelligence, a global arms race will be inevitable. The endpoint is easy to predict: autonomous weapons will be as commonplace tomorrow as the Kalashnikov is today,” the document says.

Autonomous systems, in contrast to automated ones, operate without human intervention. According to experts, the creation of autonomous weapons is still a long way off. Nevertheless, even at the current level of technology scholars have voiced a number of concerns: the command to kill a person in the Middle East may be given by an officer sitting in an office in the United States, and a UAV operator's awareness of what he is doing may differ greatly from that of a soldier at the front. A separate problem is the possible use of unmarked drones in the interests of the security services.

Errors cannot be completely eliminated in either an autonomous or an automated system, but in the latter case it is at least possible to identify someone responsible for the consequences of a mistake.

“Unmanned reconnaissance and strike systems are automated systems. The questions of identifying targets and deciding to use weapons remain with a human,” Denis Fedutinov, an expert on unmanned aerial vehicles, told the newspaper VZGLYAD. “And you can find the specific person who took the decision and who is responsible in the case of an error. If we hand this over to an automatic system, there will be no one to hold personally accountable. I think that would be absolutely premature. At least for the foreseeable future, these functions should remain with humans.”

He stressed that UAV development is moving towards an ever higher share of automatic or automated tasks. “At present we are talking about automating the stages of take-off and landing, target detection, identification and tracking. In the future, the task of automatically engaging targets will be set as well, both for single aircraft and for groups acting together with other manned and unmanned aircraft. This should further shorten the ‘detection – engagement’ cycle and increase the effectiveness of the relevant systems. At the same time, we still see frequent errors in target identification, which often lead to civilian casualties. Such errors will probably diminish, but they will remain in the near future,” the expert said.
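The cycle Fedutinov describes can be pictured as a short processing loop in which every stage except the final firing decision is automated. Below is a minimal, purely illustrative sketch in Python; none of the names correspond to a real UAV control interface, and all classes and functions are assumptions made for explanation only.

```python
# Hypothetical sketch of the "detection - engagement" cycle:
# detection, identification and tracking are automated, while the
# decision to fire stays with a named, accountable human operator.

from dataclasses import dataclass
from typing import List


@dataclass
class Track:
    track_id: int
    label: str         # automatic identification result
    confidence: float  # identification confidence, 0..1


def detect_and_identify(frame) -> List[Track]:
    """Automated stages: detect objects in a sensor frame, identify and track them."""
    return []  # a real system would run detectors/classifiers here


def operator_authorizes(track: Track) -> bool:
    """Human-in-the-loop step: a specific operator approves or rejects engagement."""
    return False  # placeholder for an operator console decision


def mission_cycle(frame, weapon) -> None:
    # Shortening this loop is what "reducing the detection - engagement
    # cycle time" means; full autonomy would delete operator_authorizes().
    for track in detect_and_identify(frame):
        if operator_authorizes(track):
            weapon.engage(track.track_id)
```

The point of the sketch is that accountability hinges on a single call: as long as `operator_authorizes` exists, an error can be traced to a person; remove it, and no one is personally responsible.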

As Alexey Kornilov, a robotics expert and advisor to the Russian program “Robotics: Engineering and Technical Personnel of Russia,” told the newspaper VZGLYAD, the creation and use of such weapons has been under discussion for years. “But, in my opinion, the problem lies outside robotics,” the expert said.

Kornilov noted that at the moment there is no universally accepted definition of what artificial intelligence is. Experts in different fields therefore agree on working definitions suited to their own narrow areas.

Turning to weapons with artificial intelligence, the expert explained that “most often this means a system that can itself take the decision to destroy or damage a particular object.”

“What exists now does not reach (intellectually – VZGLYAD's note) even the level of insects such as bees, let alone a dog. But if we recall that the ancient Scythians, fighting the Persians, hurled beehives at the enemy, and that today we set a dog on a man on the assumption that he is a criminal, though he may not be, are intelligent weapons being used in those cases too?” he says.

Another example: when a pilot or a commando is ordered to destroy a target, do the pilot and the commando also act as an intelligent weapon?

“Technically it is very easy to mount a weapon on a chassis and make it remote-controlled. We can also give the system extra functionality: make it not merely remote-controlled but capable of several independent actions – driving from point A to point B and sending the operator a picture of what is happening along the way. If the operator sees something dangerous, he orders the system to open fire. The next step would be to give the machine the ability to search for a dangerous object itself. It tells the operator: look, I saw movement in this spot, I assume this object is dangerous and it would be better to destroy it. Then the operator gives the command to destroy. Finally, one can program into the machine a sequence of actions so that it determines the potential danger itself, without an operator, and opens fire,” the expert said.
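Kornilov's progression, from a remote-controlled chassis to a machine that opens fire on its own, amounts to a ladder of autonomy levels in which only the last rung removes the human decision. A hypothetical sketch of that ladder follows; the level names are illustrative labels of my own, not an established standard.

```python
from enum import IntEnum


class AutonomyLevel(IntEnum):
    """Kornilov's ladder, from pure remote control to full autonomy."""
    REMOTE_CONTROL  = 0  # operator drives and fires directly
    SELF_DRIVING    = 1  # drives from A to B, streams video; operator fires
    TARGET_PROPOSAL = 2  # flags "this object looks dangerous"; operator fires
    FULL_AUTONOMY   = 3  # judges danger and opens fire without an operator


def human_decides_to_fire(level: AutonomyLevel) -> bool:
    # Responsibility can be traced to a person at every level but the last.
    return level < AutonomyLevel.FULL_AUTONOMY
```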

He considers it misguided to speak of machines and robots as a threat to people. As with a dog, the responsibility lies with the person who gives it the command about whom to attack.

“This is not a function of artificial intelligence... One might just as well say a subway turnstile possesses it. It also has to ‘decide’ whether or not to let you through, given a number of circumstances – for example, whether your fare is paid. It is the same here,” said Kornilov.

Summing up, the expert said that the current state of science makes it technically possible to create a great variety of dangerous things. Yet technological development in itself does not create problems for mankind; it can only exacerbate contradictions that already exist. Blaming technology is foolish, he believes; the question is “not a technical one.”

Scientists voice concerns about the uncontrolled development of autonomous systems regularly. Two years ago, Christof Heyns, the UN Special Rapporteur on extrajudicial, summary or arbitrary executions, called for a universal moratorium on the production of lethal autonomous robotic systems (LARS).

He recommended urging countries “to introduce at the national level a moratorium on the production, assembly, transfer, acquisition, deployment and use of LARS” until international standards governing this type of weapon have been developed. For Heyns, the use of such robots “raises questions with far-reaching consequences for the protection of life in war and peace.”

At present, the Special Rapporteur said, no such legal framework exists, so it is unclear whether machines can be programmed “so that they will act in accordance with the rules of international humanitarian law,” particularly in distinguishing between combatants and civilians.

In addition, the expert noted, “it is impossible to develop any adequate system of legal responsibility” for the use of autonomous robots. “Whereas in the case of unmanned aerial vehicles a human decides when to open fire, in LARS the on-board computer decides whom to target,” he said.

In 2012, the human rights organization Human Rights Watch published a 50-page report titled “Losing Humanity: The Case Against Killer Robots,” warning of the dangers of creating fully autonomous weapons. The report, compiled by Human Rights Watch together with Harvard Law School, urged states to develop an international treaty that would completely ban the production and use of robotic weapons.

The human rights activists noted that fully autonomous weapons do not yet exist and that their adoption into service is still a long way off; however, the military in some countries, the USA for example, has already presented prototypes that mark a major step towards the creation of “killer machines.”

The report notes that the U.S. leads in this race, and that several other countries are involved as well, including China, Germany, Israel, South Korea, Russia and the UK.

According to many specialists, it will take countries 20 to 30 years to reach full autonomy in combat vehicles.

source
