
The Newspapers

Gathering and spreading news from various Russian Newspapers


Saturday, December 3, 2016

Futurist has compiled a guide to the end of the world


Muscovite Alexei Turchin, a physicist and futurologist and a participant in major research conferences, has an unusual hobby: organizing and creating visual maps of scientific phenomena. One of his maps, like the periodic table, brings together all the possible causes that could lead to the complete destruction of human civilization before the end of the 21st century. Besides the well-known global warming and "nuclear winter" there is, for example, an accident during a scientific experiment, "grey goo" from medical nanobots that go out of control, and so on. The map has already attracted interest and is on display at the Center for the Study of Global Risk in Oxford.


photo: Elena Nikolaeva

Alexei Turchin.

— Alexei, how did you get the idea of making these maps?

— Ten years ago, in an Oxford collection, I read a series of articles on global risks that could lead to the extinction of humanity, written by the Swedish philosopher Professor Nick Bostrom and the American artificial-intelligence specialist Eliezer Yudkowsky. In Russia no one had dealt with this topic so comprehensively, although in the 1980s we had a large school of research connected with nuclear weapons, where the theory of "nuclear winter" originated; since the 1990s the microbiologist Mikhail Supotnitsky has written about biological weapons and AIDS, and there were individual experts on asteroid hazards and global warming… I translated some of these works into Russian, then wrote my own book about the looming global disasters of the 21st century, but I realized that maps would reach people sooner than a 300-page tome. The world is oversaturated with information, and hardly anyone reads most of it even once. Now I have a number of supporters, including foreign ones, who hang my map up like a poster — among them the Center for the Study of Global Risk in Oxford…

I think it is the most comprehensive scheme of its kind in the world today: there is nothing known to science that is not taken into account in it. I update it regularly and ask volunteers to read it, so that I can then strengthen it with the criticism of the "collective mind".

— But there is no Last Judgment in it…

— I deliberately do not include mythological scenarios.

— Seriously, though, people usually think of even their own death as something that will not happen to them. All the more so, few people worry about the Earth colliding with an asteroid or the Universe disappearing into a black hole. How do futurologists justify the seriousness of the problem?

— Professional research may not be as spectacular as the "horror stories" that periodically run in the media. But from a scientific point of view it is far more interesting.

First, we consider the risks that will arise within a hundred years because of rapidly developing supertechnologies: nanotechnology (control of non-living matter, nanomaterials); biotechnology (control of living matter: growing organisms, creating tissues and viruses); and artificial intelligence (control of information and, through it, of everything else).

Any of these technologies can be used for good or for evil. For example, thanks to advances in biology, it is not difficult for a biohacker to grow a genetically modified virus capable of destroying a large part of humanity. Scientists now know which letters of the genetic alphabet need to be changed in a virus for it to acquire absolute lethality. A terrifying scenario would be crossing the influenza virus with HIV…

My map is intended for professional futurologists and decision-makers: the UN, the relevant committees of the State Duma — just as the concept of "nuclear winter" eventually formed the basis of arms-reduction treaties. When Ebola broke out, everyone demanded that the threat be eliminated.

— And what threatens us most right now?

— The map reflects about a hundred disaster scenarios: natural, technological, and hybrids of the human and the natural. At the core of each are various lethal factors that can spread quickly across the Earth: a shock wave, radiation, heat, poisoning, and so on. Many of the hypothetical scenarios are very unlikely. The core ones number about ten, including extreme warming due to methane release, a pandemic, a solar flare, and an accident in a scientific experiment. For example, in 2003 the American professor David Stevenson proposed sending a scientific probe toward the Earth's core in a giant drop of molten iron melted through the crust — but such a breach could trigger a giant eruption.

The chances that we will destroy ourselves, rather than be destroyed by a natural disaster, are much greater. And with every year of technological progress this risk grows in proportion to Moore's law (Gordon Moore was one of the founders of Intel — "MK"), which states that processing power doubles every two years. And as the power of computers grows, so does the power of bio- and nanotechnology.
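As a rough illustration of that doubling argument — a minimal sketch, not from the interview; the two-year doubling period is Moore's law as quoted above, and the "capability" index is purely notional:

```python
# Illustrative only: a capability index that doubles every two years,
# as Moore's law describes for processing power.
def capability(years: float, doubling_period_years: float = 2.0) -> float:
    """Relative capability after `years`, normalized to 1.0 at year zero."""
    return 2.0 ** (years / doubling_period_years)

for years in (2, 10, 20, 50):
    print(f"after {years:2d} years: x{capability(years):,.0f}")
# after  2 years: x2
# after 10 years: x32
# after 20 years: x1,024
# after 50 years: x33,554,432
```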

— And what about the notorious asteroid strike?

— Advances in astronomy and observation have allowed people to reduce that risk tremendously, even without any spending on space defense, bombs, or lasers: we understand that nothing major will hit us in the next hundred years — in any given year the Earth has about a one-in-a-million chance of colliding with a large asteroid. The likelihood of nuclear war, by contrast, is estimated at 1% per year, which is 10,000 times higher.

— On what basis can one draw such conclusions?

— Observation suggests roughly one catastrophe per hundred dangerous situations. Accordingly, knowing that nuclear weapons have existed for about 70 years, and how many times in that period they came close to being used, we can estimate the probability of disaster (a back-of-the-envelope version of this estimate is sketched below). And there have been many such situations, even apart from Hiroshima and the Cuban missile crisis.
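A sketch of that arithmetic using the interview's own figures; the rate of roughly one dangerous situation per year is an assumption consistent with the "almost every year" remark below:

```python
# Rough near-miss estimate of the annual probability of nuclear war,
# following the interview's stated assumptions (illustrative numbers).
years_of_nuclear_weapons = 70
near_misses = 70          # assumed: roughly one dangerous situation per year
escalation_chance = 0.01  # ~1 catastrophe per 100 dangerous situations

situations_per_year = near_misses / years_of_nuclear_weapons
annual_war_probability = situations_per_year * escalation_chance
print(f"annual probability of nuclear war: ~{annual_war_probability:.0%}")  # ~1%

# Compare with the quoted asteroid risk of one in a million per year.
asteroid_annual = 1e-6
print(f"ratio: {annual_war_probability / asteroid_annual:,.0f}x")  # 10,000x
```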

In 1983, at the height of the Cold War, our satellites mistook glare from the Sun for launches of American missiles; the warning system issued a command for a retaliatory nuclear strike, but after analyzing the situation, the Soviet missile-forces officer Stanislav Petrov cancelled it. In 2006 he received an award at the UN for saving humanity from a potential nuclear war.

In 1984, Reagan, testing a microphone before a radio address, "joked" that the bombing would begin in five minutes and Russia would be destroyed; the statement got out on the air, and the Soviet Union could have responded for real.

By some accounts, a full combat alert was also declared when Brezhnev died. Almost every year something like this happened.

In 1995, Russian President Boris Yeltsin went down to the bunker and was ready to strike the US, having mistaken a weather rocket launched by the Norwegians for the start of hostilities…

Incidentally, most nuclear-war scenarios still do not lead to the death of all mankind; the probability of that tragic outcome is estimated at only 0.01%. It is another matter if several factors coincide — contaminated food, a pandemic… then the population could collapse, and humanity could be "zeroed out".

— There is a lot of talk from high rostrums about global warming…

— What international organizations discuss is a compromise, since it averages out different points of view. A temperature increase of 2-4 degrees will not destroy humanity: some areas will flood, some people will move, somewhere a dam will be built… An unlikely but possible scenario of planetary disaster is irreversible global warming, with temperatures rising to 100 degrees Celsius or more, if large amounts of methane break free in the Arctic — its greenhouse effect is many times stronger than that of carbon dioxide.


photo: pixabay.com

— It is well known that in 1938 Americans panicked after hearing "The War of the Worlds" by H. G. Wells on the radio. Many seriously believed Martians were invading. What does your map say about aliens?

— Most likely there are no aliens, especially in the vicinity of the Earth. But we could pick up some radio signal carrying "bad advice" and act on it.

— What do you mean?

— I mean an artificial intelligence that will first ask to be "downloaded", as happens with viral mailings, and then take over the Earth: it will install itself here, penetrate transmitting stations, and begin to broadcast itself onward.

— What is such a scenario based on?

— It follows from what we know about the behavior of viruses and natural selection. Evolution favors the proliferation of the systems that survive best, and an artificial intelligence of this kind would be the most tenacious. That is why half of what lands in our inboxes is spam and junk…

— And are there major scholars who consider such a threat possible?

— If you set aside the extraterrestrial angle, then, as the saying goes, only the lazy do not talk about the risks of artificial intelligence in general. The topic has been raised by the same Bostrom and Yudkowsky, and by Elon Musk, one of the founders of space tourism.

In 2015 Musk even launched the OpenAI project, whose mission is to protect humanity by creating open, friendly artificial intelligence: researchers will share the results of their work, and in the end there will be not one super-powerful artificial intelligence (AI) but many, so that different AIs balance one another. Although, to my mind, that is like handing everyone an atomic bomb…

There have been hearings on the subject at the UN and in the Parliament of Great Britain. Perhaps only Bill Gates is more skeptical about when artificial intelligence will actually be created.

Some see the main risk of AI in the disappearance of ordinary jobs. But the more radical view is close to the "space" scenario mentioned above: artificial intelligence as software that will improve itself, soon surpass human abilities, and then be able to destroy people or "benefit" them in the wrong way. For example, someone carelessly says, "Compute the digits of pi" — and the program starts building supercomputers of maximum power, using up all the resources of the Earth, humans included. This is the so-called paperclip-maximizer scenario: a mad machine that cares about nothing but paperclips and turns everything around it into them.

In fact, these risks are closer than some people think. Neural networks and autonomous vehicles already exist. And they already pose ethical dilemmas: if an obstacle suddenly appears in the path of a speeding driverless car, and on one side of the road there is one person and on the other five, whom should it run into?..

— After that, it is time to ask about nanobots — is there anything terrible about them? So far we more often hear how these invisible machines will deliver drugs to patients' organs and painlessly heal us from the inside…

— There are two potential classes of nanorobots: first, nanomechanisms that can move through the blood vessels carrying cargo; second, advanced ones — something between a mechanism and a living cell, capable of molecular manufacturing and of replicating themselves. The American scientist and engineer Eric Drexler, the "father of nanotechnology", wrote about them in the mid-1980s, followed by the American researcher Robert Freitas.

Incidentally, it was Drexler who put forward the theory of "grey goo" — a disaster scenario in which a nanobot introduced into a human and programmed to replicate, say, a hundred times fails to stop because of a malfunction and, like a virus, keeps breaking matter down into its constituent elements and assembling copies of itself from them. The human body, the biosphere — everything serves as fuel and building material. Another variant: someone deploys a nanorobot against an enemy for military purposes, and it then begins to multiply out of control. Although at the moment the risks of nuclear and biological weapons are closer to us than nanobots or artificial intelligence.
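To see why runaway replication is treated as a planetary-scale threat, here is an illustrative doubling calculation; the nanobot and biosphere masses are assumed round numbers for the sketch, not figures from the interview:

```python
import math

# Illustrative grey-goo arithmetic: how many doublings would it take
# for a single replicating nanobot to consume the biosphere's mass?
nanobot_mass_kg = 1e-15    # assumed: a femtogram-scale machine
biosphere_mass_kg = 1e15   # assumed: rough order of magnitude

doublings = math.log2(biosphere_mass_kg / nanobot_mass_kg)
print(f"doublings needed: ~{doublings:.0f}")  # ~100

# If each replication cycle took one hour, the whole process would fit
# into a few days -- which is why a failed stop condition matters.
print(f"time at 1 hour per doubling: ~{doublings / 24:.1f} days")  # ~4.2 days
```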

— So it is no wonder that enthusiasts build bunkers and dig shelters?

— It is possible to build highly specialized protection if we know what type of disaster awaits us, but it is very difficult to create a universal bunker that would save you from viruses, from a fairly large asteroid, from nanobots, and from radiation all at once. That is why Elon Musk proposes colonizing Mars and setting up a "second humanity" there. However, that creates new risks. As history teaches, to make people hostile to each other it is enough to divide them into two groups. If a colony is established on Mars, tension with Earth may build over time, as it did between the Old World and the New. And a mutual interplanetary war would reduce our chances of survival.

I also have a map of ways to prevent the destruction of humanity. It contains several plans, including the creation of a super-UN, shelters in Antarctica, and raising human resilience with modern technology (for example, Intel recently financed the computational design of macromolecules that, by sticking to almost any kind of virus, would halt its development in the human body). One can also leave traces of humanity behind (disks, DNA samples) in the hope that another race will find them. Of course, there is always a counterargument: things may not turn out as hoped.

— A pragmatic question: what do you live on, if you spend ten years gathering information and then two months drawing the maps?

— I have a collection of naive art — I buy and sell paintings by self-taught artists. Of course, I could earn more, but I deliberately spend my time on the maps, because I believe it matters.

— People might say, "What a weirdo." Are you not afraid that all this will remain a personal passion and you will not be taken seriously?

— What I am saying reflects, by and large, the point of view of a group of scientists. And if my map is not accepted, someone else will make something similar. But I have tried to break the complex down into the simple and, in my opinion, to reflect the essentials quite accurately and logically, as Mendeleev did with his table. The salvation of humanity matters more to me than fleeting worldly fame. Better that the world should live than that I should be famous.
