What do you know about the technological singularity?




You’ve probably heard of the concept known as the “technological singularity” – a vaguely defined event expected to occur at some point in the not-too-distant future. The uncertainty hovering around this event has led to wild speculation, confusion, and outright denial. Let’s look at some of the pitfalls people run into when thinking about this “singularity”.

In a nutshell, the technological singularity is a term used to describe a theoretical point in time when artificial intelligence surpasses human intelligence. The term was popularized thanks to the efforts of the writer Vernor Vinge, but much of the credit for introducing it belongs to the mathematician John von Neumann, who spoke (as quoted by Stanislaw Ulam) of “ever accelerating progress of technology and changes in the mode of human life, which gives the appearance of approaching some essential singularity in the history of the race beyond which human affairs, as we know them, could not continue.”

By “could not continue,” von Neumann had in mind the potential loss of human control over technology. Today the usual candidate for that trigger is advanced artificial intelligence – or, to be precise, recursively self-improving artificial intelligence (RIAI), which is expected to lead to the emergence of artificial superintelligence (ASI).
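The logic behind RIAI is easy to illustrate with a toy model. The sketch below is purely hypothetical: it assumes a single numeric “capability” score and an improvement_factor parameter, both invented for illustration, with each generation improving itself in proportion to its current level – one common way the RIAI-to-ASI argument is framed.

```python
# Toy illustration of the recursive self-improvement argument.
# All numbers here are hypothetical; the point is only the compounding loop.

def recursive_self_improvement(initial_capability=1.0,
                               human_level=100.0,
                               improvement_factor=0.5,
                               max_generations=50):
    """Return the capability of each generation until it passes human_level."""
    capability = initial_capability
    history = [capability]
    for _ in range(max_generations):
        # The more capable the system, the larger the improvement it can
        # make to its successor - hence the runaway growth.
        capability += capability * improvement_factor
        history.append(capability)
        if capability > human_level:
            break
    return history

if __name__ == "__main__":
    for generation, level in enumerate(recursive_self_improvement()):
        print(f"generation {generation:2d}: capability {level:8.1f}")
```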

Since we cannot predict the nature and intentions of an artificial superintelligence, the technological singularity is often described as a kind of sociological event horizon – a concept open to broad interpretation, and therefore to widespread misunderstanding. Here are some of the most common misconceptions.
The singularity will never happen

I wouldn’t bet a penny on that claim. Moore’s Law shows no sign of grinding to a halt, and breakthroughs in artificial intelligence and brain mapping keep coming one after another. No insurmountable technical or fundamental obstacles appear to stand in our way.
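For context, Moore’s Law is usually summarized as a doubling of transistor counts roughly every two years. Here is a minimal back-of-the-envelope sketch of that exponential curve; the two-year doubling period and the example starting figure are assumptions for illustration, not precise data.

```python
# Back-of-the-envelope Moore's Law projection (assumed fixed doubling period).

def projected_transistors(start_count, start_year, target_year,
                          doubling_period_years=2.0):
    """Project a transistor count forward assuming a fixed doubling period."""
    doublings = (target_year - start_year) / doubling_period_years
    return start_count * 2 ** doublings

# Example: a chip with 1 billion transistors would, under this assumption,
# grow to roughly 32 billion transistors a decade later (5 doublings).
print(projected_transistors(1e9, 2014, 2024))  # ~3.2e10
```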

What many skeptics fail to grasp is that we have only just entered the era of artificial intelligence – a time when powerful but narrow systems are taking over domain after domain that once belonged to humans. These systems hold incredible potential, from an economic standpoint and from every other. Superintelligence will arrive whether people want it or not, and it will most likely be the product of mega-corporations and military programs.

In fact, this may be the most damaging misconception of all, because it leads people to deny the singularity outright. Alongside molecular nanotech weaponry, ASI is among the greatest threats facing humanity (especially if it falls into the wrong hands). The threat to our existence is not yet on the horizon, but if we are unprepared it will likely end in disaster. And mark my words: the time will come when the skeptics’ giggling and their rhetoric about the singularity never happening will look much the way climate-change denial looks to us today.
Artificial intelligence will be conscious

No. An ASI is unlikely to be conscious. We should expect these systems – and there will be many of them – to resemble Watson or Deep Blue. They will run at breakneck speed, grinding through billions of operations per second, but there will be nobody home.

Note that there is still some probability that an ASI will turn out to be genuinely sentient. It may even arrive at self-awareness of its own. But even if that happens, it will still be dramatically different from anything we know. A machine’s subjective experience will hardly be comparable to our own, and a Cartesian “I think, therefore I am” will not apply to it in any familiar sense.

This misconception is also closely tied to the first one. Some skeptics argue that the singularity will never happen because we can never simulate the complexity of human consciousness. But the argument misses the point: an ASI will be powerful, cunning, and dangerous without any conscious mind being present at all.
Artificial superintelligence will be friendly

Among singularity enthusiasts there is a popular bit of folk wisdom: as intelligence grows, so do empathy and goodwill. By that logic, as artificial intelligence becomes smarter and smarter, it will also become kinder and kinder.

No such luck. First, that line of argument assumes a degree of self-reflection and introspection on the part of the ASI (which, as argued above, it probably will not have). Second, it assumes an ethical imperative close to our own. But we cannot fully predict or comprehend the deliberations of a fundamentally alien machine intelligence, nor do we know what such a machine would count as a virtue. Moreover, if the intelligence is programmed with a specific set of priorities, it will pursue them relentlessly. As the artificial intelligence theorist Eliezer Yudkowsky put it: “The AI does not hate you, nor does it love you, but you are made out of atoms which it can use for something else.”
The singularity is just accelerating change

Ever since the term became popular, we have been offered different kinds of singularities – some of which have nothing to do with the emergence of ASI. There is an economic singularity and a “razor blade” singularity. Some equate the singularity with radical life extension, mind uploading, transhumanism, or the merging of man and machine. Part of the blame lies with the followers of Ray Kurzweil, who see the singularity as the steadily accelerating growth of all technologies, AI included, culminating in an uncontrolled explosion of discovery.
People will merge with machines

Some say we should not worry about the singularity because by the time it arrives we will already be tightly intertwined with our machines. We will be inside them. It will be a singularity for everyone, and no one will be left out.

The first problem with this theory is that human cyborgization and/or mind uploading are advancing at a much slower pace than AI (largely because of ethical constraints). The second problem is that the immediate source of RIAI will be extremely localized: a single system (or a few systems working in tandem) continuously improving itself toward a particular configuration – the so-called “hard takeoff” scenario. In other words, we will be outside the singularity, mere passers-by and onlookers.

Of course, an ASI might decide to merge with as many people as it pleases – but that is a rather grim scenario.
We will be as gods

If we survive the singularity, and if we assume we still have a place in a world governed by machines, we may indeed wield unprecedented powers – perhaps as a godlike hive mind. But that would be a collective achievement; as individuals it is hard to say what we will be capable of, or how much intelligence a single mind can absorb. The futurist Michael Anissimov believes that radically augmenting human consciousness will bring side effects:

“One example is madness. The human brain is an extremely fine and precisely tuned mechanism. Most changes to that mechanism lead to what we call ‘madness.’ There are many different kinds of insanity – far more than there are kinds of ordinary thinking. From the inside, insanity seems perfectly normal, so we would probably have a great deal of trouble convincing insane people that they are insane.”

Even assuming perfect sanity, side effects could include seizures, information overload, and perhaps extreme states of ego – alienation or radical self-absorption. Very intelligent people are already known to feel detached from the world around them, and the most intelligent of the augmented would likely strike us as “not people” at all.
After the singularity, things will not change much

Hardly. Think of the technological singularity as a “reset” button for everything we have, down to every molecule on Earth. If the laws of physics and theoretical computation permit something, the machines will likely implement it. We cannot imagine what lies beyond the singularity – a riddle that even science fiction struggles to solve.