Mar 10, 2010

Why could AI fail?

Posted by Alexei Turchin in category: robotics/AI

AI is our best hope for long-term survival. If we fail to create it, that will happen for some reason. Here I suggest a complete list of possible causes of such a failure, although I do not believe in them. (I was inspired by V. Vinge’s article “What If the Singularity Does Not Happen?”)

I think most of these points are wrong and AI will eventually be created.

Technical reasons:
1) Moore’s Law will stop for physical reasons before hardware powerful and inexpensive enough for artificial intelligence becomes available.
2) Silicon processors are less efficient than neurons for creating artificial intelligence.
3) The AI problem cannot be parallelized algorithmically, and as a result AI will be extremely slow.

Philosophy:
4) Human beings use some method of processing information that is essentially inaccessible to algorithmic computers, as Penrose believes. (But we could still harness this method through bioengineering.) In general, a final proof that artificial intelligence is impossible would be tantamount to recognizing the existence of the soul.
5) A system cannot create a system more complex than itself, so people cannot create artificial intelligence, since all the proposed solutions are too simple. That is, AI is possible in principle, but people are too stupid to build it. In fact, one reason for past failures in the creation of artificial intelligence is that people underestimated the complexity of the problem.
6) AI is impossible, because any sufficiently complex system discovers the meaninglessness of existence and halts.
7) All possible ways to optimize are exhausted. AI has no fundamental advantage over the human-machine interface and has only a limited scope of use.
8) An embodied human already has the maximum attainable level of common sense, and any disembodied AI is either ineffective or merely a model of a person.
9) AI is created, but there are no problems it could or should solve. All problems have either been solved by conventional methods or proven uncomputable.
10) AI is created, but is incapable of recursive self-optimization, since that would require some radically new ideas which never appear. As a result, AI exists either as a curiosity or in limited specialized applications, such as automatic drivers.
11) The idea of artificial intelligence is flawed, because it has no precise definition, or is even an oxymoron, like “artificial natural.” As a result, special-purpose systems or models of humans are developed, but not universal artificial intelligence.
12) There is an upper limit to the complexity of systems, beyond which they become chaotic and unstable, and it only slightly exceeds the intellect of the most intelligent people. AI slowly approaches this complexity threshold.
13) The bearer of intelligence is qualia. At our level of intelligence there must be many events that are indescribable and unknowable, but a superintellect should understand them by definition; otherwise it is not a superintellect, but simply a fast intellect.

Economic:
14) The growth of computer programs leads to an increase in the number of failures so spectacular that software automation has to be abandoned. This causes a drop in demand for powerful computers and stops Moore’s Law before it reaches its physical limits. The same growth in complexity and in the number of failures also makes the creation of AI difficult.
15) AI is possible, but gives no significant advantage over humans in quality of results, speed, or cost of computation. For example, a simulation of a human costs one billion dollars and has no idea how to self-optimize. Meanwhile, people find ways to boost their own intellectual abilities, for example by injecting stem-cell precursors of neurons, which further increases their competitive advantage.
16) Nobody works on the development of AI, because it is considered impossible; this becomes a self-fulfilling prophecy. AI is pursued only by cranks who lack both intellect and money, although a project on the scale of the Manhattan Project could solve the problem of AI if anyone undertook it.
17) The technology of uploading consciousness into a computer develops to the point where it suffices for all the practical purposes that had been associated with AI, and therefore there is no need to create an algorithmic AI. The uploading is done mechanically, through scanning, and still nobody understands what happens inside the brain.

Political:
18) AI systems are prohibited or severely restricted for ethical reasons, so that people can still feel themselves above all else. Perhaps specialized AI systems are allowed in military and aerospace applications.
19) AI is prohibited for safety reasons, as it represents too great a global risk.
20) AI has emerged and established its authority over the Earth, but does not show itself, except that it does not allow others to develop their own AI projects.
21) AI did not appear in the form that was imagined, and therefore nobody calls it AI (e.g., the distributed intelligence of social networks).


Comments (now closed)


  1. John Hunt says:

    Hi Alexi, I don’t think that you covered this but let me add one more.
    22) Intelligent civilizations consistently destroy themselves by some other technologic means before AI is achieved.

    I agree that all of the 21 items that you mentioned are not likely to turn out to be true except maybe for the political reasons. What do you think about the likelihood of my suggestion #22?

  2. Yes, you are completely right! 22 is quite possible and should be added to the political reasons. It could also be less drastic: civilization may simply degrade to a level where no supercomputers exist, because of war or a resource crisis.

  3. LAG says:

    Your initial statement, “I think most of these points are wrong and AI will eventually be created,” is a fair assertion, but it would be more convincing if you actually provided a few counter-arguments to the subsequent points. Otherwise, it’s simply a weak appeal to some vague authority, which is non-scientific.

    Personally, I think the key will be found in an understanding of the mechanism of emergence of intelligence from complex dynamic computing systems. As long as that remains elusive in biological systems (and it is), it will prove elusive in other computing systems.

  4. Alexei Turchin says:

    The main objection to all these arguments is that we already know a fairly straightforward (from a theoretical point of view) way to create human-level intelligence: scanning a human brain and simulating it on a computer. We also know how to strengthen the joint intelligence of groups of people by uniting them in research institutes, etc. In addition, we know that electronics operates about 10 million times faster than the human brain, because signals in axons propagate very slowly while signals in electrical wires travel at the speed of light. Thus, by scanning the brains of a few smart people and combining them into a productive network, we could run it at a rate millions of times higher than normal human speed and get in 30 seconds a result equivalent to a year of the group’s work.
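
    (A minimal sanity check of the arithmetic behind the “30 seconds for a year of work” figure, sketched in Python under the assumption of a flat million-fold speed-up; the exact factor is illustrative, not a claim from the post:)

    ```python
    SECONDS_PER_YEAR = 365 * 24 * 3600   # about 3.15e7 seconds in a year
    SPEEDUP = 1_000_000                  # "millions of times" faster, per the comment

    # Wall-clock time for the emulated group to finish one year of work
    wall_clock_seconds = SECONDS_PER_YEAR / SPEEDUP
    print(f"{wall_clock_seconds:.1f} s")  # ~31.5 s, roughly the quoted 30 seconds
    ```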

  5. LAG says:

    Alexei, that’s an interesting notion, that AI can be achieved by doing nothing more than replicating the physical structure of the brain. So I guess that your reductionist argument is that human conscience is nothing more than a manifestation of architecture? I thought that line of reasoning had been put to rest long ago.

  6. In fact, I do not yet know what human conscience is. Maybe we can get AI without conscience, or with pseudo-conscience: the AI will claim that it has a conscience, but in fact it will not.

  7. LAG says:

    I’m confused. How can presumably ‘true’ intelligence (artificial or organic) be claimed for any system that lacks an appreciation of right and wrong?

    By that, I don’t mean the system must accept any particular set of moral principles, but some set of principles is required in an intelligent being who will be expected to make moral decisions.

    And those sorts of decisions cannot be put off limits, else we’re still talking about a machine, complicated but still subject to external direction and hence unintelligent.