The Future of Artificial Intelligence

18 Jun 2015 00:49 by u/Zaeb

4 comments

I don't think this AI-Singularity narrative that seems to be getting so popular is actually possible. First, there can be no such thing as an "artificial general intelligence" - it has long been known that there are classes of logic problems for which no algorithm can ever solve arbitrary instances, such as the halting problem.
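To be concrete about the halting problem: the standard diagonalization argument can be sketched in a few lines of Python. (The names `make_paradox` and `always_says_loops` are my own for illustration - there is of course no real halting decider to import.)

```python
def make_paradox(halts):
    """Given any candidate halting-decider, build a program it misjudges."""
    def paradox():
        if halts(paradox):   # if the decider claims paradox halts...
            while True:      # ...loop forever, proving it wrong
                pass
        return "halted"      # otherwise halt immediately, also proving it wrong
    return paradox

# A decider that always answers "loops" is defeated: its paradox program halts.
def always_says_loops(program):
    return False

p = make_paradox(always_says_loops)
print(p())  # prints "halted" - contradicting the decider

# A decider that always answers "halts" is defeated too: its paradox
# program would loop forever (so we don't actually call it here).
```

Whatever the candidate decider answers about its own paradox program, the program does the opposite, so no total halting decider can exist.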

More importantly, no mind (whether human or artificial) can ever intentionally design an AI smarter than itself. If I could design an AI smarter than me - smarter in that its mental algorithms can solve some problem P while mine cannot - then:

  • I would have to be able to understand that AI's reasoning process, and why it works (if I just write code without understanding what it actually does, there's little hope of it working)
  • I would be able to follow that process through a debugger as the AI tackles P
  • I would thus be able to understand the solution of P and be convinced that it is correct

But that would mean I'm already capable of solving P myself - because I never really had to program the AI at all to do any of that; I could have just simulated it with my own brain plus a lot of pencil and paper (albeit much more slowly). Contradiction.

We can only design AIs to do things we can already do, just more quickly... and accelerated stupidity is not an existential threat.

0

I believe your argument is wrong. First, the halting problem and similar undecidable problems are irrelevant to the AI discussion. By general intelligence we usually mean intelligence that is like that of humans, i.e. general enough to solve a variety of problems. Humans can't solve the halting problem, so it doesn't matter if computers can't either.

As for your second argument: first, in many cases a speedup alone is enough to make a qualitative difference. Take programs that play chess. Clearly I could take the source code of the latest Fritz and its databases and play just as well as the program. However, I wouldn't be able to participate in any competition, because every move would take me years. Second, arguments using complexity classes like P are not really appropriate here, as P deals with decision problems. In AI we're more interested in heuristic solutions to optimization problems, and there it is rather simple to give algorithms that produce smarter versions of themselves. Evolutionary algorithms do this all the time.
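As a toy illustration of that last point (everything here is a sketch of my own, not any particular system): a minimal (1+1) evolutionary algorithm repeatedly mutates its current candidate and keeps the child whenever it is at least as fit, improving its own solution without anyone hand-designing that solution.

```python
import random

def one_max(bits):
    """Fitness: number of 1-bits; the optimum is all ones."""
    return sum(bits)

def evolve(n=20, generations=1000, seed=0):
    """Minimal (1+1) evolutionary algorithm on the OneMax problem:
    flip each bit with probability 1/n, keep the child if it's no worse."""
    rng = random.Random(seed)
    parent = [rng.randint(0, 1) for _ in range(n)]
    for _ in range(generations):
        child = [b ^ (rng.random() < 1.0 / n) for b in parent]
        if one_max(child) >= one_max(parent):
            parent = child
    return parent

best = evolve()
print(one_max(best))  # typically at or near the optimum of 20
```

OneMax is of course trivial, but nothing in the loop knows that: the same mutate-and-select skeleton works for any fitness function you plug in.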

1

This is a great post. It is a different perspective than I have considered. Thanks for sharing!

0

Well, I'm not going to sleep tonight.