Thursday, January 8, 2009

Intelligence

Alan Turing, the father of computer science, proposed a test to determine whether an artificial system could be intelligent. He proposed a game in which a human interrogator asks questions of either another human or a machine, hidden out of sight (behind a brick wall or on the other side of the earth). If the interrogator is fooled 30% of the time, the system passes the test and ought to be called intelligent.

Now, this is a somewhat arbitrary line in the sand. Why 30%? How many humans must it fool? Moreover, why would we be so concerned about human intelligence? Can't we imagine other types of intelligence that would not be able to fool us? I don't think we should take too seriously the claim that this ought to be the working definition of intelligence; rather, it is a very useful illustration of what may be needed to build robots that behave in ways similar to humans. Turing believed that by the year 2000 such a machine would be a reality. It didn't turn out that way.

The Turing test did spark a lot of heated debate about whether we could ever build machines that think in the same way humans do. Most importantly, can we ever build robots that have a conscious mind? John Searle was the most vocal opponent of this idea. He invented interesting thought experiments which, according to his reasoning, make it clear that such a thing could not happen. He imagined a "Chinese Room" with a human in it who receives questions in Chinese. S/He would also have access to an enormous library with a (Chinese) answer to every conceivable (Chinese) question. Mechanically, the person would look up the answer, perhaps picking randomly from a large set of possible answers, and return it to the interrogator. Clearly this person has no understanding of Chinese, even though s/he would pass this variant of the Turing test.
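For concreteness, the room's procedure amounts to nothing more than table lookup. A toy sketch in Python (with a two-entry "library" invented here for illustration, standing in for the impossibly large real one) might look like this:

```python
import random

# A toy Chinese Room: the "library" maps each (Chinese) question to a set
# of canned (Chinese) answers. The entries are invented examples.
library = {
    "你好吗？": ["我很好。", "还不错。"],  # "How are you?" -> "I'm fine." / "Not bad."
    "你是机器吗？": ["当然不是！"],        # "Are you a machine?" -> "Of course not!"
}

def chinese_room(question):
    """Mechanically look up a stored answer, picking at random from the
    possible ones -- symbol manipulation without any understanding."""
    return random.choice(library[question])

print(chinese_room("你好吗？"))
```

The program answers fluently (for the questions it stores) while understanding nothing, which is exactly the intuition the thought experiment appeals to.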

One objection could be that it is really somewhat unrealistic to ask for a library with answers to every possible question; it would require more atoms to store than the universe has to offer. So perhaps we should restrict ourselves to systems that are limited in their capacity to store data. This seems like a workable refinement of the Turing test, but I am pretty sure someone else will come up with a system that beats the Turing test in yet another way, which would then require additional changes because "surely that system could not be really intelligent"!

In another famous thought experiment (the "brain prosthesis experiment" by Clark Glymour) one imagines replacing the neurons in one's brain one by one with electrical circuits. Each circuit models the input-output function of a neuron perfectly. By the time we are done, we have replaced the entire brain with a computer. We can do it again in reverse order to rebuild the human. Assuming for a moment that this can be technically achieved, would we then have altered the human mind in any way? Many people (Searle among them) believe this is the case. During the process, they believe, one will gradually lose all conscious experience of the world, yet one's behavior will remain unaltered.

How about this thought experiment then? In 3000 AD a smart scientist comes up with a really clever way to build an intelligent machine. S/He first assembles a long strand of nucleic acids, which s/he places in a small protein cover. The strand will copy itself and code for new proteins to be assembled. The proteins become a robotic body (made mostly of carbohydrates and water) with a brain. S/He then decides to raise this robot among her biological children. The robot has a normal life because it looks so much like a real human being. And some day, it starts referring to itself as "me". It looks in the mirror and understands who that person is. When the results are published, people cry out: "This is not a real robot; it was just a copy of a human being grown in a test tube!" So the scientist develops new techniques to replicate the exact same process, but now with synthetic molecules that have the same properties as carbon, hydrogen, iron, etc. People are still not satisfied, until the same procedure is repeated with electronic circuits. Now what? Intelligent machine or not?

For me these examples drive home the point that a mind has nothing to do with the substrate it is made from. It's about computation and information. And therefore, if our goal is creating human intelligence, then yes, we will be able to build intelligent machines that may have conscious minds, real emotions, and all the other good and bad things that make us human. These are not magical but functional properties. We evolved them for a purpose. They are fascinating, and utterly un-understood today, but perhaps not un-understandable or non-replicable. Our brain is extremely complex, propelled by principles we don't understand today. It therefore seems far too premature to claim that a conscious mind is something reserved uniquely for human beings.

5 comments:

  1. What of chemistry, Max? Will a 22nd century "conscious" computer mimic human biochemical phenomena? Will it feel depressed? Will it cry? Will it laugh at a well placed joke? Will it prove an amiable conversationalist? What opinion will it have of a child's first steps? Will it view a Van Gogh with pleasure? Will it compose a poem, a love letter? Will it know the joy of sex? Will it mourn? Will it fear death?

    JHM, PhD / Tokyo

  2. Yes, it all will. Those are all emotions, and they evolved for a reason. I am very much in awe that all of this can emerge from very complex computation and information processing. But once we accept that (and we have to, because we experience it) I don't see why it would not be possible on another substrate.

  3. Have you ever watched the movie 'The Island'? It shows a lot of the issues people would have to deal with if robots could think like humans. If robots have emotions, they may want to stop us from creating AI robots.

  4. Hi Max,
    I think the relation between intelligence and emotion is complex. On the one hand I think that thought came out of emotion, since emotion is a quick (unconscious) assessment that prepares for proper action. But here also lies the difficulty with your statement that it is all computation and information. I can imagine we can mimic these evaluations through calculation, but the essence of an emotion is that it prepares for action. It creates an action tendency. Information and computation have no direction, no purpose. So to start with computation is a bit awkward; it's like starting at the end. Dennett says the brain is a prediction machine. Fair enough, but predict to what end? Without purpose, calculation is meaningless. Calculate the best way to survive, to understand, to save the world, to end poverty. Do you think you can calculate purpose and meaning?

  5. Hi Hans,
    Thanks for your response. I appreciate what you say, yet I think goals, utilities, and especially preparing for action are at the core of modern approaches to AI. This is the approach of the "rational agent" that always acts optimally to achieve its goals, even in the long run. For that, it has to solve a complicated planning problem in an uncertain and unknown environment. Humans have for some reason developed emotions to help solve this problem. The goals themselves must have evolved through evolution. A minimal sketch of this idea follows below.

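    To make that concrete, here is a minimal sketch of the rational-agent idea in Python. The actions, probabilities, and utilities below are made up purely for illustration; the point is only that purpose enters the computation as an explicit utility function, and "acting rationally" reduces to picking the action with the highest expected utility.

    ```python
    # A minimal sketch of a rational agent: it has an explicit goal
    # (a utility function) and picks the action that maximizes expected
    # utility over uncertain outcomes. All numbers are invented.
    outcomes = {
        "explore":  [(0.6, 10.0), (0.4, -5.0)],  # (probability, utility)
        "stay_put": [(1.0,  1.0)],
        "retreat":  [(0.9,  0.0), (0.1,  2.0)],
    }

    def expected_utility(action):
        """Purpose enters here: utilities encode what the agent is for."""
        return sum(p * u for p, u in outcomes[action])

    def rational_choice():
        """Act to maximize expected utility -- the goal-directedness that
        the comment above argues raw computation lacks."""
        return max(outcomes, key=expected_utility)

    print(rational_choice())  # -> "explore" (EU = 4.0 vs. 1.0 and 0.2)
    ```

    Real agents face the much harder sequential version of this (planning over many steps in an unknown environment), but the structure is the same: computation in the service of an explicit goal.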