AI vs human intelligence
SPRAGGETT ON CHESS
We chess players are too often bombarded by confusing and often conflicting information from commercial producers of chess software (ChessBase, Rybka, Shredder, etc.) about what their products do and why every chessplayer cannot live without them. How many times every day do we find ourselves wondering about the futility of playing what is supposed to be an intelligent game (chess) when computers seem to play so much ‘smarter’ than we do? If we can’t beat them, then why not join them – is that it? It makes us feel insecure…
At times like this it is important to get back to the origin of AI and what intelligence really is. The following article is a nice read.
The Paradox of Artificial Intelligence
by Harry Fairhead
Monday, 16 May 2011
What do we mean by “intelligence” in practical terms? And once we adopt an operational definition, does it defeat the whole idea of “artificial intelligence”? The solution might be to realize that intelligence isn’t a property but a relationship.
There is a longstanding problem that people working on artificial intelligence have had to cope with. Whenever you create your latest amazing program that does something that previously only a human could do, the intelligence sort of melts away, as if it never was.
Look at the early days, when it seemed right to try to create artificial intelligence by writing programs that could, say, play chess. Obviously you have to be intelligent to play chess. It is a subtle game that involves thinking (whatever that is), planning and strategy. It is a game that needs human intelligence, and so a program that plays chess has to be intelligent.
Only of course, once you have built a program that solves the chess problem, you realise that it is nothing of the sort. It is clearly a collection of algorithms that seem to do the same job. It is often said that computers don’t play chess like humans, and the reason the intelligence vanishes is that there are non-intelligent ways of solving some problems that we solve using intelligence.
That is, there is a set of problems that, when approached using the wetware of the human brain, seem to embody the idea of intelligent thought. However, just because the human brain needs to tackle something in a way that you are happy to label “intelligence”, it doesn’t mean that this is the only way. Given the superior speed and accuracy of a digital computer, and given the different way that its memory works, you can solve the chess problem using nothing that looks like intelligence.
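To see just how mechanical that different way can be, here is a bare-bones sketch of the brute-force minimax search at the heart of classical chess engines. This is not any particular engine’s code: the game object and its methods are hypothetical placeholders, and real programs pile alpha-beta pruning, opening books and endgame tablebases on top. The point is the recipe itself: exhaustive enumeration plus a hand-coded scoring rule, nothing that resembles thought.

```python
# A minimal minimax sketch (illustrative only, not any real engine's code).
# The `game` object and its methods (is_over, evaluate, legal_moves, play)
# are hypothetical placeholders for a chess position.

def minimax(game, depth, maximizing):
    """Score a position by exhaustively searching `depth` plies ahead."""
    if depth == 0 or game.is_over():
        return game.evaluate()  # a hand-coded heuristic, not insight
    scores = [minimax(game.play(move), depth - 1, not maximizing)
              for move in game.legal_moves()]
    return max(scores) if maximizing else min(scores)

def best_move(game, depth=4):
    """Pick the move whose subtree scores best: pure enumeration."""
    return max(game.legal_moves(),
               key=lambda move: minimax(game.play(move), depth - 1, False))
```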
So some attempts at creating artificial intelligence do nothing of the sort. They simply find more appropriate ways of getting computers to solve the same problems that humans do.
It’s not so much artificial intelligence – more advanced computing.
This, of course, raises the question of whether there can be approaches that do work towards creating true artificial intelligence.
Some people think that the way something is done doesn’t make a great deal of difference. The fact that a computer can play chess or recognize a face is the important thing, and to enquire about the nature of the internal workings before ascribing intelligence is not sensible. After all, a human is a finite state machine and so can be emulated by a big state table, a very big state table. So where did that intelligence go?
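To make the “big state table” idea concrete, here is a toy machine whose entire behaviour is a single lookup table. The states and inputs are invented purely for illustration, and a table for a human would be astronomically larger, but the principle is the same: once tabulated, there is nothing left to point at and call intelligence.

```python
# A toy finite state machine as a bare lookup table (states and inputs
# invented for illustration). Every "behaviour" of the machine is one
# dictionary lookup: (state, input) -> (next state, output).

TABLE = {
    ("idle",    "greet"):  ("engaged", "Hello!"),
    ("engaged", "greet"):  ("engaged", "Hello again."),
    ("engaged", "insult"): ("idle",    "Goodbye."),
    ("idle",    "insult"): ("idle",    "..."),
}

state = "idle"
for event in ["greet", "greet", "insult"]:
    state, output = TABLE[(state, event)]
    print(output)  # Hello! / Hello again. / Goodbye.
```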
It is like trying to capture a butterfly – as soon as you pin it to a display board the (living) butterfly is no more.
One of the problems with not worrying about the way things work is that you end up with all sorts of uncomfortable conclusions. If you do adopt the idea that there is some way of working that captures “intelligence”, then you have to say what this way might be.
How is it different from digital computation?
You can’t just say that it is analog computation and that this is different, because it is obvious that a digital machine can simulate any analog machine, given enough resources. However you try to characterise that which is required to be intelligent, it seems that it can be reduced to a program and run on a digital computer. This means that it is a list of instructions that you can look at and understand and, well… it just doesn’t seem to be intelligent. Just as chess playing is reduced to searches and lookups, whatever you propose as the mechanism for intelligence is reducible to code, and hence the language of algorithms applies.
Some look to copying biological systems such as the brain, in the form of, say, neural networks. In this case it is often an appeal to the idea of emergent behaviour that keeps the “intelligence” alive.
Suppose you took a lot of artificial neurons, put them in a box, let the box interact with the world and after some time perhaps you would start to see behaviours that you hadn’t programmed. Perhaps you would see such sophisticated behaviours that you would be happy to say that the system had emergent intelligence. So at long last you have artificial intelligence in a box. The elusive quantity didn’t vanish the moment you completed your program.
But… suppose you now take the neural network and record its state. That state can once again be expressed as an algorithm. You can now produce a program without the hardware and without the training phase, and just use it.
Once again the whole thing is there for you to examine as a program. It is understandable, just like the chess program.
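Here is a sketch of that freezing step, assuming a tiny trained two-input network whose learned weights (the numbers are invented for illustration) have been recorded and pasted in as constants. No training code, no special hardware: what remains is an ordinary, inspectable function made of arithmetic.

```python
import math

# The "recorded state" of a hypothetical trained network: learned weights
# pasted in as plain constants (values invented for illustration).

W_HIDDEN = [(2.1, -1.3), (0.4, 1.8)]   # one (w1, w2) pair per hidden unit
B_HIDDEN = [0.5, -0.2]                 # hidden-unit biases
W_OUT = [1.7, -0.9]                    # output weights
B_OUT = 0.1                            # output bias

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def frozen_network(x1, x2):
    """The trained network replayed as a fixed list of arithmetic steps."""
    hidden = [sigmoid(x1 * w1 + x2 * w2 + b)
              for (w1, w2), b in zip(W_HIDDEN, B_HIDDEN)]
    return sigmoid(sum(h * w for h, w in zip(hidden, W_OUT)) + B_OUT)

print(frozen_network(1.0, 0.0))  # some number between 0 and 1
```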
So where did the intelligence just evaporate to?
The point is that we use the word “intelligence” incorrectly. We seem to think that it ascribes a quality to an entity: for example, we say that an entity has, or does not have, intelligence.
Well this isn’t quite the way that it works in an operational sense.
Intelligence isn’t a quality to be ascribed; it is a relationship between two entities. If the workings of entity B can be understood by entity A, then there is no way that entity A can ascribe intelligence to entity B.
On the other hand, if entity B is a mystery to entity A then it can reasonably be described as “intelligent”.
A has no idea how B works, so a conversation is enough for A to say that B is intelligent. However, suppose A is a computer scientist and discovers exactly how B works. Knowing how B works, A cannot assign “intelligence” to B, even though nothing else has changed. Intelligence is a relationship between A and B.
If you try this definition out then you can see that it starts to work.
Entity A is unlikely to understand itself; therefore it, and other entity As, ascribe intelligence to themselves and to their collective kind.
Now consider “Humans are intelligent” or “I am intelligent”.
True enough if spoken by a human, but if an advanced alien, entity C, arrives on the planet, then it might understand how we work. With this different relationship, C might not ascribe anything analogous to intelligence to us.
Intelligence is a statement of our ignorance of how something works
Looked at in this way, it becomes clear that any attempt to create artificial intelligence is partly doomed to failure. As soon as you have the program to do the job (play chess, say) you can’t ascribe intelligence to it any longer, because you understand it.
Another human, however, who doesn’t understand it might well call it intelligent.
In the future you might build a very sophisticated robot and bring it home to live with you. The day you bring it home you would understand how it works and regard it as just another household appliance. Over time you might slowly forget its workings and allow it to become a mystery to you, and with the mystery comes the relationship of intelligence. Of course, with the mystery also comes the inclination to ascribe feelings, personality and so on to this intelligent robot.
Humans have been doing this for many years: even simple machines are given characters, moods and imputed feelings. You don’t need to invoke intelligence for this sort of animism, but it helps.
Notice that this discussion says nothing about other, and arguably more important, characteristics of humans, such as being self-aware. That is not a relationship between two entities and is much more complex. Intelligence, by contrast, is something that doesn’t stand up to a close inspection of how it works.
So can AI ever achieve its goal?
If you accept that intelligence is a statement of a relationship between two entities, then only in a very limited way. If intelligence is a statement that I don’t understand a system, and if AI is all about understanding a system well enough to replicate it, then you can see the self-defeating nature of the enterprise.
It’s a nice (almost) paradox for a subject that loves such self-referential loops.
Harry Fairhead is the author of various books and articles on computer architectures. His current interests include heterogeneous systems and embedded web programming, mostly using PHP.