4.3 What computers can't do
So we've noted some of the apparently boundless applications of the digital computer and looked at two of its key abilities. But are there things that computers simply aren't capable of in principle? This is a much more difficult question than it appears at first sight. What I really mean to ask is this: are there things relevant to intelligent behaviour that computers, because of their very nature, simply can't do?
Consider this question for a few minutes. Do you think there are limitations on computers which mean that they are incapable of intelligence in principle? What might they be? Jot down a few notes about this.
It's tempting to offer quite facile answers such as 'a computer couldn't make a cup of tea'. But actually, if it were operating a suitable robot, making a cup of tea might just be the kind of thing a computer could do. I can't see why not. You might have wanted to say, 'well, a computer couldn't fall in love, or write a poem'. This may be true, but why not? Is it something to do with emotions? If so, what part do emotions play in intelligence? You might have thought that it's impossible for computers to be creative or respond flexibly to the unexpected. Again, maybe true; but why? Another common answer to this sort of question is that computers are programmed, they obey rules, and these rules are supplied by a programmer. 'The machine is only as intelligent as the program it's given' is the refrain. True. But does it matter? If a machine has an intelligent program, then is it important where this came from?
As you can see, the question is a perplexing one.
Artificial intelligence, and particularly its Symbolic AI strand, has suffered a number of powerful attacks. Two names stand out: those of John Searle and Hubert Dreyfus. It would take too much space to sum up the arguments of these two thinkers in detail, but here is a taster.
You may already have heard of Searle's 'Chinese Room' argument, presented in the paper 'Minds, brains, and programs' I quoted from earlier (Searle, 1980). For now, I just want to take four points from Searle's and Dreyfus's critiques and pose them as questions here.
- Meaning – I argued earlier that computers are interpreted automatic formal systems. They manipulate symbols that stand for things in the world. But the interpretation of these symbols comes from us, from an outside human interpreter. Within the computer, the symbols have a purely formal meaning. For humans, though, intelligence is all about meaning. For a computer, the token 'knife' is simply a pattern of bits, nothing more. But for me, 'knife' has countless meanings, associations and connotations. Moreover, these change according to the situation I'm in. 'Knife' has an entirely different meaning for me when I am standing in the kitchen with one, confronting a pot of jam and a slice of bread, than when I am in the bedroom confronting one in the hand of a jealous lover. How can computers act intelligently when the tokens they juggle have no meaning for them? Haugeland has called this 'the problem of original meaning'.
- Rules – Computers manipulate symbols according to rules. This is a good model of such activities as chess. But are all, or even most, intelligent activities just rule-following? What about medical diagnosis, mathematical problem solving, singing, holding a conversation, writing an Open University course? Can these be summed up in sets of rules?
- Representations – Computer systems depend on a model of the problem or situation they are tackling. This is easy enough in the case of a chess board, since all we have to represent are 64 squares and the positions of up to 32 pieces on them. But most real-world situations are very, very complex. Is it practically possible to represent these as a set of symbols? Can many real-world situations be represented in symbols at all?
- Intelligence – Is Symbolic AI dealing with too narrow a conception of intelligence anyway? In choosing activities such as chess and language manipulation as our paradigms of intelligence are we ignoring crucial features of intelligence? Were chess and other board games simply chosen as perfect examples of intelligence because they worked well on computers? After all, chess is not just an activity that is easy to model as a formal system – it is a formal system.
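The first and third of these points lend themselves to a small illustration. The sketch below (in Python, my own choice of notation, not anything from the course) makes both concrete: the token 'knife' is, to the machine, just a pattern of bits whose meaning is supplied entirely by us, while a chess position, unlike most real-world situations, can be captured completely in a handful of symbols.

```python
# A hedged sketch of two points from the list above.

# 1. Meaning: to the machine, the token 'knife' is only a pattern of bits.
token = "knife"
bits = [format(byte, "08b") for byte in token.encode("ascii")]
print(bits)  # raw bit patterns; any meaning they carry comes from us

# 2. Representations: a chess position really is easy to symbolise:
# 64 squares, up to 32 piece tokens, and nothing else.
empty = "."
board = (
    [list("rnbqkbnr"), ["p"] * 8]          # black's back rank and pawns
    + [[empty] * 8 for _ in range(4)]      # four empty ranks
    + [["P"] * 8, list("RNBQKBNR")]        # white's pawns and back rank
)
piece_count = sum(square != empty for rank in board for square in rank)
print(piece_count)  # 32 pieces in the starting position
```

The ease of the second half of the sketch is exactly the worry: a starting chess position fits in a few lines of symbols, whereas no comparably compact set of tokens captures a kitchen, a jealous lover, or a conversation.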
Note that even if every one of these doubts is well founded, these are not arguments against artificial intelligence as such, or even against Symbolic AI. They are arguments against strong artificial intelligence. Even if strong artificial intelligence is a doomed project, the construction of limited, but useful and practical, simulations of human intelligence on computers is still a worthwhile endeavour.