1.3 Representation and thought
It would be surprising if the meaning of our utterances turned out not to derive, in part at least, from the thoughts and other mental states that these utterances express. Were that so, language would be failing in one of its main functions. Ordinarily, an utterance of the sentence, ‘The German economy is bouncing back’, is intended to express the thought that the German economy is bouncing back, typically so that the audience will come to adopt this same thought. It is hard to see how this could be so unless the meaning of the utterance derived, in part at least, from the representational properties – the ‘content’ as it is often put – of the thoughts and other mental states of the speaker.
Understanding the nature of mental content is taken by many to be equivalent to understanding how – presumably by virtue of possessing a brain, a complex physical organ – humans are able to think about the world around them. How can a state of the brain be about the world outside the skull of the person whose brain it is? This is the mental equivalent of Locke's question about language, and equally daunting.
Recent developments in theories of human cognition have added impetus to the search for an answer to this question. Many philosophers and cognitive scientists have been impressed by the explanatory benefits of claiming that mental activity in humans is akin to the operations of a computer. Crudely put, computers operate by transforming symbols within them in a blindingly fast but rule-governed manner. According to advocates of the computational theory of mind, the same is true of us. On most versions of the theory, for a human being to be in a particular mental state is for their brain to contain symbols of a kind of brain language, ‘Mentalese’ as it is usually called. The alleged attractions of thinking of the human brain as populated by symbols of a language come to this: the computational theory of mind promises to explain how rationality is possible in a purely physical entity, as a living human body is assumed to be. Such an explanation has been a dream for many philosophers at least since Hobbes.
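The idea of ‘blindingly fast but rule-governed’ symbol manipulation can be made concrete with a small sketch. The following Python fragment is purely illustrative, not a model of the mind: the symbols (‘rain’, ‘wet-streets’) and the single rule (a crude form of modus ponens) are invented for the example, and nothing in the text commits its author to this particular implementation.

```python
# Illustrative sketch: rule-governed transformation of symbols,
# in the spirit of the computer analogy. All symbols and the rule
# used here are invented for the example.

def apply_rule(beliefs):
    """From the symbol structures ('IF', p, q) and p, derive q.
    Repeat until no new symbol can be derived."""
    derived = set(beliefs)
    changed = True
    while changed:
        changed = False
        for b in list(derived):
            if isinstance(b, tuple) and len(b) == 3 and b[0] == 'IF':
                _, p, q = b
                if p in derived and q not in derived:
                    derived.add(q)
                    changed = True
    return derived

beliefs = {('IF', 'rain', 'wet-streets'),
           ('IF', 'wet-streets', 'slippery'),
           'rain'}
print(sorted(b for b in apply_rule(beliefs) if isinstance(b, str)))
# → ['rain', 'slippery', 'wet-streets']
```

The point of the example is the one the analogy trades on: the program arrives at ‘slippery’ by purely formal operations on symbol shapes, without in any sense knowing what ‘rain’ means – which is precisely why the question of where the symbols get their meaning, taken up below, is pressing.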
Not everyone accepts the analogy of human thinking with the operations of computers, but among those who do, the question arises of what gives the symbols of Mentalese their meaning. How can ‘words’ in a brain be about anything? How can they represent the world outside the skull? The meaning of the symbols of an actual computer – what makes it appropriate to call them ‘symbols’, in fact – derives from the interpretation imposed on them by computer designers and operators. The meaning of words in spoken or written language is also imposed, this time by the people using the language for the purpose of communication. But the source of the meaning of sentences hidden inside the human skull cannot be the interpretation imposed on them by an external interpreter, since there does not seem to be any such interpreter. So anyone who accepts the computational theory of mind is under an obligation to say what gives the symbols in the human brain their meaning. Many are sceptical of the computational theory of mind precisely because it is hard to see how this obligation could ever be discharged.
Discussion of mental representation, then, is often framed in terms of the meaning of inner symbols. But most of the difficulties that arise for those who accept the computational theory of mind also arise for anyone who (i) agrees that humans are capable of representing the world around them, but also (ii) wishes to claim that humans are in some sense essentially physical creatures subject to the laws of physics like other objects in the universe, and apt for study using scientific methods. Critics of the materialist world view are keen to stress how hard it is to show how both these assumptions could be true.