4.3 The easy problems and the hard problem
What implications do naturalism and strong naturalism have for the study of the mind? There are two. First, naturalists will deny the existence of souls, spirits and other psychic phenomena and maintain that the mind is part of the natural world, subject to natural laws. This view is shared by most modern philosophers of mind. Secondly, strong naturalists will hold that mental phenomena can be reductively explained in terms of processes in the brain, which can themselves be explained in terms of lower-level processes at the chemical and physical level. Although not as widely accepted as the first, this view is also common among contemporary philosophers, and, indeed, there is a strong case for it. All other high-level phenomena seem to be reductively explicable; why should the mind be any different?
But how could brain processes give rise to minds and mental states? How could collections of neurons and synapses generate beliefs and desires, hopes and fears, pains and pleasures? Much of contemporary philosophy of mind has been devoted to trying to answer this question – to constructing a naturalistic theory of the mind – and though we are still a long way from fully understanding how the mind works, there are plenty of theories as to how mental states and processes might be realised in the brain.
An important development was the idea that many mental states and processes can be defined functionally, in terms of the causal role they play in the operation of the mind – the view known as functionalism. So, for example, a belief is a state that is generated by perception or inference, serves as a premise in reasoning and prompts actions that would be rational if it were true; a desire is a state that is caused by bodily needs, serves as a goal in reasoning and tends to produce behaviour that will satisfy it; perception is a process in which information about the environment is acquired through the receipt of sensory stimuli; and so on. If we think of mental states and processes in this way, then it is not too difficult to see how a brain could support them. It would just have to possess states and mechanisms that play the appropriate causal roles.
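The functionalist idea that a state counts as a belief or a desire in virtue of its causal role can be made vivid with a toy sketch. The following is purely illustrative and not drawn from the text: the names (`Agent`, `perceive`, `need`, `act`) are invented, and real functionalist theories are of course far richer. The point is only that nothing in the code says what the states are made of, just what they do.

```python
class Agent:
    """A toy agent whose 'mental' states are individuated purely by causal role."""

    def __init__(self):
        # Beliefs: states generated by perception or inference; here, each maps
        # a goal to an action the agent takes to achieve that goal.
        self.beliefs = {}
        # Desires: states caused by needs, serving as goals in reasoning.
        self.desires = set()

    def perceive(self, goal, action):
        # Perception or inference generates a belief ("this action achieves that goal").
        self.beliefs[goal] = action

    def need(self, goal):
        # A bodily need gives rise to a desire for some goal state.
        self.desires.add(goal)

    def act(self):
        # Beliefs and desires jointly prompt behaviour that would be
        # rational if the beliefs were true.
        return [self.beliefs[g] for g in self.desires if g in self.beliefs]
```

On this sketch, an agent that needs nourishment and believes that eating will secure it will act accordingly: after `need("nourishment")` and `perceive("nourishment", "eat")`, calling `act()` yields the action `"eat"`. A brain would support such states simply by possessing mechanisms playing these roles.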
Another source of inspiration was the development of computers, which provided models of how reasoning could be performed mechanically, through the manipulation of symbols. This suggested that the brain itself might be a biological computer operating on symbols in an internal language, and a new field of research opened up devoted to modelling mental processes in computational terms. Again, on this view it is not too difficult to see how brain tissue could support a mind; it would simply need to be organised in such a way as to implement the relevant computational processes. This approach may not be the right one (there are rivals to it) and many problems remain – in particular, that of explaining how symbols in the mental language get their meaning. But it does suggest that there is no obstacle in principle to providing reductive explanations of many mental phenomena.
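The idea of reasoning as mechanical symbol manipulation can likewise be sketched in a few lines. The fragment below (all names invented for illustration) applies a single rule of inference, modus ponens, over and over: from a symbol P and a rule 'P implies Q', it derives the symbol Q. Nothing in the process requires understanding; it is pure manipulation of uninterpreted tokens, which is exactly what raises the question of how such symbols get their meaning.

```python
def forward_chain(facts, rules):
    """Derive everything obtainable by repeated modus ponens.

    facts: a set of symbols taken as given.
    rules: (antecedent, consequent) pairs, read as 'antecedent implies consequent'.
    """
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for antecedent, consequent in rules:
            # If we have P and the rule P -> Q, add Q.
            if antecedent in facts and consequent not in facts:
                facts.add(consequent)
                changed = True
    return facts
```

Given the fact `"rain"` and the rules `("rain", "wet")` and `("wet", "slippery")`, the procedure mechanically derives `"wet"` and then `"slippery"`, with no grasp of what any of the symbols stand for.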
When it comes to consciousness, however, the functional/computational approach runs into problems. Although some of the things we call ‘consciousness’ may be explicable in functional/computational terms (access-consciousness, for example), it is very hard to see how phenomenal consciousness could be. This problem has been recognised since the development of functional approaches to the mind in the late 1960s, but it was powerfully restated in the 1990s by the Australian philosopher David Chalmers (b. 1966), who has famously dubbed it the ‘hard problem’ of consciousness. I shall let Chalmers outline it himself, in an extract from one of his first papers on the topic.
Follow the link to David Chalmers' article and read sections 2 and 3 (‘The Easy Problems and the Hard Problem’ and ‘Functional Explanation’). Then answer the following questions.
In paragraph 2 Chalmers lists various phenomena associated with the word ‘consciousness’. Which of the terms introduced earlier (‘creature consciousness’, ‘access-consciousness’, ‘transitive consciousness’, etc.) corresponds best to each of the items in the list? (Note that in some cases the correspondence is not exact.)
What does Chalmers mean by ‘experience’? (Paragraphs 5–6)
Why, according to Chalmers, are the easy problems easy? (Paragraphs 9–13)
Why is the hard problem hard? (Paragraphs 14–16)
The phenomena line up roughly as follows. The first item (the ability to discriminate, categorise and react to stimuli) involves a state of general awareness, so it falls under the heading of creature consciousness. The second, third and fourth items (the integration of information, reportability and internal access) involve the passing of information between internal systems, so they can be grouped under access-consciousness. The fifth item (attention) is a perceptual process, so it comes under the heading of transitive consciousness (awareness of something). A deliberate action is one performed with reflective awareness, so the sixth item (the deliberate control of behaviour) involves introspective consciousness (and perhaps also self-consciousness). The last item (wakefulness) corresponds to creature consciousness again.
He means phenomenal consciousness – the subjective aspect of our experiences, what it is like to have them.
The easy problems are easy because the phenomena to be explained are functionally definable and we can explain how a system exhibits them by describing the mechanisms that perform the relevant functions. These mechanisms might be described either in neurological terms or in more abstract computational ones. (In the latter case, to give a full explanation we would also have to specify the neural mechanisms which implement the computational processes, but that would be just another ‘easy’ problem.) Thus, for example, if we can identify the brain mechanisms that give us the ability to make verbal reports of our beliefs and other mental states, then we shall have explained the phenomenon of reportability.
The hard problem is hard because it goes beyond the performance of functions. Even when we had explained all the various functional processes that occur when we perceive things, we would still not have explained why these processes are accompanied by conscious experience – that is, why our perceptions have a phenomenal character. This looks like a much more difficult problem.
In this extract Chalmers is appealing to intuition rather than offering arguments, and you should not take his comments to be the final verdict on functionalism. But the intuition to which he appeals is certainly strong. Put simply, functionalism characterises mental states by what they do, rather than by how they feel. And it seems that a brain state could play the functional role of an experience without having any phenomenal character. Take pain, for example. Pains have a distinctive functional role: they are caused by bodily damage and cause characteristic behavioural reactions. Yet, it seems, a brain state could play this role without actually hurting. Think about Cog again. Suppose that damage to Cog's body activates an internal subsystem which registers the location and extent of the damage and initiates appropriate action, such as protecting the damaged area, withdrawing from the source of the damage and emitting the word ‘Ouch!’ from a speech synthesiser. Then when this subsystem is activated, it would be appropriate to say that Cog is in pain, in the functional sense, even though Cog doesn't actually feel anything. Similarly for other perceptions and experiences. It seems that a brain state could play the functional role appropriate to a visual perception – say, of a bright blue light – without having the phenomenal character normally possessed by such a state, or indeed with a quite different phenomenal character. So, it seems, functionalism leaves the mystery of consciousness untouched: how do some brain states come to have phenomenal character?
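The damage-response subsystem imagined for Cog could be sketched along the following lines. Everything here (the function name, the form of the inputs, the particular behaviours) is invented for illustration; the philosophical point is that the sketch plays pain's functional role in full, yet there is plainly nothing in it that hurts.

```python
def damage_response(location, severity):
    """Register bodily damage and select pain's characteristic behaviours.

    Plays the functional role of pain: caused by damage, causes
    protective and avoidant behaviour plus a verbal report.
    """
    actions = []
    if severity > 0:
        actions.append(f"protect {location}")        # guard the damaged area
        actions.append(f"withdraw from {location}")  # move away from the source
        actions.append("say 'Ouch!'")                # report via speech synthesiser
    return actions
```

A call such as `damage_response("left arm", 5)` returns the protective, avoidant and verbal behaviours in turn. In the functional sense, the system 'is in pain'; whether anything is felt is exactly the question the sketch leaves untouched.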
We can look at the same problem from another perspective. Suppose the MIT team wanted to give Cog conscious experiences. What would they have to do? Would it involve new programming? Or new circuitry? Or what? There are many things they could do to improve Cog's visual system – increasing the sensitivity of its camera-eyes, boosting the power of its visual processors and upgrading their software – but it is not clear what they could do to give its visual processes phenomenal character. Where would they start? If we have no idea how nature produces conscious experiences, then how can we set about trying to produce them artificially? It is worth noting that practically all the research programmes currently being pursued by the MIT team and other roboticists are devoted to equipping robots with specific functional capacities – capacities to discriminate, categorise, learn, perform everyday tasks and so on. None is devoted directly to making a robot conscious. Indeed, the MIT team say they try to avoid using the ‘c-word’ in their labs!
Let me repeat that you should not take this as the final verdict on functionalism. Many functionalists think that their approach can explain consciousness. When properly understood, these writers claim, the functional processes involved in experience do explain its phenomenal character. And, of course, even if functionalist explanations fail, a reductive explanation in other terms might still be possible. But it is undeniable that there is a serious problem here, and some people believe that consciousness is resistant in principle to reductive explanation. Here, they claim, strong naturalism reaches its limits.