5.3 Machines
Artificial intelligence has been a recurring theme in science fiction since at least Samuel Butler’s 1872 novel Erewhon, whether depicted as a dystopian threat to human civilisation or as a utopian ideal. But what’s the reality in the present day?
Can a computer program actually ‘think’? Can it be truly conscious? Or can it only ever simulate conscious behaviour? These are increasingly relevant questions in the 21st century. To begin to unpack them, this activity introduces you to the ‘Chinese room’ thought experiment, proposed by American philosopher John Searle in 1980.
Activity 5 The Chinese room
Watch the video, and see what you make of the argument. Then consider the question below and make a few notes.
Transcript: Video 10 60-Second Adventures in Thought: The Chinese Room
Do you agree with the conclusion that a computer program could only ever ‘simulate’ intelligent thought and language comprehension? Or do you agree with Alan Turing, that a computer which can pass itself off as human (thereby passing the famous ‘Turing test’) should be said to be intelligent?
Discussion
This continues to be keenly debated, so there’s certainly no easy answer here. But it’s true that the philosopher in the thought experiment doesn’t understand the conversation, despite outward appearances. He’s just following step-by-step instructions. Computer programs work in a similar way – meaning a computer could engage in apparently intelligent conversation without actually understanding it.
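The step-by-step instruction following can be pictured as a lookup in a rule book: input symbols go in, output symbols come out, and nothing in between needs to understand either. The short Python sketch below illustrates this; the phrases and rules are invented for illustration, and a real conversational program would be far more elaborate, but the principle – matching symbols to symbols – is the same.

```python
# A minimal sketch of the Chinese room idea: a fixed rule book maps input
# symbols to output symbols. The program 'converses' purely by lookup,
# with no understanding of what any of the symbols mean.

RULE_BOOK = {
    "你好": "你好！",             # a greeting is answered with a greeting
    "你会说中文吗？": "会。",      # "Can you speak Chinese?" -> "Yes."
}

def chinese_room(symbols: str) -> str:
    """Return whatever the rule book dictates, or a stock fallback reply."""
    return RULE_BOOK.get(symbols, "请再说一遍。")  # "Please say that again."

print(chinese_room("你好"))  # produces a fluent-looking reply by lookup alone
```

From the outside, the replies look like comprehension; from the inside, there is only pattern matching – which is exactly the distinction Searle’s argument turns on.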
If you’d like to investigate the arguments further, there are some resources in the Further Reading section you might like to explore.
The focus here is the demonstration of intelligence and understanding, but the idea can be extended to the related concept of consciousness. John Searle has argued that a programmed computer model of consciousness wouldn’t actually be conscious. Others have suggested that we simply couldn’t be certain about this either way. And this lack of certainty extends beyond machines, via the related thought experiment of ‘philosophical zombies’. This argument posits that the people around us act like normal human beings and display all the outward characteristics of being conscious – but, without getting inside their heads, we can’t really know whether they’re feeling anything, or truly experiencing the ‘qualia’ mentioned earlier. All of their responses could be just the following of instructions, like the computer program.