1.4 Background: Russell and Norvig’s four kinds of AI
One of the key textbooks in the field of AI is Artificial Intelligence: A Modern Approach by Stuart Russell and Peter Norvig (Russell & Norvig, 2021). While the field of AI has emerged out of an eclectic and complex combination of disciplines, in the initial chapter of this text, Russell and Norvig usefully categorise AI in a way which makes some sense of this combination. They do this by crossing two important distinctions: (1) one between ‘thinking’ vs. ‘acting’, and (2) another between a more typically ‘human’ level of behaviour vs. a more ideal or ‘rational’ level of behaviour. Crossing these distinctions gives rise to four categories of AI research: ‘thinking humanly’, ‘acting humanly’, ‘thinking rationally’, and ‘acting rationally’. Russell and Norvig argue that these categories can be used to cover the full range of work in AI since its inception in the mid-twentieth century. It is important to note that for Russell and Norvig, AI is the study of computer agents. Such agents ‘operate autonomously, perceive their environment, persist over a prolonged time period, adapt to change, and create and pursue goals’ (Russell & Norvig, 2021, p.4). Some brief notes about the resulting categories of AI work follow below. However, for a fuller explanation of key concepts that are relevant to this course, as well as more details on the history of AI, you are encouraged to read the first two chapters of Russell and Norvig’s textbook.
- Thinking Humanly: A very strong thread throughout the history of AI has been the idea that modelling human thought processes could enable us to somehow replicate such processes in computer systems. This has been one of the goals of cognitive science, an interdisciplinary pursuit made up of Psychology, Computer Science, Philosophy, Linguistics and Anthropology.
- Thinking Rationally: Another clear thread has been the attempt to formulate so-called “laws of thought”, often expressed using special systems of symbols deriving from mathematical logic, and thereby build computer systems which are able to reason similarly to humans (assuming logic adequately models human thought). One major challenge for such approaches is that human thought is typically full of contradictions and uncertainties, so strict logical rules are a poor match for how people actually think.
- Acting Humanly: An obvious approach to demonstrating artificial intelligence is to replicate intelligent human behaviour. This is effectively the approach taken by Alan Turing, the British mathematician, whose so-called ‘Turing test’ attempts to determine whether a computer system is intelligent through a relatively simple test: if a human observer is communicating with two actors whom they cannot see (e.g. via a text-based ‘chat’), one of these actors being a computer and the other a human, then the computer ‘passes’ the test if the human observer cannot tell which actor is the computer. In other words, by behaving indistinguishably from a human being, the computer has exhibited intelligence. Interestingly, this test brings together almost the entire suite of capabilities which AI has been focusing on since its inception, including knowledge, reasoning, language understanding, and learning.
- Acting Rationally: For Russell and Norvig this involves acting so as to achieve what one believes to be the best outcome. Russell and Norvig themselves favour this approach to building so-called rational agents, pointing out that it in fact includes many of the other approaches above. Acting rationally is a matter of doing what is ‘right’ given the situation you are in; this could include thinking rationally, but it covers much more of what an agent might need to do in order to successfully negotiate its environment. Further, many of the capabilities which AI is concerned with, and which are brought together in the Turing test, can also be covered by an approach to AI that focuses on rational action. Concerns which Stuart Russell has voiced about AI will return in Week 2, when you consider the risks posed by artificial rational agents that are focused on achieving goals, perhaps without due concern for other agents in the environment, such as humans.
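To make the agent abstraction above a little more concrete, the following is a minimal Python sketch loosely based on the two-square vacuum-cleaner world that Russell and Norvig use in Chapter 2 of their textbook. The function names and string labels here are illustrative choices, not code from the book: the agent receives a percept (its location and whether that square is dirty) and chooses the action it expects to lead to the best outcome.

```python
def vacuum_agent(percept):
    """A simple agent for a two-square vacuum world.

    percept is a (location, is_dirty) pair, where location is
    "A" or "B". The agent cleans a dirty square, otherwise it
    moves to the other square to check it.
    """
    location, is_dirty = percept
    if is_dirty:
        return "Suck"  # cleaning the current square is the best outcome
    # Nothing to clean here, so inspect the other square
    return "Right" if location == "A" else "Left"

# The agent's action depends on what it perceives, not on a fixed script:
print(vacuum_agent(("A", True)))   # Suck
print(vacuum_agent(("A", False)))  # Right
print(vacuum_agent(("B", False)))  # Left
```

Even this tiny example shows the ‘acting rationally’ framing: the agent maps percepts to actions in a way that does the ‘right’ thing for its situation, without any attempt to model how a human would think about the problem.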