Machines, minds and computers

5 Conclusion – Symbolic AI and Cybernetics

When I worked in artificial intelligence in the mid-1980s, Cybernetics – if we discussed it at all – was dismissed with a shrug. It was seen as a movement whose time had passed, a rather diffuse set of theoretical pursuits, which had little to show in the way of concrete achievement. Symbolic AI – writing intelligent software for digital computers, based on the principles of representation and search – was the way ahead. AI achieved real results. Cybernetics was just empty theory.

I now think this view was arrogant and quite wrong. But it is true that Cybernetics went into eclipse in the 1960s; AI came to the fore and stayed there. Why exactly this happened is really a matter for historians and sociologists of science. I can think of four possible reasons.

  • Multidisciplinarity – Cybernetics was conceived from the start as a multidisciplinary project, taking in mathematics, computing, engineering, the social sciences and the humanities. Although most of us would agree that, in theory, this is an excellent approach to a problem as challenging as understanding intelligence and replicating it in machines, it was probably hard to sustain in the research environment of the time.
  • Theoretical aims – Cybernetics' central aim was understanding. There was less emphasis on building useful intelligent artefacts. AI promised immediate delivery of working intelligent systems, and produced some impressive and encouraging early results.
  • Technology – The computing technology of the time may have been too weak to be an adequate vehicle for cybernetic systems.
  • Competition for funding – In the 1960s, as now, competition for research funding was intense. There may also have been personal animosity between cyberneticists and Symbolic AI researchers. In 1969 the noted scientists Marvin Minsky and Seymour Papert published Perceptrons, a devastating critique of perceptrons – simple, single-layer computational models of nervous systems – which showed, with unanswerable mathematical arguments, that such models were incapable of computing certain important functions (a brief sketch of this limitation follows the quotation below). Perceptrons killed most research into neural computing for fifteen years. Much later, Papert confessed in an interview:

Yes, there was some hostility behind the research reported in Perceptrons ... part of our drive came ... from the fact that funding and research energy were being dissipated on what still appear to me ... to be misleading attempts to use connectionist methods in practical applications. Money was at stake.
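To make the limitation concrete, here is a minimal Python sketch (my own illustration, not part of the course or of Minsky and Papert's text). It trains a single-layer perceptron with the classic perceptron learning rule: the linearly separable AND function is learned easily, but XOR can never be learned, because no single straight line separates XOR's true cases from its false ones.

    # A single-layer perceptron can learn AND (linearly separable),
    # but no choice of weights lets it compute XOR.

    def train_perceptron(samples, epochs=100, lr=0.1):
        """Classic perceptron learning rule on 2-input Boolean samples."""
        w1 = w2 = bias = 0.0
        for _ in range(epochs):
            for (x1, x2), target in samples:
                output = 1 if (w1 * x1 + w2 * x2 + bias) > 0 else 0
                error = target - output
                w1 += lr * error * x1
                w2 += lr * error * x2
                bias += lr * error
        return w1, w2, bias

    def accuracy(weights, samples):
        """Fraction of samples the trained perceptron classifies correctly."""
        w1, w2, bias = weights
        correct = sum(
            (1 if (w1 * x1 + w2 * x2 + bias) > 0 else 0) == target
            for (x1, x2), target in samples
        )
        return correct / len(samples)

    AND_SAMPLES = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
    XOR_SAMPLES = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]

    print("AND accuracy:", accuracy(train_perceptron(AND_SAMPLES), AND_SAMPLES))  # 1.0
    print("XOR accuracy:", accuracy(train_perceptron(XOR_SAMPLES), XOR_SAMPLES))  # never reaches 1.0

It was exactly this kind of limitation that later multi-layer ('connectionist') networks overcame, which is part of the reason neural computing revived in the mid-1980s.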

Perhaps most significantly, Symbolic AI and Cybernetics had different starting points. Each began with a quite different view of the nature of intelligence and how it is manifested, and with radically different models. As I noted in the summary earlier, Cybernetics was concerned with feedback, the body in its environment, and purposeful activity; Symbolic AI with digital computation, symbolic representation, rules and abstract thought.

But Cybernetics has not gone away. Indeed it has returned in new guises and under new names. Many theorists now believe that Symbolic AI's indifference to the body and to activity was (and is) its greatest mistake. Animals (including humans) are active: they move around the world, responding to it at every moment. Intelligence is necessary for our never-ending engagement with a complex, dynamic and challenging world. The intelligent mind is not some abstract, remote controller of the body: in every second of life, both mind and body work together to produce useful, purposeful action. Many modern approaches to artificial intelligence, therefore, embrace two new key ideas: embodiment (an intelligent system has to have a body) and situatedness (this body must interact with, and cope with, a challenging environment, in real time). Since robots fulfil these two criteria perfectly, the future of AI may increasingly belong to robotics.