An introduction to artificial intelligence

1 Looking backwards

As a field in which people can work or study, AI is barely a century old. It has its roots in the early part of the twentieth century, largely in the work of early pioneers in areas as diverse as Philosophy, Mathematics, Science, Engineering and Psychology. This cross-over with Engineering and Scientific disciplines also coincided, in the later decades of the twentieth century, with the formation of a new discipline now known as Computer Science. The following is a brief, decade-by-decade timeline of prominent events in the relatively recent history of AI (not including the 2020s, since the course was completed just as that decade got underway).

  • 1950s
    • In his 1950 paper ‘Computing Machinery and Intelligence’, Alan Turing poses his test of machine intelligence (see later in this section) and discusses other topics that have become central to AI, including machine learning, reinforcement learning and genetic algorithms.
    • A summer workshop is held at Dartmouth College in 1956. This is widely recognised as the event where the term ‘Artificial Intelligence’ was first publicly used to describe the new discipline.
  • 1960s
    • The ELIZA program, written by Joseph Weizenbaum in 1965, is an interactive system able to hold a simple yet general text-based dialogue in English. This is an early example of ‘Symbolic’ AI.
    • Around 1966, work on the first general-purpose mobile robot, Shakey, begins at the Stanford Research Institute (SRI).
    • The ALPAC (Automatic Language Processing Advisory Committee) report is released in 1966, and its largely negative findings on progress in Machine Translation lead to a drastic reduction in funding and subsequent work in this area for a number of years.
    • In 1968, Terry Winograd develops SHRDLU, a natural language understanding program integrated with a simulated robot arm in order to carry out instructions given in English within a simplified ‘world’ consisting only of children’s building blocks. This is another example of early success in Symbolic AI.
  • 1970s
    • In 1972, researchers at Stanford University develop MYCIN, a so-called ‘expert system’, to diagnose patients and recommend treatment (in the narrow domain of blood infections).
    • James Lighthill’s influential and largely negative 1973 review of progress in AI in Great Britain leads to vastly reduced government support for AI research in the country. The publication of this report is a key event leading to the so-called ‘AI Winter’, in which mixed results contrasted with highly enthusiastic early expectations, resulting in reduced funding and decreased interest in AI as a field of research.
  • 1980s
    • In 1982, the Japanese government initiates the highly ambitious, USD 850 million Fifth Generation Computer project, which aims to develop computers capable of carrying out AI tasks (e.g. conversations, machine translation, visual recognition, reasoning).
    • Throughout the 1980s, there were a range of breakthroughs in work on Neural Networks (NNs).
    • Despite the threat of an ‘AI Winter’, commercially viable expert systems are developed during the 1980s (e.g. the ‘R1’ system at Digital Equipment Corporation).
  • 1990s
    • Thanks to work by Tim Berners-Lee and others, the World Wide Web is launched in 1991.
    • In 1997, IBM’s Deep Blue defeats the then reigning world chess champion, Garry Kasparov.
    • In 1998, the children’s toy Furby, an early example of AI technology for a purely domestic market, is released by Tiger Electronics. This is followed in 1999 by Sony’s AIBO, intended as an AI ‘pet’ robot (i.e. displaying a level of autonomy).
    • Throughout the 1990s, major advances in the use of Neural Networks (e.g. breakthroughs in handwriting recognition) help to establish Machine Learning (ML) as a discipline in its own right (although it is important to remember that NNs are only one approach within ML).
  • 2000s
    • In 2002, iRobot releases its robot vacuum cleaner, the Roomba.
    • In 2004, the rovers Spirit and Opportunity are deployed by NASA to navigate the surface of Mars autonomously.
    • In 2006, Geoffrey Hinton publishes work advancing the field of Neural Networks, putting in place fundamental aspects of the emerging field of ‘Deep Learning’.
    • A team at Princeton University (led by Fei-Fei Li) assembles ImageNet, released in 2009, which becomes a key resource for subsequent breakthroughs in visual object recognition.
  • 2010s
    • In 2011, a Question-Answering system, Watson (developed by IBM), wins Jeopardy! playing against two former champions.
    • In 2016, AlphaGo (developed by Google DeepMind) defeats Go champion Lee Sedol.
    • Throughout the 2010s, major developments in Deep Learning, in both hardware and algorithms, enable the building of very large models trained on language and vision data, facilitating progress on several long-standing challenges (including visual object recognition and machine translation).

Note that this timeline begins in 1950, although it is possible to find important precedents in the first half of the twentieth century. For a more detailed timeline, see the BBC’s ‘15 key moments in the story of Artificial Intelligence’.

However, some of the background ideas and themes of AI have been with us for centuries, if not millennia. In particular, people’s long-held fascination with machines has given rise to imagined scenarios involving ‘artificial’ beings that are not part of the natural world but nevertheless inhabit it alongside us. Such artificial beings were often viewed as kinds of autonomous machines, yet their presence signalled something otherworldly, if not downright sinister.