
Artificial intelligence

Updated Thursday, 22nd September 2005

We ask whether computers can think in a human fashion



The Story So Far
Autonomous robots already exist: they can learn, communicate and teach each other. They can navigate their way around our world and be linked to extremely powerful computers that give them a processing capacity well beyond that of humans. How did scientists develop the technology to produce AI machines?

Mankind has always been fascinated by how the mind works and by the idea of creating intelligent machines. However, it wasn’t until the development of the electronic computer in 1941 that the technology was available to create machine intelligence.

The term 'Artificial Intelligence' was coined in 1956 by John McCarthy, an influential figure in the field. He organised a two-month workshop at Dartmouth College, bringing together researchers interested in neural networks and the study of intelligence. Although the workshop did not lead to any new innovations, it brought together the founders of AI and laid the groundwork for future AI research. An intensive wave of AI research followed.

Centres for AI research began forming at institutions such as Carnegie Mellon and MIT, concentrating their work on two main themes:

Firstly, creating systems that could solve problems efficiently by limiting the search, such as the Logic Theorist (considered the first AI program), the Geometry Theorem Prover, and SAINT.

Secondly, making systems that could learn by themselves, for example the General Problem Solver (GPS), developed by Allen Newell and Herbert Simon, which could solve a wider range of common-sense problems.
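Programs of this era framed problem solving as a search from a start state to a goal state, pruned so the program does not revisit states (the "limiting the search" mentioned above). A minimal sketch of that idea, using breadth-first search on a hypothetical toy puzzle (the puzzle and function names are illustrative, not taken from any of the historical programs):

```python
from collections import deque

def breadth_first_search(start, goal, successors):
    """Find a path from start to goal by exploring states level by level.

    `successors` maps a state to the states reachable in one step.
    Visited states are never re-expanded, which limits the search.
    """
    frontier = deque([[start]])
    visited = {start}
    while frontier:
        path = frontier.popleft()
        state = path[-1]
        if state == goal:
            return path
        for nxt in successors(state):
            if nxt not in visited:
                visited.add(nxt)
                frontier.append(path + [nxt])
    return None  # no path exists

# Toy puzzle: reach 10 from 1 using "double" or "add one" moves.
moves = lambda n: [n * 2, n + 1]
print(breadth_first_search(1, 10, moves))  # [1, 2, 4, 5, 10]
```

Because the search proceeds level by level, the first path found is also a shortest one.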

McCarthy continued to make significant contributions to AI, notably in 1958 when he created LISP, a high-level programming language that remains one of the most dominant AI programming languages.

A grant received by MIT from the US government in 1963 served to increase the pace of development in AI research. The MIT researchers, headed by Marvin Minsky, demonstrated that when confined to a small subject matter, computer programs could solve spatial problems and logic problems. Other programs were also developed, for example, one which could solve algebra problems and one which could understand simple English sentences.

In the 1970s, AI began to specialise into areas such as expert systems, language analysis, knowledge representation and computer vision, strengthening the backbone of AI theories. In the 1980s, AI began to move at a faster pace: the public became more comfortable with science and technology as the popularity of personal computers grew and sales of AI-related hardware rose.

AI was put to the test for military use in the Gulf War in the early 1990s, where it was used for both simple tasks such as packing transport planes, and complicated tasks such as the timing and co-ordination of Operation Desert Storm. Advanced weapons such as cruise missiles were equipped with technologies previously studied in AI fields such as robotics and machine vision.

Now in the 21st century, AI is gradually moving more and more into people's everyday lives, especially as the interest in computers and computer games grows. New artificial intelligence advances are constantly becoming available - so who knows what the future might bring?

The Arguments: Aaron Sloman
Professor Aaron Sloman works in the School of Computer Science at the University of Birmingham. He is primarily a philosopher, and his objective is to find out what minds are, what sorts of minds are possible and what sorts of physical machines can implement them, whether made of meat, silicon or anything else. He says that A.I. is the best way to study the philosophy of mind: by thinking about how to design working minds, or fragments of working minds.

On why A.I. scientists haven't already far exceeded the capacity of brains:
"The way I see it, the real problem is we don't know what it is that we are trying to model. Human beings and other animals have all sorts of capabilities but trying to characterise them is very hard. You look at a piece of paper, what happens when you just look at a blank area? If it's Picasso looking, something very powerful will start happening, even a young child with a paintbrush will look at a blank sheet. Now there are many things that we don't know how to characterise, so we don't know what we're trying to explain, and that's one of the reasons why it's taken so long. The fact that computers are very fast doesn't necessarily help us understand the problem that we are trying to use them to solve."

On what intelligence means:
"Well, I think it is a large collection of capabilities in humans and different subsets. It can be found in other organisms. Now one of the problems is that some people try to define intelligence in terms of what you could observe that shows that something is intelligent. The problem with that is that what you observe might be explained in a number of different ways. So you could have something which looks intelligent, but if you know how it works, it isn't. So part of the problem is how to build things that don't just look intelligent (simulated sheepdog)."

On whether machines can have the same kind of capabilities as humans:
"In order to come up with machines that have the same kinds of things as humans, we have to do a huge amount of analysis of what it is to be a normal human being, and the people that made claims about artificial intelligence did not do that, so they thought it was going to be much easier than in fact it turned out to be. So the hard task is to know what the tasks are that you have to replicate on machines, and I think we have a huge way to go. Evolution produced a huge variety of capabilities which we share, and we don't know what they are. Most animals do not have them. Some animals have very similar ones, but they're in the minority. I don't think it would be easy because there are lots of things that we don't understand, like enjoying a scene or finding something funny. There are many things that we are familiar with that are very hard to analyse, and it's hard to analyse what it is to find something funny. People have tried. I don't myself find any of those analyses that I've read totally satisfactory, and when we've analysed what it is, we may have a better idea what architecture can do it."

On the possibility that A.I. will have a sort of consciousness which is even better than ours:
"If we find out what it is that gives us the ability to think about our own thoughts, to be aware of our own experiences, we may well find that there are limits in the way we can do that, which have to do with, for instance, how much processing can go into one brain, that can fit into your body, that can feed itself and so on. There are constraints that evolution had to meet. Now if things are built in an artificial way, some of those constraints might be removed, and you might have distributed systems, each which has some of our abilities, but you link them together in new ways in which we can't link, and perhaps they will be new."

On the possibility of a culture with machines in it:
"In principle I think it could work, but whether we'll want it is another question. We might want to use them mainly when we don't want to go into dangerous environments, inspecting the bottoms of these oil pylons in the sea and so on, but there may be other reasons why we do want them. For instance, there are tasks that humans have to do which require strength, and patience, and intelligence as well, like looking after people who are very ill, and they may prefer to be looked after by robots rather than feeling that they're imposing on human beings, and they might start then developing relationships with them. This isn't my idea, some people believe this is one of the ethical tasks that A.I. should aim for."

On the future:
"In the short run, say in the next ten years, I think that one of the really important things that will happen is the development of A.I. in entertainment, and that's beginning to happen in computer games where they're trying to make the characters more entertaining, and other forms of computer entertainment. But I think we might also, more importantly have systems that we can play with, which are working models, which will help us get much deeper understanding of ourselves. For instance, I would like to see every psychology department teaching A.I. and having A.I. tool kits where the pupils can play with systems with simulated emotions and simulated perception, in order to get a deep understanding of perception and emotion, instead of just doing all these measurements and then running statistics packages, which is what happens now. So our educational practices will be greatly improved by doing A.I. It’s possible that our brains are too complicated to be understood by something as simple as our brains."


The Arguments: Amanda Sharkey
Dr Amanda Sharkey is a lecturer in the Department of Computer Science at the University of Sheffield. She is director of the Artificial Intelligence course and head of the NRG research group. Her work involves applications of neural networks, taking inspiration from how the brain works. She has a background in psychology and is also interested in modelling biological systems and how that approach can be applied to A.I.

On why A.I. scientists haven't already far exceeded the capacity of brains:
"Well, maybe it's the way in which the brain is not perfect that accounts for some of our abilities. So that, for instance, our memories are not perfect - we don't remember everything, and that explains some of the things that we are able to do. And when you produce a perfect system that does whatever you tell it to, and stores every piece of information you've ever got, you can't produce the same kind of behaviour."

On the difference between a conventional computer and a neural network:
"With a neural network you've got no central set of instructions, and they're particularly useful where you don't have a rule to describe the relationship between the inputs that it gets - say, the sensory information - and the behaviour that you want. What you've got is a set of input units and a set of output units, and you've then got a weight between them which is analogous to the idea of the synapse between neurones in the brain. You then train the network: you present it with a set of inputs, you tell it what output you would like it to produce, and it adjusts the weights so that it does produce the right outputs. You can then give it a set of inputs that are similar, but not the same as the ones it saw. It will then produce an output that is like the output it was trained to produce. So it's flexible in the way it operates."
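The training loop Sharkey describes - present inputs, compare against desired outputs, adjust the weights, then test on similar but unseen inputs - can be sketched in a few lines. This is a minimal illustrative example (a single layer of weights learning logical OR by gradient descent; the problem and all names are our own, not from her work):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy training set: two input units, one output unit (logical OR).
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [1]], dtype=float)

# One weight per input-output connection, plus a bias (the "synapses").
W = rng.normal(scale=0.1, size=(2, 1))
b = np.zeros((1,))

sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

# Training: repeatedly adjust the weights so outputs move toward targets.
for _ in range(5000):
    out = sigmoid(X @ W + b)            # forward pass
    grad = (out - y) * out * (1 - out)  # error signal
    W -= 0.5 * X.T @ grad               # weight update
    b -= 0.5 * grad.sum(axis=0)

# An input similar to, but not the same as, the training examples:
print(sigmoid(np.array([[0.9, 0.1]]) @ W + b))  # output close to 1
```

The final line shows the flexibility she mentions: the network was never trained on (0.9, 0.1), but produces an output like the one for the nearby training input (1, 0).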

On whether machines can have the same kind of capabilities as humans:
"I think that you could model what you think it is to be intelligent or conscious, and you'll know more about it as a result, and you can check out your explanation and make sure it works, because you've built a model. But you're still only modelling it, and I don't think you've done anything like actually creating something that's really having experience of the world."

On the possibility that A.I. will have a sort of consciousness which is even better than ours:
"I think because of something about how consciousness evolved, you're not going to be able to create anything like it. Because you can use evolutionary mechanisms to create something that you could say was similar, but you're using them towards a goal, you're indirectly programming the thing to do it. So you're rather back to the original symbolic A.I.

If you look at what we have achieved, at what systems we have that seem to be intelligent, we don't really have such systems. You could model an aspect of intelligence, but we don't have anything that is a whole intelligent system. And my hunch is that it is in principle impossible to go further. If you create an artificial system, it's not integrated with the environment and not actually motivated by itself. You've still got an external person who is making it do that."

On the importance of evolution in A.I.:
"You can use evolutionary techniques. It's interesting to see what kind of behaviour you can evolve with a simple reactive mechanism, but then the hardware isn't evolving - it's an abstraction of the real thing."
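The evolutionary techniques Sharkey refers to follow a simple loop: keep a population of candidate solutions, select the fittest, recombine and mutate them, and repeat. A minimal illustrative sketch, on a toy fitness function of our own choosing (maximise the number of ones in a bit string - none of these names or parameters come from her research):

```python
import random

random.seed(1)

GENOME_LEN = 20
fitness = lambda genome: sum(genome)  # toy goal: all ones

def evolve(pop_size=30, generations=60, mutation_rate=0.02):
    # Start from random genomes (the "population").
    pop = [[random.randint(0, 1) for _ in range(GENOME_LEN)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[:pop_size // 2]         # selection
        children = []
        while len(children) < pop_size - len(survivors):
            a, b = random.sample(survivors, 2)
            cut = random.randrange(GENOME_LEN)  # crossover
            child = a[:cut] + b[cut:]
            child = [g ^ 1 if random.random() < mutation_rate else g
                     for g in child]            # mutation
            children.append(child)
        pop = survivors + children
    return max(pop, key=fitness)

best = evolve()
print(fitness(best))  # typically reaches the maximum, 20
```

Her caveat applies directly: everything here is simulated in software, so the "hardware" (the genome encoding, the fitness function) is an abstraction chosen by the programmer, not something that itself evolves.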

On the future:
"I think that we'll certainly end up with robots doing simple tasks for us where you don't need to worry about whether they're intelligent or not, for example, vacuuming the floor. Or similarly using neural networks, you could perhaps have a neural network that made your car go at the appropriate speed or not bump into the car in front. So I think we will certainly see artificial intelligence used in our daily environment in that kind of way."

Further Reading
Intelligent Systems for Engineers and Scientists (second edition) by Adrian A. Hopgood, published by CRC Press (2000), ISBN: 0849304563.
Part of the OU course, Artificial Intelligence for Technology (T396), is based on this book.

Professor Aaron Sloman's out of print book is now online - The Computer Revolution in Philosophy: Philosophy, science and models of mind

Combining Artificial Neural Nets: Ensemble and Modular Multinet Systems, Amanda Sharkey (Ed), Springer-Verlag, London (1999), ISBN 1-85233-004-X.

Computing Machinery and Intelligence, Alan Turing, Mind, 1950.

Brainchildren: Essays on Designing Minds, Daniel Dennett, MIT Press, 1998.

Defending AI Research : A Collection of Essays and Reviews, John McCarthy, CSLI Publications 1997.

Jargon Buster

Artificial Intelligence
The science and engineering of making intelligent machines, especially intelligent computer programs.
Cognition
The mental activity by which an individual is aware of and knows about his or her environment, including such processes as perceiving, remembering, reasoning, judging and problem solving.
Computer Vision/Machine Vision
The combination of electronic eyes and brains allowing a system to see an object, make some decisions about that object and, in certain cases, implement an action based on those decisions.
Expert Systems
Systems that use human knowledge to solve problems that would normally require human intelligence.
General Problem Solver
A theory of human problem solving stated in the form of a simulation program.
Geometry Theorem Prover
A computer tool for proving elementary theorems in geometry.
Knowledge Representation
The study of how knowledge about the world can be represented and what kinds of reasoning can be done with that knowledge.
Language Analysis
Computer systems that analyse, understand and generate natural human languages.
LISP
List Processing Language, a programming language generally regarded as the language for A.I., ideal for representing knowledge from which inferences are to be drawn.
Logic Theorist
The first expert system, used to prove mathematical theorems.
Neural Networks
An information processing technique based on the way biological nervous systems, such as the brain, process information.
Robot
A device that can move and react to sensory input.
Robotics
The study and technology of robots.
SAINT
Symbolic Automatic INTegrator, an early problem-solving program.
Search
The finding of a path from a start state to a goal state.

The BBC and the Open University are not responsible for the content of external websites






