AI matters


Glossary

Artificial Neural Networks
Artificial Neural Networks (ANNs) were developed throughout the second half of the last century and have become perhaps the dominant AI approach for developing technology potentially capable of carrying out general learning. ANNs are now used across a range of industrial sectors, from retail and manufacturing to healthcare and professional services (legal, finance, insurance and accounting). Much of the work prior to this century focused on solving fundamental challenges around how to build effective and efficient applications using this technology, not least of which were severe limitations in hardware.

Work on ANNs typically has distinct training and deployment phases. A typical modelling task is to learn the relationship between data points (e.g. text, audio, images) and their labels (e.g. a sentiment label for a text, a caption for an image); during training an ANN attempts to learn this relationship, which is then captured in a trained model. Given as input a data point it has not yet seen (of the kind it has been trained on), an adequately trained model should be able to output a ‘correct’ label for that data point. Note that the training situation described here is so-called Supervised Machine Learning (see the entry on Machine Learning below).

This century has seen a number of innovations that have accelerated the use of ANNs. For example, so-called ‘Deep Learning’ (DL) has enabled the building of many-layered neural networks, able to model far more complex problems more efficiently, although it should be noted that training and deploying DL models still places high demands on computing resources. An example of hardware innovation is the Graphics Processing Unit (GPU), which has increased the speed and efficiency of computer systems; the availability of such technology has increased dramatically over the last decade.
Consequently, while large-scale DL modelling requires very large and expensive computing systems, smaller-scale DL modelling can be carried out on a single laptop equipped with an appropriate GPU. However, a number of issues have emerged around using ANNs, and particularly Deep Learning. First, DL is very expensive (as noted in the main part of this section). Second, as already mentioned, DL models tend to be far larger and more complex than previous approaches within Machine Learning, making it more difficult to understand how they operate (e.g. why the model labels a particular data point in the way that it does). Indeed, DL models are often referred to as ‘black boxes’, in that how they arrive at their decisions (e.g. labelling a particular image with a caption) is often quite opaque and difficult to explain. We will come back to the issue of explainability, and how it relates to ethics, throughout this course.
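The training and deployment phases described above can be sketched in a few lines of code. The following is a minimal illustration, not an industrial-scale system: a tiny neural network with one hidden layer learns the relationship between data points (here, pairs of binary inputs) and their labels (the XOR of each pair), using plain NumPy rather than a deep learning framework. All names and sizes are chosen purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Training data: data points and their labels (the XOR function).
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)  # data points
y = np.array([[0], [1], [1], [0]], dtype=float)              # labels

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# A small network: 2 inputs -> 16 hidden units -> 1 output.
W1 = rng.normal(0, 1, (2, 16))
b1 = np.zeros(16)
W2 = rng.normal(0, 1, (16, 1))
b2 = np.zeros(1)

lr = 1.0
for _ in range(10000):                 # training phase
    h = sigmoid(X @ W1 + b1)           # forward pass
    out = sigmoid(h @ W2 + b2)
    # Backpropagate the error between predictions and labels.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= lr * h.T @ d_out
    b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h
    b1 -= lr * d_h.sum(axis=0)

# Deployment phase: the trained model labels previously presented inputs.
preds = (sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2) > 0.5).astype(int)
print(preds.ravel())  # predicted label for each data point
```

The ‘model’ here is simply the learned weights (W1, b1, W2, b2); deployment means applying them to new inputs. Real DL systems differ mainly in scale, with many more layers and parameters, which is where the computational demands discussed above arise.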
Machine Learning
Getting computers to either replicate or support complex tasks typically carried out by humans has been a long-standing ambition within Computer Science and related areas. For example, the automotive industry has benefited enormously from assembly lines of robots, each dedicated to a specific task. Another approach is to build systems with a more general capability across different tasks: systems with the ability to learn. An example here would be a robot in an assisted living environment that can be trained to carry out a number of quite diverse tasks within the home. Machine Learning (ML) refers to the area of research devoted to working out how to effectively and efficiently build systems capable of learning in this more general sense.

One fairly broad distinction within ML is between Supervised ML and Unsupervised ML. Supervised approaches incorporate techniques for learning from examples that have been labelled by experts, whereas Unsupervised approaches try to work out a sensible structure for the space of examples in the absence of labels (much as taxonomical organisations are worked out within biology).

Another major distinction in ML is between approaches that exploit what we know about the problem ‘space’ (such as the classes to which objects in this space can be assigned, as when labelling pictures of cats vs. dogs) and approaches that explore the problem space. Supervised Learning is more about the former; so-called Reinforcement Learning (RL) is more about the latter. RL involves an agent exploring a relatively constrained environment in order to work out how to optimally achieve some pre-defined goal (e.g. winning a video game), and a key aspect of this approach is that RL agents are rewarded for the actions that lead to the successful achievement of the goal.
Note that the RL approach has interesting similarities to how animals – including humans – learn, and a lot of effort is being put into working out how to make this approach more effective.
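The exploration-and-reward loop described above can be illustrated with tabular Q-learning, one of the simplest RL algorithms. The environment below is a hypothetical one invented for this sketch: a corridor of five cells in which the agent starts at one end and is rewarded only on reaching the goal at the other. The learning-rate, discount and exploration parameters are illustrative choices, not prescribed values.

```python
import numpy as np

rng = np.random.default_rng(0)

n_states, n_actions = 5, 2            # actions: 0 = left, 1 = right
Q = np.zeros((n_states, n_actions))   # the agent's value estimates
alpha, gamma, epsilon = 0.5, 0.9, 0.2

def step(state, action):
    """Deterministic toy environment: move left/right; reward 1 at the goal."""
    next_state = max(0, state - 1) if action == 0 else min(4, state + 1)
    reward = 1.0 if next_state == 4 else 0.0
    return next_state, reward

for _ in range(200):                  # episodes of exploration
    state = 0
    while state != 4:
        # Epsilon-greedy: mostly exploit current estimates, sometimes explore.
        if rng.random() < epsilon:
            action = int(rng.integers(n_actions))
        else:
            action = int(np.argmax(Q[state]))
        next_state, reward = step(state, action)
        # Q-learning update: actions that lead toward the goal accumulate value.
        Q[state, action] += alpha * (
            reward + gamma * Q[next_state].max() - Q[state, action]
        )
        state = next_state

# The greedy policy the agent has learned for each non-goal cell.
policy = [int(np.argmax(Q[s])) for s in range(n_states)]
print(policy[:4])
```

After enough episodes the learned policy chooses ‘right’ in every non-goal cell: the reward signal, discounted backwards through the Q-values, has taught the agent which actions lead to the goal, without anyone labelling the correct action in advance.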
Natural Language Processing
Natural Language Processing (NLP) is the use of computational techniques for automatically understanding, as well as producing, natural language. Areas within NLP are typically organised around specific tasks, and these tasks tend to have wide applicability to everyday uses of language while also raising interesting research challenges. The following two examples are areas where moderate progress in research has led to uptake of the technology within various areas of industry (including online retail, healthcare, and a variety of customer-facing technologies such as chatbots):
Question Answering (QA)
QA involves understanding and generating questions and answers in natural language (e.g. Jurafsky & Martin, 2009, Chapter 23).
Automatic Summarisation (AS)
AS is the field of NLP focused on automatically producing a summary of a text (e.g. Jurafsky & Martin, 2009, Chapter 23).