An introduction to artificial intelligence

1 If AI is possible, how should we prepare for it?

In this section you will explore the idea that we should accept that AI technology (as described in the Introduction) is possible, and that we should prepare for its use in a way that maximises the benefits while avoiding the dangers it could bring. Such concerns have been voiced by leading figures in the field of AI, in particular Stuart Russell.

Figure 2

Russell co-authored, with Peter Norvig, perhaps the most widely used textbook in AI, and both have done pioneering work in the field. More recently, Russell has called for a rethink of how AI is being adopted. He asks us to suppose for a moment that we did in fact succeed in building AI intelligent enough to carry out a wide range of tasks normally undertaken by humans. Two questions follow from this:

  • What are the potential benefits and dangers of such a scenario?
  • How should we prepare for this?

The next activity examines Russell’s ideas in more detail.

Activity 1

Timing: 15 minutes

Part 1

Let’s start by watching Stuart Russell present his proposal in this video.

Video: lg003_2022b_vwr005_320x176.mp4

While watching this video, think about Russell’s opinion of the progress of recent AI technology.

Afterwards, click on ‘Save and Reveal Discussion’.


Discussion

Russell’s view on the continuing progress of AI is optimistic: he is quite confident that AI will one day achieve human-level performance in a range of areas. He points out that, to some extent, it already has, albeit in quite limited domains such as highly constrained game worlds; examples include IBM’s Watson beating champions of the Jeopardy! game show and DeepMind’s AlphaGo beating the human world champion at Go. Russell’s main point is that, given these achievements and the expectation of further advances, it is prudent to consider how to make AI safer if we are to properly prepare for when (not if) human-level AI systems are deployed throughout the world.

Part 2

Do you agree with Russell’s main argument that, rather than wondering if AI will reach human-level performance, the safest course of action is simply to accept that this will happen? If so, should we begin now to work out how best to deal with this eventuality?

After you have had a chance to come up with your own answers to these questions, click on ‘Save and Reveal Discussion’.


Discussion

Russell’s position can be questioned on a number of fronts. The most obvious question is whether he is being overly optimistic about the capability of AI. Those supporting a position such as Russell’s might argue, however, that this misses the point he is making: it is simply sensible to try to avoid a scenario in which AI technology is both powerful and dangerous. With a proper assessment of the risks, and ways of mitigating them, we can arrive at a better outcome: powerful AI that is far less harmful.

However, there is another possible response to Russell’s suggestions, which would mean taking them to their natural conclusion: if we cannot guarantee the safety of the technologies being considered, then perhaps we should not pursue them until their safety has improved.

A key idea from the discussion so far in this section is that the social and political context within which AI technology is used may in fact lead to such technology exacerbating existing biases and inequities, often with the greatest impact on the more marginalised members of society. Recall from Week 1 that the use of the COMPAS algorithm in the U.S.A. has led to increased inequality in the justice system, due to existing historical and social disparities in that system.

Activity 2

Timing: 30 minutes

It is worth balancing Russell’s perceived optimism about the capacity of AI to approximate human-level performance against a consideration of AI’s limitations.

The following talk by Janelle Shane presents some very sobering examples that highlight how far there still is to go before AI systems reach human-level performance in general domains (i.e. outside specialised and constrained domains such as games).

Watch this video, and answer the following questions. Use the response boxes to record your answers.

Video: lg003_2022b_vwr006_320x176.mp4

Once you have added your response, click on ‘Save and Reveal Discussion’.

Part 1

Regarding the ice-cream flavours algorithm: What does it mean to say the AI involved ‘is not nearly smart enough’?


Discussion

The ice-cream flavours algorithm did not understand the meaning of the flavour labels, and so it was unaware of how the resulting combinations of flavours would taste, and therefore of whether humans would like them.
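To make this concrete, here is a minimal sketch, in the spirit of (though far simpler than) the text generators Shane describes: a character-level model that invents new flavour names purely from letter patterns. The training list and every other detail here are invented for illustration.

```python
# A toy character-level Markov model (invented data, not Shane's system).
import random
from collections import defaultdict

# Invented training list; a real system would use thousands of names.
flavours = ["vanilla", "chocolate", "strawberry", "peach", "caramel",
            "pistachio", "liquorice", "raspberry", "hazelnut"]

# Record which character tends to follow each pair of characters.
transitions = defaultdict(list)
for name in flavours:
    padded = "^^" + name + "$"            # ^ marks the start, $ the end
    for i in range(len(padded) - 2):
        transitions[padded[i:i + 2]].append(padded[i + 2])

def invent_flavour():
    """Generate a 'new flavour' one character at a time."""
    state, letters = "^^", []
    while True:
        nxt = random.choice(transitions[state])
        if nxt == "$":
            return "".join(letters)
        letters.append(nxt)
        state = state[1] + nxt

random.seed(4)
print([invent_flavour() for _ in range(5)])
# The model knows only which letters follow which; it has no idea how
# any of these 'flavours' would taste, or whether anyone would eat them.
```

Run it a few times and it will happily produce pronounceable nonsense: the gap between generating plausible-looking labels and understanding flavour is exactly the gap Shane describes.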

Part 2

Is it enough that an AI does what we ask it to do? What else do we need to make sure of?


Discussion

An AI may be able to do what it is asked, but because it cannot fully understand what we are asking, it may well not do what we want it to do. In the example of an AI being asked to assemble a robot, Shane makes the important point that there is a real danger of an AI reaching the goals we set it while not actually achieving the outcome we wanted. Shane points out that when formulating a problem for an AI to solve, we need to set the limits of the problem very carefully (think about the example of the AI designing a robot to walk over gaps in a simulated environment, and how the designer needed to carefully specify limits on how big the robot could be).
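As a toy illustration of this point, the sketch below (with invented numbers, not Shane’s actual simulator) shows an optimiser scored only on distance covered: with no limit on the robot’s size, the winning ‘design’ is a tall tower that simply falls across the course.

```python
# A toy sketch of goal mis-specification (all numbers invented).

COURSE_LENGTH_M = 10.0

def distance_covered(height_m):
    """Toy physics: short robots walk the full course; tall ones can
    only fall flat, covering roughly their own height."""
    can_walk = height_m <= 2.0            # invented rule of thumb
    return COURSE_LENGTH_M if can_walk else height_m

def best_design(candidate_heights, max_height_m):
    """Pick the design that maximises the stated objective: distance."""
    allowed = [h for h in candidate_heights if h <= max_height_m]
    return max(allowed, key=distance_covered)

heights = [0.5, 1.0, 2.0, 5.0, 50.0]
print(best_design(heights, max_height_m=1000.0))  # 50.0: the falling tower 'wins'
print(best_design(heights, max_height_m=2.0))     # 0.5: all walkers tie at 10 m
```

The objective ‘maximise distance’ is satisfied either way; only the carefully chosen size limit forces the optimiser towards the outcome the designer actually wanted.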

Part 3

More seriously, just over halfway through the talk, Shane also discusses a fatal accident in 2016 involving an autopilot AI in a Tesla car. How was the failure in this case a matter of the AI doing what it was asked, but not what was wanted?


Discussion

The Tesla autopilot AI was being used on city streets, rather than on the highway driving for which it was designed, and it failed to recognise a truck driving across the car’s path. The failure seemed to be due to the autopilot being presented with a side view of the truck, whereas in normal highway driving trucks are seen from behind. Apparently, the autopilot mis-recognised the side view of the truck, including the gap underneath it between the front and rear wheels, as an overhead street sign, and tried to drive underneath the truck. In this case, the AI was doing what it had been designed to do, i.e. drive underneath overhead signs, but it failed to understand the difference between a sign and a truck.
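The underlying failure mode, a model confidently labelling an input unlike anything in its training data, can be sketched in a few lines. The features and numbers below are entirely invented for illustration and bear no relation to how Tesla’s vision system actually works.

```python
# A toy nearest-neighbour classifier meeting an out-of-distribution input.
import math

# (width_m, height_m, gap_underneath_m) -> label; all values invented.
training = [
    ((2.5, 4.0, 0.3), "truck (rear view)"),
    ((2.6, 3.8, 0.4), "truck (rear view)"),
    ((6.0, 1.0, 4.5), "overhead sign"),
    ((8.0, 1.2, 5.0), "overhead sign"),
]

def classify(x):
    """1-nearest-neighbour: copy the label of the closest training example."""
    return min(training, key=lambda t: math.dist(t[0], x))[1]

# A truck seen side-on is very wide, with a large gap between its wheels:
# geometrically it resembles a sign far more than any rear view seen in training.
side_on_truck = (16.0, 4.0, 1.2)
print(classify(side_on_truck))            # -> 'overhead sign'
```

The classifier does exactly what it was built to do, match new inputs to what it has seen, but it was never shown a side-on truck, so it confidently gives the wrong answer.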

Part 4

Another example relevant to this course is the case of AI technology making discriminatory decisions about the resumes of people applying for jobs at Amazon. Specifically, this technology had learned to discriminate against candidates who were identified as women. Again, how was the failure in this case a matter of the AI doing what it was asked, but not what was wanted?


Discussion

The Amazon resume-reviewing system was trained on a biased training set: a historical dataset from Amazon in which men were far more heavily represented than women. Using this dataset, the AI learned to discriminate against women, rather than to select the best candidate for the job regardless of identified gender. Note that this is a clear example of how AI technology operating within a context of past discriminatory practices can itself exacerbate such discrimination.
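The mechanism can be sketched with synthetic data (this is emphatically not Amazon’s system): train a tiny logistic model on invented ‘historical’ hiring decisions in which a gendered cue was penalised, and the model dutifully learns a negative weight on that cue.

```python
# A hedged sketch of bias learned from biased history (synthetic data).
import math
import random

random.seed(0)

# Invented history: equally skilled candidates with the gendered cue
# were hired less often (a higher skill bar was applied to them).
data = []
for _ in range(1000):
    cue = random.randint(0, 1)            # 1 = gendered cue on the resume
    skill = random.random()               # 0..1, the 'legitimate' signal
    hired = int(skill > (0.7 if cue else 0.4))
    data.append((cue, skill, hired))

# Fit a tiny logistic model to predict the historical decisions.
w_cue = w_skill = bias = 0.0
for _ in range(1000):                     # plain batch gradient descent
    g_cue = g_skill = g_bias = 0.0
    for cue, skill, hired in data:
        p = 1.0 / (1.0 + math.exp(-(w_cue * cue + w_skill * skill + bias)))
        err = p - hired
        g_cue += err * cue
        g_skill += err * skill
        g_bias += err
    n = len(data)
    w_cue -= 0.5 * g_cue / n
    w_skill -= 0.5 * g_skill / n
    bias -= 0.5 * g_bias / n

# The learned weight on the gendered cue comes out clearly negative: the
# model does what it was asked (predict past decisions), not what was wanted.
print(f"weight on gendered cue: {w_cue:.2f}")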

The rest of this week will proceed under the assumption that ‘AI is possible’. Given this assumption, it is natural to ask what sorts of issues and challenges with AI technology we need to be aware of. To this end, in the remaining sections of this week, you will work through a range of information and activities addressing the following questions:

  • What about diversity, fairness, and equitability in AI?
  • What about sustainability in AI?
  • What about the use of AI for social good?