An introduction to artificial intelligence

2.1 Risks and benefits

Activity 4

Timing: 20 minutes

This video highlights the damage that can be done when the limitations of a specific AI technology are not properly considered before the AI is deployed into situations where it can directly affect people’s lives. Please watch from 42 seconds to 7 minutes 2 seconds.

After you have finished watching the video, try to answer the following questions:

  1. What were the main consequences of how these AI systems were used by the Australian Government?
  2. How might these consequences have been avoided?

Discussion

  • Fully automating the parts of the Australian welfare system that handled decisions about whether there had been an over-payment of welfare led to 400,000 people being placed in debt to the Government. As activist Asher Wolf points out, many of these claims about over-payments and debts were in fact wrong.
  • In Australia, a person becomes eligible for welfare payments when their income falls below a certain amount. The parts of the welfare system that were fully automated decided whether a person had been over-paid by working out whether the welfare they received was consistent with their income. Previously, the algorithm being used, called Online Compliance Intervention, was supervised by a Government employee, so that any discrepancy found between earnings and welfare payments could be checked by a person. When the decision became fully automated, not only was this oversight removed, but the decision itself became far less transparent: large amounts of information were incorporated into the machine decision, with no means for someone appealing against their debt to scrutinise which information was involved. This makes it much more difficult for people to work out whether a claim about over-payment and debt is in fact wrong, which, as Asher Wolf points out, has happened in numerous cases (a minimal illustrative sketch of how such an automated check can go wrong follows this list).
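
To make the kind of discrepancy described above more concrete, here is a minimal, purely illustrative Python sketch of an automated over-payment check. It is not the actual Online Compliance Intervention system: the payment rules, thresholds and figures are invented, and the assumption that annual income is averaged evenly across fortnights is ours, made only for illustration.

```python
# Purely illustrative sketch, NOT the actual Online Compliance Intervention
# system: all payment rules and figures here are hypothetical.
# Assumption (ours): the automated check averages a person's annual income
# evenly over every fortnight and recomputes what they "should" have been paid.

FORTNIGHTS_PER_YEAR = 26
INCOME_FREE_AREA = 300.0   # hypothetical fortnightly earnings threshold
TAPER_RATE = 0.5           # hypothetical payment reduction per dollar above it
BASE_PAYMENT = 600.0       # hypothetical full fortnightly payment

def entitlement(fortnightly_income: float) -> float:
    """Welfare payable for one fortnight, given income earned in that fortnight."""
    excess = max(0.0, fortnightly_income - INCOME_FREE_AREA)
    return max(0.0, BASE_PAYMENT - TAPER_RATE * excess)

def automated_debt(annual_income: float, claiming_fortnights: int,
                   total_paid: float) -> float:
    """Debt the averaging check would raise: amount actually paid minus the
    entitlement recomputed as if income were spread evenly across the year."""
    averaged_income = annual_income / FORTNIGHTS_PER_YEAR
    recomputed = entitlement(averaged_income) * claiming_fortnights
    return total_paid - recomputed   # positive => a debt notice is issued

# A casual worker earns $1,500 a fortnight for half the year, then claims
# welfare (correctly reporting $0 income) for the remaining 13 fortnights.
paid = entitlement(0.0) * 13                  # $7,800, all paid correctly
print(automated_debt(1500.0 * 13, 13, paid))  # 2925.0: a spurious "debt"
```

In this sketch the person was paid exactly what they were entitled to, yet the averaging check raises a ‘debt’ of $2,925, because their income has been spread over fortnights in which they were not actually earning. This is one way an unreviewed automated comparison can produce the kind of wrong debt claim Asher Wolf describes, and why removing human oversight and transparency matters.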

Reflecting back on the above activity, note the role of activists in the robo-debt scandal, in particular the involvement of Asher Wolf, an organiser for the grassroots campaign #notmydebt. Indeed, there are a range of organisations currently responding to similar concerns about the risks and harms of AI, which you might well want to take a look at. For example, the #TechWontBuildIt movement focuses on all aspects of the ethical use of AI (see their Twitter account). However, not only grassroots activists, but also members of professional organisations representing the fields of Computer Science and IT (e.g. the Association for Computing Machinery, one of the key global professional bodies representing these fields), have begun actively publishing and engaging with such important areas as activism within AI (e.g. Taylor et al., 2020) and sustainability and AI (e.g. Schwartz et al., 2020).

Activity 5

Timing: 20 minutes

From the cases you have considered so far this week, clear evidence is mounting that the impact AI and related technology is having on our lives is in fact deliberate: that, as shocking as it may seem, much of this technology has been designed to have these consequences.

Take a look at this video that presents direct evidence of this, some of which comes from those directly involved in building the technology. Please watch from 8 minutes 28 seconds to 16 minutes 18 seconds.

Now that you have watched this video, and have fresh in your mind how technology such as that behind social media has been deliberately designed to be addictive, ask yourself: ‘How much of the technology mentioned here do I use? Can I recognise some of my own experiences in what is being reported here?’ Add your thoughts to the box below.


To complete this section, let’s return to the award-winning documentary ‘All Hail the Algorithm’ by Al Jazeera, which gives a stark warning about a dystopian future we may well be facing if AI systems are not designed properly. In addition, this documentary provides an exemplary approach to analysing the social impact of such technology. For the final activity, you will examine a video excerpt relating to attempts to use AI technology to automate aspects of the U.S. justice system.

Activity 6

Timing: 15 minutes

For this activity, you will watch part of this Al Jazeera video in which Sharad Goel discusses the use of Correctional Offender Management Profiling for Alternative Sanctions (COMPAS), an algorithm used in courts across the U.S.A. to assist with sentencing. Please watch from 10 minutes 47 seconds until 14 minutes and 46 seconds.

While watching the video, consider these questions:

  1. What is the ‘Risk Assessment Score’?
  2. How might such scores be calculated? For example, do you think they use data about age, gender, education or employment history, place of residence, past convictions, race? Do any of these seem better or worse as ways of predicting risk of committing crime?

It is important to note that you are not being asked to resolve challenging ethical dilemmas here (although that is something you would be doing if you took the full version of this course in the OCLC). Rather, you are being asked to reflect on your intuitions about such issues, and to begin building up a sense of how complex they are and where the challenges lie. So, for now, try to respond as honestly and as simply as you can to these questions. If you like, use the free response box below to record your responses; it will likely be very interesting to look back on your answers as the course progresses.

Once you have added your responses, click on ‘Save and reveal discussion’.


Discussion

  1. The so-called ‘Risk Assessment Score’ is in fact a number of scores, such as ‘Risk of Recidivism’, ‘Risk of Violence’ and ‘Risk of Failure to Appear’. Such scores have been used for many decades in the justice system in the U.S.A., and various analyses have concluded that they are flawed (e.g. Harcourt, 2015).
  2. The ProPublica article on COMPAS (Angwin et al., 2016) is commonly cited in this connection; it reports that variables such as race, nationality and even skin colour have previously been used as inputs to such procedures for assessing the risk of recidivism. Even when such overtly discriminatory variables are not used, such algorithms can nevertheless turn out to be discriminatory on the basis of race or ethnicity, as pointed out in Harcourt (2015). A minimal illustrative sketch of how a score of this general kind might be assembled, and how it can encode race indirectly, follows below.
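
As a purely illustrative aside, the sketch below shows how a risk score of this general kind could be assembled as a weighted sum of a few features. It is not the COMPAS algorithm, whose internals are proprietary; the feature names, weights and the 1–10 banding are all invented here. The point it illustrates is that even a score which never takes race as an input can still encode it indirectly through correlated variables such as neighbourhood arrest rates.

```python
# Purely illustrative sketch of a generic "risk assessment score".
# This is NOT the COMPAS algorithm (which is proprietary); the features,
# weights and 1-10 banding below are invented for illustration only.
import math

def risk_score(age: int, prior_convictions: int, employed: bool,
               postcode_arrest_rate: float) -> float:
    """Return a 1-10 risk band from a logistic-style weighted sum."""
    # Hypothetical weights: youth, more priors, unemployment and living in a
    # heavily policed postcode all push the score up.
    z = (-0.04 * age
         + 0.45 * prior_convictions
         + (-0.6 if employed else 0.6)
         + 2.0 * postcode_arrest_rate)
    probability = 1.0 / (1.0 + math.exp(-z))  # squash into (0, 1)
    return round(1 + 9 * probability, 1)      # map onto a 1-10 band

# Two people identical in every respect except neighbourhood:
print(risk_score(age=25, prior_convictions=1, employed=True,
                 postcode_arrest_rate=0.1))  # lower band
print(risk_score(age=25, prior_convictions=1, employed=True,
                 postcode_arrest_rate=0.6))  # higher band, driven by postcode
```

The second person receives a noticeably higher score purely because of where they live, despite an identical age, record and employment status, which is one way such algorithms can turn out to be discriminatory even when no overtly discriminatory variable is used.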

The material in this video touches on a general theme of this course. Consider the historical context of this case, and think about the impact of technology more generally within particular institutional settings – if an institution is somehow flawed, e.g. its normal operation drives racial and/or gender inequality, then do you think it is safe to introduce an automated system which would make this inequitable system more efficient? Another way of putting this question might be: do you think it is safe to design an efficiently inequitable system? Given the choice of deploying such technology with the result of a more efficient yet inequitable system, or delaying such deployment until inequitable outcomes can somehow be dealt with, it would seem the latter is by far the safer and more responsible course of action.