Innovation and accessibility

Are innovation and accessibility compatible?

2.6 Innovations in accessibility

Cartoon panels demonstrating 'multilingual sign language', 'synthetic animations of human gestures' and similar.

The six areas of the UEA's 'Virtual Humans' research 
© University of East Anglia. All Rights Reserved.

Innovations in online education are happening for many areas of accessibility. Some innovations will solely benefit disabled learners, while other changes have the potential to make online education more inclusive and engaging for all.

Promising areas of innovation in accessible online learning include:

  • Animated avatars that are able to perform sign language
  • Speech recognition and, specifically, digital personal assistants
  • Artificial intelligence (AI)
  • Virtual reality
  • Augmented reality
  • Telepresence robots

Animated signing avatars

Adding human-generated sign language videos to online resources can be costly and time-consuming. It can be difficult to find signers with expertise in signing technical language, and if a resource changes, the sign language video must be edited to match. Animated signing avatars offer an alternative way of increasing accessibility for people who are deaf or hard of hearing.

The Virtual Humans team at the University of East Anglia have been developing animated avatars for some time. Their SignTel project applies this innovation to improving the accessibility of educational assessment. A virtual human character, or avatar signer, performs the gestures of British Sign Language signs to convey assessment questions in a manner more accessible to some deaf learners.

Related signing avatar initiatives include:

  • The eSIGN project, which uses avatars for virtual signing on local government websites in Germany, the Netherlands and the UK, but also has great potential for use in online education.
  • The LinguaSign project (Facebook group), which uses age-appropriate avatars performing virtual signing to teach foreign languages to deaf children.

Speech recognition and digital personal assistants

Speech recognition has been around for a long time, but has recently seen a revival of interest due to improved quality resulting from developments in artificial intelligence and in processing power, as well as the increasing popularity of personal digital assistants such as Siri and Alexa.

Voice-controlled personal assistants are becoming more accessible, for example in recognizing computer-generated speech. Their potential for controlling aspects of the online environment is considerable. This, in turn, has implications for improving the accessibility of online learning.

While traditional methods of interacting with digital products, such as typing on a keyboard or using a touch screen, can be challenging for people with physical or cognitive disabilities, voice assistants provide a hands-free option for easier interaction with digital products. Voice assistants offer a much more natural and intuitive way of accessing information and completing tasks, which can benefit not only people with disabilities but also the general population. (Sanabria, 2023)

Artificial intelligence and machine learning

Machine learning, a component of artificial intelligence (AI), offers the ability to extract knowledge and patterns from a series of observations. Software that can understand images, sounds and language is being used to help disabled people to better interact with the world around them, including in educational contexts.

Educators have long used YouTube’s speech-to-text software to automatically caption speech in videos. Now that software can indicate applause, laughter and music in the captions. The Content Clarifier tool developed by researchers at IBM was designed to help people with cognitive or intellectual disabilities by simplifying, summarising or enhancing digital content. For example, figures of speech such as ‘raining cats and dogs’ can be replaced with plainer terms, and complex sentences can be broken down into simpler language.
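The kind of substitution described above can be sketched in a few lines of Python. This is only a toy illustration, with an invented idiom table and function name; it is not IBM's Content Clarifier, which uses machine learning and also summarises and restructures whole sentences.

```python
# A toy text simplifier in the spirit of the Content Clarifier described
# above. The idiom table and the simplify() function are invented for
# this example; a real system would not rely on a hand-written lookup.

IDIOMS = {
    "raining cats and dogs": "raining heavily",
    "a piece of cake": "easy",
    "hit the books": "study",
}

def simplify(text: str) -> str:
    """Replace known figures of speech with plainer wording.

    Matching is literal and case-sensitive -- a deliberate
    simplification for the sake of a short example.
    """
    for idiom, plain in IDIOMS.items():
        text = text.replace(idiom, plain)
    return text

print(simplify("It was raining cats and dogs, but the test was a piece of cake."))
# → It was raining heavily, but the test was easy.
```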

Virtual reality

VR, virtual worlds and gaming feature in the initiatives described by Politis and colleagues’ 2017 paper People with Disabilities Leading the Design of Serious Games and Virtual Worlds, which details examples of the collaborative design of serious games with people who have intellectual disabilities or autistic spectrum disorder. Immersive gameplay is being used to develop employability and transferable skills for people with autism. The paper explains that ‘autistic people have a strong preference for online interaction, which may be due to the familiar structures of online communication, the ability to choose conversation topics and/or the option of responding in one’s own time’. Virtual reality environments are also being used to teach communication skills to people with autism, again outlined in the paper.

Covering similar ground, the In Your Eyes immersive virtual reality game (Freina, Bottino and Tavella, 2016) was used to develop spatial perspective taking (SPT) skills in young adults with mild cognitive impairments. The term ‘spatial perspective taking’ denotes ‘the extent to which a perceiver can access spatial information relative to a viewpoint different from the perceiver’s egocentric viewpoint (e.g. something on my right is on the left of someone facing me)’ (Clinton, Magliano and Skowronski, 2017).
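The left/right example in that definition can be made concrete with a small coordinate transform. The sketch below is an illustration of the quoted definition only, not code from the studies cited; every name in it is invented for the example.

```python
import math

def egocentric(point, observer, heading):
    """Express `point` in an observer's egocentric frame.

    `point` and `observer` are (x, y) positions on a shared map;
    `heading` is the observer's facing direction in radians, where 0
    means 'facing along the map's +y axis'. Returns (right, forward):
    right > 0 means the point is on the observer's right, and
    forward > 0 means it is in front of them.
    """
    fx, fy = math.sin(heading), math.cos(heading)   # observer's forward axis
    rx, ry = fy, -fx                                # observer's right axis
    dx, dy = point[0] - observer[0], point[1] - observer[1]
    return (dx * rx + dy * ry, dx * fx + dy * fy)

# I stand at the origin facing +y; a cup sits one step to my right.
# Someone two steps ahead of me, turned to face me, sees it on their left.
r, f = egocentric((1, 0), (0, 2), math.pi)
print(round(r), round(f))  # → -1 2
```

The negative first coordinate is exactly the ‘on my right, on their left’ effect: the same map position yields opposite signs in the two egocentric frames.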

In light of the revived interest in virtual worlds, research is also ongoing into ways of making them more accessible. A review of Todd, Pater and Baker’s book chapter ‘(In)Accessible Learning in Virtual Worlds’ details developments in this area, including a virtual guide dog.

Augmented reality

Augmented reality (AR) devices have been trialled in multiple learning scenarios, as Blattgerste and colleagues (2019) showed in a review of the research literature in this area. Possibilities include the use of an AR platform with touch recognition to help young people with learning disabilities to understand mathematics; an AR application for handheld devices used to develop understanding of geometry; the combination of AR and emotion detection to help autistic children to learn social communication skills; and the use of a handheld AR app to help young adults with cognitive disabilities to navigate a university campus.

Telepresence robots

A telepresence robot is a remote-controlled wheeled device that is connected to the Internet. The robot’s operator can see and hear what is going on around the device, and can in turn be seen and heard by people near the robot via a tablet screen. These robots enable chronically ill students to access classrooms and experience face-to-face instruction with classmates. Research has shown that these devices provide a positive experience of both educational and social development (Page et al., 2021).

© The Open University