Large language models - the chatty computer

Updated Monday, 29 January 2024

ChatGPT is a chatbot based on a large language model, but is the information it gives accurate? This article looks at the limitations of this advanced artificial intelligence.


In late November 2022, the American artificial intelligence company OpenAI opened its ChatGPT chatbot to the public. ChatGPT caused a sensation, not only amongst artificial intelligence experts but also amongst the wider public, and it quickly became the fastest-growing online service ever.

Within hours of its release, newspapers, magazines and online sites were posting excited articles detailing their experiments with ChatGPT. This excitement was soon coloured by concerns that generative artificial intelligence programs such as ChatGPT could have devastating effects on whole sectors of the economy, ranging from university teaching to the development of movie scripts, as well as wreaking havoc on social media and in political debates.

ChatGPT is built on top of OpenAI’s proprietary GPT (Generative Pre-trained Transformer) software, an example of a so-called Large Language Model (LLM). LLMs are built using neural networks trained on very large volumes of text, most of it gathered from the Internet; just one collection, Common Crawl, archives more than three billion web pages, and many LLMs are trained on multiple data sets. During the training process, LLMs learn the statistical relationships between words in the training set; in the case of ChatGPT, this eventually amounted to some 175 billion relationships derived from 45 terabytes of text.
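To give a flavour of what ‘learning the statistical relationships between words’ means, here is a deliberately tiny sketch in Python. It simply counts how often each word follows another in a toy corpus; a real LLM learns billions of far subtler relationships using a neural network rather than simple counts, so treat this only as an illustration of the idea – the corpus and variable names are invented for the example.

    from collections import Counter, defaultdict

    # A toy corpus standing in for the terabytes of web text a real LLM is trained on.
    corpus = "the cat sat on the mat . the dog sat on the rug ."
    words = corpus.split()

    # Count how often each word follows each other word (so-called bigram statistics).
    follow_counts = defaultdict(Counter)
    for current, following in zip(words, words[1:]):
        follow_counts[current][following] += 1

    # Turn the raw counts into probabilities: the 'statistical relationships'.
    follow_probs = {
        word: {nxt: count / sum(counter.values()) for nxt, count in counter.items()}
        for word, counter in follow_counts.items()
    }

    print(follow_probs["the"])  # {'cat': 0.25, 'mat': 0.25, 'dog': 0.25, 'rug': 0.25}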

Given the enormous volumes of data being processed, most of this training process is completely automated, running for months on giant clusters of computers that consume colossal amounts of energy. However, even all this computing power is not enough; a good part of ChatGPT’s success lies in the work of humans.

Even after the computerised training process was complete, ChatGPT often produced rambling, incoherent or irrelevant results. People were brought in to fine-tune its responses. They taught ChatGPT how to answer certain types of queries, how to structure its answers, corrected it when it was wrong and – crucially – taught it to be safe. Huge efforts were made to teach ChatGPT what was an acceptable answer and what not to say. Rather than answer certain questions, or be drawn into making moral or ethical decisions, the human trainers changed ChatGPT’s behaviour so that it avoids some issues entirely or admits that, as an AI, it cannot make an informed decision.

The intervention of human trainers created an LLM capable of generating helpful, relevant responses whilst largely avoiding the offensive behaviours that bedevilled previous high-profile AI experiments such as Microsoft’s Tay. That said, a recent public trial incorporating ChatGPT into Microsoft’s Bing search engine demonstrated that further work is still needed before these technologies can be considered entirely ‘safe’ for a wide audience.

ChatGPT responds to prompts from users by using the relationships calculated during the training process to construct a linguistically accurate response. Whilst LLMs are sometimes said to use statistics to ‘predict the next word’ in their response, in actuality the process is much more complex, with multiple phrases in a response being generated alongside one another. LLMs lack any ‘understanding’ of the meaning of their training data or of the output they produce, which means that they can produce some ‘interesting’ results.
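Continuing the toy sketch from earlier (and it remains only a caricature: ChatGPT’s neural network weighs the entire preceding passage, not just the last word), generating text then amounts to repeatedly sampling a plausible next word from the learned statistics – fluent output, with no understanding required.

    import random

    # The toy bigram probabilities from the earlier sketch, hard-coded here so
    # that this snippet runs on its own. The words and numbers are invented.
    follow_probs = {
        "the": {"cat": 0.25, "mat": 0.25, "dog": 0.25, "rug": 0.25},
        "cat": {"sat": 1.0}, "dog": {"sat": 1.0},
        "sat": {"on": 1.0}, "on": {"the": 1.0},
        "mat": {".": 1.0}, "rug": {".": 1.0}, ".": {"the": 1.0},
    }

    def generate(start_word, length=10):
        # Repeatedly pick a next word at random, weighted by how often it
        # followed the current word in the 'training' text.
        words = [start_word]
        for _ in range(length):
            options = follow_probs.get(words[-1])
            if not options:  # no known continuation, so stop early
                break
            next_word = random.choices(list(options), weights=list(options.values()))[0]
            words.append(next_word)
        return " ".join(words)

    print(generate("the"))  # e.g. 'the dog sat on the mat . the cat sat'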


ChatGPT has no understanding of ‘fish’ or ‘chocolate’, or of why they don’t combine into a delicious dessert; likewise, it has no idea what a cat is, or a dog, or a person, or a pyramid – or indeed anything. It just knows how to link words into plausible sentences.

ChatGPT sounds just as confident when creating an inedible pudding as it does on any other subject. For this reason, Large Language Models have been described as ‘completely mindless’ and as ‘stochastic parrots’ capable of producing large volumes of text that may be misleading, nonsensical or dangerously wrong – it might be worth skipping the dessert if you know an AI has been planning the menu!

The potential audience for LLMs such as ChatGPT is enormous, and there is scope to incorporate them into a huge variety of products and services. They will power intelligent assistants similar to Siri and Alexa, work as customer service agents on the Internet and over the telephone, and could be used to help diagnose illnesses. They could serve as infinitely patient teachers and tutors, or explore huge data sets to summarise their contents and identify interesting trends otherwise lost in the volume of information. LLMs can translate between languages, mimic dead authors, overcome writer’s block, rephrase materials for new audiences – and perform a thousand other tasks. Soon, you will be using LLMs without even realising it, just as you already engage with AI when you browse your social media feed, ‘improve’ a photograph or search for a piece of music.

Using LLMs can be fun. It is fascinating, if a little unsettling, to see a machine answer complex questions about specialised topics in assured, grammatical English. This fluency and confidence, combined with the friendly, patient nature of ChatGPT, persuades us that we are dealing with a genuine intelligence – and hold on, doesn’t this sound awfully like those first reactions to ELIZA nearly 60 years ago?
