
Academic versus AI

Updated Wednesday, 31 January 2024

We wondered how easy it was to tell the difference between the work of a real-life human and a machine… so you tell us!

We gave the same brief, to write a 250-word article titled 'The risks versus the rewards of using AI in higher education in 2024', to both ChatGPT and an Open University academic. We wanted to find out whether it was easy to spot the difference between the work produced by a human, Mike Richards of the OU's computing department, and that produced by a machine.

Below are both articles. Read them, then return to our social media channels to tell us which you believe is human-written: Article A or Article B? Good luck!

Article A - The risks versus the rewards of using AI in higher education in 2024 

In 2024, the integration of artificial intelligence (AI) in higher education continues to be a topic of considerable debate, raising questions about the risks and rewards associated with its widespread use.

One of the key rewards lies in the potential enhancement of the learning experience. AI-driven tools can personalize education, adapting to individual learning styles and pacing. This tailoring fosters engagement and comprehension, allowing students to grasp complex concepts with greater ease. Additionally, AI can automate administrative tasks, freeing up educators to focus on more interactive and personalized teaching methods.

However, these rewards come hand in hand with risks that demand careful consideration. Concerns about data privacy and security persist as institutions amass vast amounts of student information. The misuse or mishandling of this data could have severe consequences, compromising student privacy and confidentiality.

Another risk involves the potential perpetuation of biases within AI algorithms. If the data used to train these systems contain inherent biases, AI applications in higher education might inadvertently exacerbate existing inequalities. For example, biased algorithms could impact admissions processes, exacerbating disparities in educational opportunities.

Moreover, there's the risk of over-reliance on AI, potentially diminishing the human touch crucial for effective teaching and mentorship. Striking the right balance between technology and human interaction is imperative to ensure a holistic and nurturing educational environment.

In navigating the evolving landscape of AI in higher education, institutions must carefully weigh these risks against the rewards, employing robust ethical frameworks and continuously reassessing their AI implementations to foster an inclusive, secure, and enriching learning environment for all.


Article B - The risks versus the rewards of using AI in higher education in 2024

This year, we’re going to see generative AI begin the transformation of higher education towards personalised learning where each student has a customised experience of learning that best supports their skills.

Generative AI can reword and rephrase teaching materials to suit individual students. It can simplify complex documents which can be intimidating, as well as summarise large volumes of text and data into shorter, more easily understood chunks. Generative AI can assist people learning in a different language - or indeed, help them learn a foreign language.

2024 will see the first AI ‘study buddies’; tireless, patient partners who can answer questions, come up with quizzes to test knowledge without judgement and help identify errors in solutions - at any time of the day or night.

These are still early days for AI in education. While many people have heard of AI, many fewer have used it and still more do not know how to use it in their studies. It will be necessary to teach people how to use AI appropriately and safely.

There are misconceptions about AI - not least that programs like ChatGPT ‘understand’ what they produce; in fact, they use statistics to create sentences that read like text written by a human being. AI can get things very wrong; it can make things up or misrepresent a topic but does it in such a way that it sounds entirely credible. We must retain our ability and willingness to question AI materials. Just because it sounds credible, doesn’t mean it is.


Think you know the answer? Registered OpenLearn users can vote in the poll above (log in to see it if you haven't done so already), or head to our Twitter channel to cast your vote in the live poll online.

