
Peter Macilwee Post 1

21 January 2026, 3:26 PM

AI in Education

Hi,

I tend to agree with the evidence presented in both articles, but I would have to say Hodges and Ocak (2023) offer the more persuasive argument. They support their case with comprehensive, reliable evidence for and against the integration of AI in education, which reinforces their point about its inevitability. The guidance they cite is useful for us to think about legal and ethical frameworks.

Although I do agree with Mitchell-Yellin (2023) that AI will be counterproductive in terms of learner and societal health, resisting it would be a bit like trading the car back in for the horse and cart. Increased well-being is never going to win over increased productivity in evolving modern societies, sad as that is. They do make a compelling argument, though, and citing evidence sources would have increased the 'face value' of their article.

Peter


Cath B Post 2 in reply to 1

25 January 2026, 6:48 PM

As an educator, it's hard to disentangle my own experience from evaluating the arguments made. That experience is that, for significant numbers of students, use of generative AI is undermining the acquisition of key subject skills and knowledge, as well as more general valuable attributes such as problem-solving and persistence.

Undoubtedly we can't put generative AI back in its box, but the societal shifts needed to use it only constructively, in a way that doesn't undermine human skill development and indeed integrity, will be substantial.


Hannah Kerry Louise Lane Post 3 in reply to 2

25 February 2026, 10:55 PM

This is also what I suspect. No cogitating, no learning.


Hannah Kerry Louise Lane Post 4 in reply to 1

25 February 2026, 10:58 PM

I think their own practice was their evidence, but the second article in particular had a lot of opinion stated as fact, I thought.


Holland Morrell Post 5 in reply to 1

27 February 2026, 1:23 PM

While Mitchell-Yellin uses persuasive language and certainly gives food for thought, I don't believe the main arguments are backed up by evidence. It's an opinion piece. While I share some of Mitchell-Yellin's concerns, I disagree with his point about ChatGPT being used to replace human interaction. That's an assumption. When I consult with AI for an extra opinion, I'm doing that versus working alone - not versus consulting a human. Evidence is needed before the assumption can be claimed as truth.

I do agree with the idea that using AI means you aren't practising your skills and therefore may lose them over time. However, I don't believe this is a solid argument for never using AI.

Hodges and Ocak provide more references and I believe they do a better job (not a perfect job) of backing up their claims with evidence. All three authors are university professors, so we can reasonably expect that they know their stuff. However, Hodges and Ocak both specialise in tech. Does this mean we put more stock in their opinions because of their expertise, or less because they may be biased in favour of AI?

Neither article convinces me, although Hodges and Ocak's views are closer to my own. My position on the subject is open to change as it evolves and new evidence comes to light.


Jonela Carmada Marisa Wilson-Edjoukou Post 6 in reply to 5

2 March 2026, 8:22 AM Edited by the author on 2 March 2026, 8:35 AM

I have read through the previous posts and I agree with the points each of you has made in this discussion of generative AI in education.

For me, the most persuasive article is Hodges and Ocak (2023). As Holland mentioned in her post, the authors provided more references, presented evidence to support the integration of AI in education and also offered a deeper line of thought. The information in the article allowed me to reflect on my own use of AI, along with how I can work towards using it in a more meaningful, ethical manner.

I also appreciated the Mitchell-Yellin (2023) article because the author presented substantive opinions on the issue through a meaningful discourse. This discourse also prompted deep reflection on my use of generative AI in my daily life. The points also stirred up within me a sense of fear and concern, especially in terms of choosing AI over human interaction. In my present context I am unable to communicate naturally with friends and family.

In that regard, I do tend to use the platform for advice, critique, ideas and suggestions instead of reaching out to friends or family. I do fear that I may be using it to replace my usual conversations with friends and family about daily issues I face, and so I have vowed to reach out more to my social network, despite the differences in time zones. However, I believe that the better approach to anything new is education and understanding, instead of instilling fear towards new possibilities.

All in all, I think that both articles lacked empirical data gathered through a clear scientific procedure. In that regard, there is still crucial work to be done in evaluating the benefits and challenges of generative AI in education. However, I tend to support the integration of generative AI into education. I see the integration as fostering a richer educational experience, especially when teachers as well as students are taught how to use such tools meaningfully and ethically.


Kirstie Willis Post 7 in reply to 6

2 March 2026, 5:30 PM


I think the second article is more persuasive.  

The first article is based on personal opinion and there is no evidence provided to support the author's claims.

I disagree with the claim that if you use generative AI to create personalised emails, you are encouraging students based on a lie. When I worked in an office doing word processing, many letters and documents were personalised using macros and templates; does that mean that if the recipient was encouraged by the content of that letter, the feeling was based on a lie? If a teacher uses generative AI to produce an email, they are still instructing the AI with the message they wish to convey. If generative AI saves the teacher time by not having to construct and personalise the email, that time could be better used to plan and evaluate lessons.

I think not equipping learners with these skills, as the first article suggests, is not practical. The second article provides evidence to support my views: it discusses a publication by UNESCO that provides guidelines for policy makers.

The second article also included the results of a poll of students, in which 83 percent of students believe generative AI "will profoundly change higher education in the next 3 to 5 years" and 65 percent believe "the use of generative AI in higher ed has more benefits than drawbacks". The findings of this poll support my opinion.

I think generative AI should be used as a tool that aids learning and not a solution for all learning. We need to be equipping learners with skills that are transferable to their chosen profession once they leave education.


The author of the first article uses his personal opinions to voice claims and has not provided evidence to back them up.


Mick Wilkinson Post 8 in reply to 7

6 March 2026, 2:03 PM

The second paper provides a more balanced appraisal and provides reference to evidence used in support. The first is very much an opinion piece with little evidence to support claims being made.

In my context as a university academic, I agree that universities must prepare students for an AI-filled world of work and have a duty to teach how such tools can and should be used (in the context of their studies). I fear, however, that higher education will become more about teaching students how to use tools and less about teaching them to become independent and critical thinkers, able to seek, find, and evaluate the quality and rigour of evidence, and to apply the better quality evidence to improve and/or support their practice. While AI can be used to support the creative process, in my experience it is more often used by weaker students to avoid thinking, sadly without realising that AI can't actually think.


Chitra-Niss Asa Post 9 in reply to 8

10 March 2026, 10:59 AM

I agree Mick.

Although the second paper by Hodges and Ocak is more persuasive due to the inclusion of supporting perspectives from other authors, I also acknowledge and respect Mitchell-Yellin's (2023) viewpoint presented in the first paper. As an educator, I remain concerned that the increasing reliance on AI tools may weaken students' critical thinking and creativity. Nevertheless, Hodges and Ocak (2023) make a compelling argument that AI is now an inevitable part of contemporary society. Therefore, higher education institutions must develop clear policies and guidelines to ensure the responsible and ethical use of AI in teaching and learning.


Alison Hines Post 10 in reply to 9

14 March 2026, 12:33 PM

I find the second article more persuasive; however, neither article presents empirical research evidence, and both primarily offer interpretation and opinion. Their 'evidence' is at best anecdotal or based on secondary sources. They offer their perspective rather than evidence-based practice.