Philip Price Post 1
• 22 February 2025, 11:46 AM • Edited by the author on 22 February 2025, 11:47 AM
A.I
Which article do you think is the most persuasive, and why?
I think both articles are equally persuasive and make good points about why you should and shouldn't use A.I. I agree that it can't just be ignored and shut away by educational institutions, and that it brings many benefits to both learners and educators. While it should be embraced, rules and regulations need to be followed about its use. I don't think institutions and educators currently have a full understanding of what it can do and how it can be utilised, and there are too many grey areas around its use.
Is there anything obviously missing from either article? You may find it useful to think back to the discussion of evaluating research reports in the previous step.
The first article, arguing against, seems to be more emotive and lacks any reference to research, analysis or views from other educators and learners.
Does either article convince you that the authors are correct? If so, which evidence convinced you?
I think both authors give good arguments in different ways. Both are equally persuasive. The first is more emotive and may play on the fear around technology and its impact on the world and job roles. The second uses research and the view of others to argue that its use is inevitable and institutions and educators need to prepare and adapt accordingly.
If you have already made a decision about the use of generative AI in your context, which evidence did you draw on when making that decision?
I'm 50/50, with a sway towards believing the use of AI is inevitable and that it needs to be embraced.
Thank you Philip, it is an interesting discussion, and I agree facts need to be focussed on. I also found the first article emotive, and that 'put me off'. But also the logical arguments in the second resonated. We have used tools and technology forever, sometimes positively and sometimes for nefarious purposes. I already recognise its integration in the work I do, but not everyone has, and the increase in awareness of its inevitable use is welcome.
I agree with you, Philip, about the persuasive power of both articles. The first one lacked empirical evidence, yet it resonated with personal experiences and hence was persuasive. The second was convincing because of the research it cited. At the same time, I did not check and verify information from the references to confirm whether the article quoted or paraphrased that information accurately.
The first relies on examples where the second uses research. The examples may be hypothetical or ones the author has experienced; I could not find clues indicating which, so I presume the author describes generic situations. The second glosses over "issues surrounding generative AI, such as ethical concerns, copyright and intellectual property questions, and biases within the training data" — but are these not an integral part of what it is trying to endorse and advocate? Why does the article choose not to discuss them?
There is truth in how AI has penetrated our lives, but again, whether we want to use it or not is a choice that we make. If we want to join the AI rat race, we are welcome to do so. We may choose Yellin's path. Alternatively, we could explore how AI could help us avoid "deception" and personalise our content (I mean, create content such that it feels like I have done it myself).
I haven't made a decision about AI in my context. My use of AI depends on my needs. For instance, I could have given both the articles to ChatGPT along with the questions and generated a well-written response for this forum. I am choosing not to do it.
Ethics of AI and references to support knowledge claims
I found Mitchell-Yellin's opinion piece to have a huge blind spot. To me, the single biggest concern in using AI to brainstorm, write an abstract or prepare slides is not alienation from other people or alienation from one's own work, but ethics, given the factual inaccuracies that pervade text generated by AI, even when based on one's own work. As an academic, I am not wary of using AI-generated text (or of encouraging my students to use it) because increased efficiency raises the bar of expectation, but because of the lack of control and in-depth knowledge. Even if references are included as embedded links, I would still like to see what they are before I click on them; requiring one to just click (given phishing dangers, etc.) seems unprofessional.
The Hodges and Ocak (2023) piece was more convincing because of its references to peer-reviewed research, but it lacked methodology and evidence to back up most of the claims made in the article. I was most convinced by the point on assessment, because I have reached similar conclusions based on my own observations. As explained above (by Smruti), this should be verified by following up on the references cited for this point.
Michelle Hennelly Post 6 in reply to 1
• 26 February 2025, 4:07 PM • Edited by the author on 26 February 2025, 4:09 PM
I agree with you Philip.
I thought the first article relied on sparking an emotive reaction in agreement with the author's own opinions. The second was less biased and gave a more balanced argument based on empirical evidence.
Neither of the articles swayed my opinion either way. I prefer to make my own mind-up, after evaluating a technology for myself.
As it stands, I have tried using some AI large language models (LLMs), and at the moment I am not particularly impressed in terms of my own personal uses. I do, however, feel AI has promise, but only if developed correctly and within ethical contexts. Currently AI does have its uses, particularly in administrative work.
Between the two articles, the one written by Hodges and Ocak was more persuasive to me as the tone was more factual and backed up by other resources when compared to Mitchell-Yellin's article, which to me was more of a 'thought piece'/personal opinion even if background material links have been provided.
Whilst both articles presented strong points for or against the use of AI, I couldn't help but notice that Mitchell-Yellin's piece lacked the research findings and quantitative data that could have made its position on AI use more convincing. This doesn't mean that I think their opinion about AI holds no value, nor that it is totally incorrect compared to Hodges and Ocak's perspective; but if we're going to dissect Mitchell-Yellin's work, it lacked some of the elements we should look for, as discussed in evaluating research reports.
As Philip above has mentioned, AI still has a lot of grey areas. I have tried using AI myself prior to reading these articles and to be honest, it leaves much to be desired at this stage. That said, there is no denying that it is becoming more and more part of a lot of the online tools or apps we are using and I do see why some find it useful. It will be interesting to see its development in the coming years and I can only hope it will be utilised for helping us achieve a better society.
The first article by Mitchell-Yellin contains no references to supporting evidence. The second article by Hodges & Ocak has the hallmarks of a persuasive research piece because it contains references to evidence to support its points. On scrutiny of the references, however, it's clear that most of them refer to other articles that don't have any references to supporting evidence either. For this reason, I don't find either piece persuasive and wouldn't use either to support an academic piece of work.
I think generative AI is a good technology that should be used properly and carefully. I've reached that conclusion by reading articles, listening to podcasts and participating in discussions.
I find both pieces to be thought pieces rather than pieces of empirical research. The article by Mitchell-Yellin didn't display any empirical research, didn't report any independent research, and didn't have supporting references. There is a link from the word 'evidence' in the article, but this requires the reader to follow it through; why not just include some of the evidence in the article? The second article, by Hodges and Ocak, was again to my mind more of a thought piece; the authors didn't conduct their own independent research but used evidence and other literature to support their argument. However, at least this evidence and supporting literature was well referenced, so the reader could follow their thought processes. With this in mind, I found article 2 to be more persuasive than article 1. This could also be related to my own thoughts on the subject of AI, and potentially an internal bias towards finding articles that support my views more persuasive. Something to look out for when evaluating research reports, perhaps.
AI does raise a certain mix of emotions, but I do feel that it is important to teach people how to use AI tools properly and to acknowledge when they are being used, bringing into play the ethical considerations. AI is certainly making its presence felt, but it can also be so hidden that people don't know when they are using it. I believe that AI should now be folded into digital and information literacy skills so that individuals can make informed decisions about its use. The use of AI in education will also be guided by the policy of the overarching educational institution: you may be quite happy for students to use it in your module, but the institution might have a different viewpoint.
Just to echo others' thoughts on the two pieces: the Mitchell-Yellin article was a very clear opinion piece appealing to the reader's own biases, whereas the Hodges and Ocak piece listed research supporting its argument.
The latter was also the more realistic article. AI is not going to go away, learners are going to take advantage of it, and there will be an increasing expectation from employers that graduates understand how to use this resource to its full potential, regardless of industry.
How educators incorporate the teaching of AI — how it can be used effectively, how to critically approach its responses, and where it is best not to use it — is going to be a significant challenge.
The article Integrating Generative AI into Higher Education: Considerations (Hodges and Ocak, 2023) captivated me the most because it questions the duality of AI on many college and university campuses. Also, it discusses the ethical integration of AI into education.
Jonathan Robert Donnelly Post 13 in reply to 1
• 3 March 2025, 10:42 PM • Edited by the author on 3 March 2025, 10:45 PM
My critique of Benjamin Mitchell-Yellin’s article, AI’s Efficiency Gains Come at the Cost of Alienation
Benjamin Mitchell-Yellin’s article, AI’s Efficiency Gains Come at the Cost of Alienation, argues AI boosts efficiency but isolates people. He claims tools like ChatGPT replace collaboration, reduce engagement, and create unrealistic expectations. While I see the concerns about workplace pressures and lost human connection, I think his perspective lacks imagination.
He assumes AI replaces rather than enhances interaction, but I see it differently. AI’s role depends on how we use it. Instead of reducing collaboration, it can refine it. In a MOOC with thousands of posts, I would love to see AI analyse discussions, identify themes, and highlight key insights. This wouldn’t replace engagement—it would help ensure valuable ideas aren’t lost in the noise.
I also think he ignores that AI is still in its early days. Dismissing it based on its current limitations overlooks its potential. AI isn’t inherently alienating—it depends on how we shape it, how we integrate it.
Charles Hodges and Ceren Ocak’s article, Integrating Generative AI into Higher Education: Considerations, takes a balanced approach, arguing that AI’s integration is inevitable and should be embraced. While I agree that frequent quizzes can improve learning, I worry that students may find ways to bypass them, making academic integrity a concern.
I think that short online exams with AI-driven proctoring could be more effective, balancing flexibility with fairness. Camera monitoring and browser plugins could help detect and prevent cheating.
The discussion must go further. Institutions should explore short online exams or interviews, or come up with completely new types of assessment.
Jonathan Robert Donnelly Post 15 in reply to 1
• 3 March 2025, 11:06 PM • Edited by the author on 3 March 2025, 11:07 PM
After reviewing both articles, I find Charles Hodges and Ceren Ocak’s perspective more practical than Benjamin Mitchell-Yellin’s. While Mitchell-Yellin sees AI as isolating and diminishing human connection, Hodges and Ocak recognize its inevitability and focus on meaningful integration.
A key issue is assessment. Hodges and Ocak suggest frequent quizzes, but I believe these alone won’t ensure academic integrity. AI-driven proctoring with camera monitoring and browser plugins could help, as could interview-style assessments for deeper evaluation.
Beyond assessment, AI has the potential to enhance learning. In large MOOC forums, AI could identify key themes and highlight important discussions, helping students navigate the overwhelming flow of information—like a lighthouse cutting through the fog.
I agree
But the one thing I feel was left out of all the doom and gloom of AI taking over the world is that it can be another inclusive tool, used to support others and create a level playing field for them. I am dyslexic, so my grammar and essay preparation will always be flawed. AI could help me resolve some of these issues that others do not have.
That is one point I think has been left out: support for inclusivity and for people with disabilities in education.
Which article do you think is the most persuasive, and why?
I found the article Why You Shouldn't Use ChatGPT most persuasive because it referred to beliefs about, and experiences of using, this AI tool. For example, I feel far removed from the information the AI tool provides after I set it a task. It feels alien to me.
Is there anything obviously missing from either article? You may find it useful to think back to the discussion of evaluating research reports in the previous step.
Both articles read like blog posts. Regardless, both are missing important elements that would demonstrate the reliability of the research. The only difference is that the second article includes a clearly labelled section of references. The second article also includes in-text citations that can be verified.
Does either article convince you that the authors are correct? If so, which evidence convinced you?
The second article convinced me that the two authors are correct. This is because I am able to carry out further checks on whether the authors are professors at Georgia Southern University. The article also refers to a stock of references that I can corroborate.
If you have already made a decision about the use of generative AI in your context, which evidence did you draw on when making that decision?
When making my decision about the use of generative AI, despite finding the first article more convincing, the evidence I drew on came from two articles/blog posts mentioned in the second article. This is because it included a plethora of references.
