How do the presenters demonstrate that their findings are trustworthy?
In the episode I listened to, the speaker described the methods used in their research and the outcomes. They also directed the listener to the full case study published in the show notes.
I listened to https://www.centerforengagedlearning.org/measuring-what-learners-do-with-feedback/ because I was intrigued by the title of the study. I really want to know what learners do with the feedback they receive. The podcast focuses on how the instrument for studying feedback literacy was developed, and it directs the listener/reader to the published article.
I see that a lot of effort was put into developing the instrument, with several stages of "feedback". However, either I misunderstood the podcast or had different expectations of it: I presumed the podcast would say a word or two about the findings from the study, and not only that the instrument was reviewed and revised.
On the trustworthiness of the findings, I am not sure what exactly the findings were. Was it that the instrument needed revision? If so, I think the repeated testing of the instrument did make a lot of sense.
I probably need to read the entire article to get a better perspective on this research.
Thank you for reading my post,
Smruti.
Katie Patricia Jones Post 4 in reply to 2
• 26 February 2025, 7:03 PM
Hi Smruti
Whilst I haven't listened to that one, I would assume the re-testing was to test the reliability of the instrument - what's known as test-retest. Basically, if you use the same instrument to test the same thing (time and time again) and you come up with the same or similar findings each time, you have test-retest reliability. Hope this helps.
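To make that concrete, here is a minimal sketch in Python (the ten learners and all of their scores are entirely made up, just to illustrate the idea): give the same instrument to the same learners twice and check how strongly the two sets of scores correlate.

import numpy as np

# Hypothetical scores from the same instrument given to the same ten learners twice
time_1 = np.array([12, 15, 9, 20, 14, 17, 11, 18, 13, 16])  # first administration
time_2 = np.array([13, 14, 10, 19, 15, 17, 12, 18, 12, 17])  # second administration

# Pearson correlation between the two administrations;
# a value close to 1 suggests good test-retest reliability
r = np.corrcoef(time_1, time_2)[0, 1]
print(f"Test-retest reliability (Pearson r): {r:.2f}")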
I'm off to have a listen now to check.
Update: having read the transcript, it wasn't a retest. They got experts in to check their items were measuring what they intended to measure. The feedback meant they needed to revise these items (possibly due to overlapping items) until they all agreed that the instrument would be an accurate measure. Hope this makes it clearer than my original answer (posted without listening!).
Kind regards
Katie Jones
Measuring What Learners Do with Feedback
The article is situated in the literature and based on empirical evidence from an instrument developed to measure feedback literacy. The instrument was developed and validated, and then administered to learners in four countries around the world. The article is described as a useful example of how to go about developing and validating an instrument. In addition, the feedback literacy instrument is published as an appendix.
Michelle Hennelly Post 3 in reply to 1
• 26 February 2025, 4:38 PM
I listened to Online Students and the First-Year Experience, https://www.centerforengagedlearning.org/online-students-and-the-first-year-experience/. It did exactly as it promised and gave a snapshot of how three online universities support first-year students with orientation, study tips, time-management tools, and a seminar course.
The second was Metacognitive Intervention and Student Success, https://www.centerforengagedlearning.org/metacognitive-intervention-and-student-success/. It focuses on interventions put in place for vulnerable students (those on academic probation) who may be struggling with their assessments in their first year. These snapshots should spark just enough interest to encourage the listener to access the full paper and study to find out more.
I also listened to Metacognitive Intervention and Student Success, and whilst they do mention two authors, I'm not sure how you would access their article, as there's no date or title mentioned. At the same time, it is probably accessible through the www.centerforengagedlearning.org website, which is part of Elon University and should be using trustworthy sources. However, I listened to the podcasts on my phone.
For this activity, I listened to two of the podcasts. I also read the transcripts to further confirm my understanding of the material.
The first podcast I listened to was Episode 15 – Future-Oriented Feedback. The presenter straightaway identified the knowledge claims made by the researchers about providing future-oriented feedback (feedforward). There was a part in the podcast where the presenter discussed the summary and rationale of the research, along with how the study was conducted. The podcast concluded with the research study’s findings along with recommendations.
The second podcast I checked out was Episode 22 – Alternatives to Studying Abroad During the COVID-19 Pandemic. This one was about students in Asian countries exploring the alternatives to studying abroad that higher education institutions adopted during the COVID-19 pandemic. The researchers wanted to find out how the virtual formats for the programmes presented in the study affected students’ intercultural competency scores. The research was presented similarly to the podcast mentioned above. I did notice that both podcasts did not really present details in terms of related literature or supporting evidence, which I suppose is due to the podcast being bite-sized. That said, listeners would still pretty much get the gist of both studies, and they do provide links for those who want to do a deep dive.
Mentoring Undergraduate Research in Global Contexts
This article analyzed the literature on mentoring undergraduate research and compared the findings of this work for global contexts to the existing body of literature. Their first knowledge claim is that a separate framework is required for mentoring undergraduate research in global contexts versus mentoring undergraduate research generally; essentially, the two are different. The survey evidence presented highlights the most important aspects of how to mentor undergraduate research in global contexts, along with the challenges.
I will look up this article and read it. I also think that it would be fun to submit an article to have it read and summarized, and to explore the many other 60-Second SoTL podcasts! What a wonderful resource!
Episode 47 – Measuring What Learners Do with Feedback
Dawson, Phillip, Zi Yan, Anastasiya Lipnevich, Joanna Tai, David Boud, and Paige Mahoney. 2023. “Measuring What Learners Do in Feedback: The Feedback Literacy Behaviour Scale.” Assessment & Evaluation in Higher Education. https://doi.org/10.1080/02602938.2023.2240983
Episode 44 – Shaping Student Study Strategies
Maurer, Trent W., and Emily Cabay. 2023. “Challenges of Shaping Student Study Strategies for Success: Replication and Extension.” Teaching and Learning Inquiry 11. https://doi.org/10.20343/teachlearninqu.11.18
In both - and this is a strength of this snapshot style of presenting - the presenters used the inspiration for the study to bolster it and gave a breakdown of the methodology and reasoning. They also provide the article names and authors for listeners to check. I think the presenters do a good job of summarizing the source, but it is a little sparse and leaves you wanting more.
They do cover the methodologies and the results particularly well, which are arguably the parts most interesting to someone who wants to look into a study quickly before deciding to invest the time properly.
I listened to the following podcasts:
Moore, J.L. (2023) 60-Second SoTL Episode 34: Supporting SoTL through a Regional Community of Practice [Podcast]. 18 May 2023. Available at: Supporting SoTL through a Regional Community of Practice - Center for Engaged Learning. (Accessed: 1 March 2025).
Moore, J.L. (2023) 60-Second SoTL Episode 18: Experiences of Students with Disabilities in Work-Integrated Learning [Podcast]. 19 January 2023. Available at: Experiences of Students with Disabilities in Work-Integrated Learning - Center for Engaged Learning. (Accessed: 1 March 2025).
In both cases, I was able to identify the summary of research, discussion of how the study was conducted, discussion of research findings, rationale and recommendations. Supporting evidence was provided as further links on the podcast page. In terms of trustworthiness, I suppose that the articles were published in open-access academic research journals, and the authors were named as part of the podcast, but it would be a case of going to the article and reading through everything as opposed to relying on a 3-4 minute summary podcast.
Kellieanne McMillan Post 13 in reply to 1
• 1 March 2025, 5:15 PM
Thank you to everyone for their contributions. I found this task to be quite interesting. It's nice to learn from a different type of media (podcasts), and I'm glad to now know of the Center for Engaged Learning (being new to teaching!)
The podcasts I chose are:
Moore, J.L. (2022) 60-Second SoTL Episode 8: Challenges to Providing Effective Feedback [Podcast]. 20 October 2022. Available at: Challenges to Providing Effective Feedback - Center for Engaged Learning. (Accessed: 1 March 2025).
Moore, J.L. (2022) 60-Second SoTL Episode 6: Amplifying Indigenous Voices in Work-Integrated Learning [Podcast]. 6 October 2022. Available at: Amplifying Indigenous Voices in Work-Integrated Learning - Center for Engaged Learning. (Accessed: 1 March 2025).
Both podcasts described a published research study.
In terms of the features we were taught to look for when evaluating research papers, I could hear a summary of research, rationale for research, research methods, presentation of findings, implications, recommendations and conclusions. Neither podcast included anything about ethical considerations, but I don't think it detracted from the quality too much. A link to the research paper and additional supporting evidence to accompany the podcast was provided for both podcasts.
I am inclined to trust the information provided in these podcasts; however, it would (of course) require a read-through of the research paper to accurately ascertain their trustworthiness. On the plus side, they do seem to provide a brief overview of the research paper, which is enormously helpful as a tool for selecting research papers that are relevant or of interest. I'm thinking that if they can indeed be trusted, then brief podcasts are an excellent way of introducing researchers to specific research.
Metacognitive Intervention and Student Success
I listened to Metacognitive Intervention and Student Success, available here: https://www.centerforengagedlearning.org/metacognitive-intervention-and-student-success/. The presenter demonstrated that the findings are trustworthy by thoroughly describing the materials and methods used. The presenter also referred the listener/reader to the source article.
When evaluating a research report, the following questions provide useful guidance:
- Is the study’s research question relevant?
- Does the study add anything new to current knowledge and understanding?
- Does the study test a stated hypothesis?
- Is the design of the study appropriate to the research question?
- Do the study methods address key potential sources of bias?
- Were suitable ‘controls’ included in the study?
- Were the statistical analyses appropriate and applied correctly?
- Is there a clear statement of findings?
- Does the data support the authors’ conclusions?
- Are there any conflicts of interest or ethical concerns?
(Adapted from OU)
Most of these requisite steps for evaluating a research paper were captured in the presentation I listened to.
I listened to Measuring What Learners Do with Feedback - Center for Engaged Learning. One of the reasons I found this to be credible was that the information was peer reviewed, and the study is published in the journal Assessment & Evaluation in Higher Education. The authors are respected in their field too, with other publications. In addition to this, the work was grounded in theory and had a rigorous methodology.
