Implementing a Modern Hybrid Geology Curriculum: A Case Study from a South African University
Abstract. Contemporary geology education is increasingly required to advance adaptability, intellectual agility, and professional competence in response to 21st-century societal and industry needs. Whilst quality assurance and accountability frameworks underscore employability, limited clarity remains regarding the alignment between university geology curricula and evolving societal and industry demands. This study examined the implementation of a geology curriculum at a recently established university in the Northern Cape Province of South Africa using a hybrid learning model. The model combined conventional face-to-face interactive discussions with pre-recorded online learning materials, leveraging smart technologies to support authentic, personalised learning. Implementation efficacy was evaluated through solicited comments from external examiners, peer reviews, institutional and industry experts, and through students’ performance and feedback. Students’ learning was evaluated through moderated online and in-person theoretical assessments, complemented by field- and laboratory-based practicals, and ultimately by examining pass rates. Students’ feedback, in turn, was collected anonymously using a suggestion box and a standardised institutional questionnaire designed for quality promotion and assurance evaluation, administered at the end of 2025. Statistical triangulation across the diverse data sources shows that the hybrid delivery model can enhance students’ theoretical comprehension, practical competencies, preparedness for professional practice, and sustainable societal involvement. The study contributes empirical evidence from a resource-constrained and under-researched context, demonstrating how aligned hybrid curriculum design can strengthen teaching, learning, and assessment practices in geology education. These insights inform ongoing debates on curriculum innovation, quality assurance, and industry relevance in higher education.
I think the authors are approaching a strong problem. Undergraduate geoscience education is where most people are introduced to geology, so putting our best foot forward here is important. That said, I think the manuscript requires major revisions before publication. Most of this critique concerns the methodology, and it centres on three claims the manuscript makes:
1. The authors claim to use qualitative methods, at least in part, for the data collection.
2. The authors claim an innovative instructional design to help students learn geology.
3. The authors are testing the efficacy of the innovative instructional design.
First, the authors claim to be using qualitative methods for at least part of the data collection. This is not a qualitative investigation. The authors may have used some qualitative (open-ended) data, but only to create bins for some descriptive statistics (how many said this or said that). I saw no interpretation of the qualitative data. I don't even know how much was collected. Was it just survey data, or were there interviews? Qualitative research is mainly about investigating how the participants make meaning. This requires a LOT of data (writing, interviews, conversations, etc.) to try to understand how the participants are developing their understanding of the content material (in this case). Qualitative research is NOT about whether an instructional intervention was effective, or how effective it was. It would be more about how the participants experience this instruction and how they learn from it. Qualitative research is not supposed to be used for generalizing to larger populations. It is about the individual and about hearing the voices of the participants or observing their thinking.
Second, the authors are exploring the efficacy of a novel instructional structure. This is fine, but they do not describe (AT ALL) the design of the instruction. What is novel about the instruction? Why is it considered a hybrid? What parts are hybrid? Why those parts? What content material was presented? Who were the students? Were they majors? Did they have any prior experience? What were the assessments of the students? How did the authors use them to reach their conclusions? How long was the intervention? Did students meet every day or every other day? How many field excursions were there? How did these differ from any regular geoscience course? How did AI play a role? Was the remote part of the hybrid design synchronous or asynchronous? Basically, what was the design of the course? What effective teaching strategies did instructors use? Why those particular ones? Was there more traditional instruction? In which parts? Why those parts? It is hard for the reader to understand what the participants are going through.
Third, this is an investigation into the efficacy of an innovative instructional design. However, if the authors do not know where the participants' knowledge stood at the beginning (I did not see any pre-intervention measurements), how can they say anything about the effects of students' participation in the instructional intervention? Efficacy should be a measure of growth. This manuscript does not show growth, only the endpoint. And with an n of 13, making any kind of generalization is dubious. There are some statistical methods (and I am not a statistician) that can be useful in showing a difference between pre- and post-intervention. Mainly, what I saw was participant self-report (Did you like the instruction? Were you engaged? etc.). While these kinds of metrics are important, they are not measurements of the efficacy of instruction. For that, you would need to assess what students understand after instruction and how they understand it. The authors talk about alignment among the goals, the content material, the assessments, and the teaching strategies, yet this alignment is never described or displayed. The reader must simply take the authors' word for it. I am sure the authors are honorable, but as a reader, I like to see some empirical evidence so I can also make some judgments about the success of the project, to check my interpretation of some of the data against the interpretation of the authors. This would give me, the reader, more confidence in the results. Also, since this is not qualitative research, the use of Lincoln and Guba (etc.) for the reliability assurances is not warranted. Even if it were warranted, I would shy away from Lincoln and Guba anyway.
Lastly, there are a number of typographical errors, extra words, missing words, peculiar phrasings, incomplete sentences, etc. that also need to be addressed. I am attaching a PDF of the manuscript with my comments added, which will show the exact locations of the issues I had with the paper. Thanks for the opportunity to review.