Evaluating an Existing Teaching Method

Before evaluating an existing teaching method it is necessary to define the term. An encompassing definition is a set of principles, procedures, and strategies used by teachers to bring about learning (Liu & Shi, 2007). Using this definition, Westwood (2008) explores a variety of approaches across conceptual approaches (e.g., constructivism, active learning, direct instruction), directional approaches (lectures, mini-lectures, directed lessons), student-centred approaches (inquiry, discovery), and student types (young, gifted, students with disabilities, etc.). The purpose is not to select one or another as a universal approach, but to determine suitability for purpose in a particular context.

Over the past decade I have taught mainly in a small class environment (15–30 students), gradually honing teaching methods to align with learning objectives, researching the applicability of the methods, reviewing students' level of understanding prior to attending the classes, and reviewing the post-class comments and reviews. This is an ongoing and iterative process, as the technology changes, as my own knowledge of educational theory changes (and hopefully improves), and as the learners who join the classes change. Of these, the most significant influence on the teaching method has been changes in my own knowledge. The technological skills that I teach are mostly stable [1], the make-up of the students is surprisingly similar from year to year, and so is the feedback.

The teaching method used across several different, but related, subjects is best described as a "mixed strategy". Evaluation of the learners comes first, which feeds into the learning objectives; there is no point teaching people what they are already competent at, and there is no point teaching people what is well beyond their understanding. The principle of the "zone of proximal development" applies. Recognising that learners will have some lack of knowledge about the subject matter, theoretical grounding is provided, which differentiates between theoretical and practical knowledge. Structured elaboration of content (with plenty of cues, organisers, and visual and written aids) is largely teacher-directed, but with explicit opportunities for "hands-on" activities, following (given the small class size) the principle that computer-aided learning should be interactive (e.g., Ramsden, 1992, pp. 160–161), and with significant opportunity for student-centred exploration of the learners' particular problems and conceptual elaborations.

An evaluation of this mixed strategy has been carried out with a combination of learner reviews, colleague observation, and post-course evaluation of history files (which record entered commands), class directories, and, eventually, system utilisation. Learner reviews are overwhelmingly positive, expressing considerable satisfaction with content and delivery and noting that they have acquired understanding that aligns with the course objectives. Colleague evaluation is somewhat more circumspect; it is noted that, compared to other similar courses which, for example, follow the "Software Carpentry" model, there is much more content and more theoretical grounding, but less hands-on, learner-directed exploration. These comments broadly match the review of the history files and class directories. Based on this review, only a (sizeable) minority of the class follows through on all the proposed hands-on exercises that are delivered in the class. When guiding and example questions are put to the learners (a sort of "spot assessment"), often only a handful of participants appear to have understood the content.
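
As a minimal sketch of how such a post-course review can be conducted, the following assumes that learners' shell history files have been gathered into a hypothetical histories/ directory (one file per learner); it tallies the commands entered, giving a rough indication of how far the hands-on exercises were attempted.

    #!/bin/bash
    # Tally the commands entered by learners, using their collected
    # shell history files. The histories/ directory and the
    # one-file-per-learner layout are assumptions for illustration only.
    cat histories/* \
        | awk '{print $1}' \
        | sort | uniq -c \
        | sort -rn | head -20

A similar pass over the class directories (for example, checking which exercise files were created or modified) complements the command tally.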

Curiously, however, the concerns that arise from this apparent in-class lack of engagement and understanding are not reflected in the final outcomes, that is, actual system utilisation. There is a strong correlation between exposure to the training course and system utilisation, which has been reviewed on a cross-system and institutional basis in the past (Lafayette, 2015), and for which there is ongoing (albeit informal) evidence. This disparity between post-class learner feedback, in-class engagement and understanding, and final results initially seems quite perplexing, and a number of speculations can be offered (e.g., are the learners simply too polite in their feedback? are they embarrassed by the spot questions?). An alternative evaluation is to interpret the results at face value. That is, the post-class feedback and results are genuine (certainly, the results cannot be faked), and the engagement is simply different from teacher expectations. On this reading, the learners are taking in a more than adequate amount of the material and are reviewing the content (as they are indeed thoroughly encouraged to do) in their own time and with their own computational problems.

If this is the case - and it would certainly explain all the results - then a significant improvement that can be offered to the various training courses is more documentation for the learners to review in their own time. Whilst the existing documentation is fairly substantial, each of the courses also carries a great deal of in-class content and, as has been mentioned previously, the issue of cognitive load (Sweller, 1988) is important. Whilst the existing teaching methods are aligned with the learning objectives and orientated to the learners' capabilities and experience, the opportunity for learners to follow up and elaborate in their own time (whether individually or in research groups) can be significantly expanded. It is not so much that the mixed teaching method requires significant change, but rather that it needs to be elaborated to allow for the transition from instructor-led training to student-led exploration.

Endnotes

[1] The most significant change in the past decade has been the introduction of the Slurm Workload Manager as a scheduler and resource manager for high performance computing, which is conceptually similar to other scheduling systems (e.g., Portable Batch System), and the more widespread use of GPGPU programming, which in its directive-based form (OpenACC) resembles directive-based multi-threaded programming, and in its explicit kernel form (CUDA) requires host-device data transfers loosely analogous to message passing.
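
As an illustration of that conceptual similarity, a minimal Slurm submission script is sketched below; the job name, partition, module, and program are hypothetical, and each #SBATCH directive is annotated with a close Portable Batch System counterpart.

    #!/bin/bash
    # A minimal Slurm job script; the job name, partition, module,
    # and program are placeholders for illustration only.
    #SBATCH --job-name=myjob          # cf. #PBS -N myjob
    #SBATCH --partition=compute       # cf. #PBS -q compute
    #SBATCH --ntasks=1                # cf. #PBS -l select=1
    #SBATCH --time=00:10:00           # cf. #PBS -l walltime=00:10:00

    module load mymodule              # environment modules, if available
    srun ./myprogram

The script would be submitted with sbatch rather than PBS's qsub, but the workflow - write a batch script, submit it, query the queue - is otherwise the same.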

References

Lafayette, L. (2015). Software Tools Compared To User Education in High Performance Computing. Proceedings of The Higher Education Agenda 2015.

Liu, Q. X., & Shi, J. F. (2007). Analysis of language teaching approaches and methods: effectiveness and weakness. US-China Education Review, 4(1), 69–71. ERIC online document ED497389.

Ramsden, P. (1992). Learning to Teach in Higher Education. Routledge.

Sweller, J. (1988). Cognitive load during problem solving: Effects on learning. Cognitive Science, 12(2), 257–285.

Westwood, P. (2008). What Teachers Need to Know About Teaching Methods. ACER Press.