User:Jackkoumi

From WikiEducator

Wiki colleagues. This is my first publication for WikiEducator. Originally I pasted a paper that I published in 2005 in the electronic journal, EURODL. I believe that this paper is the first published framework of micro-level screenwriting design principles for interactive multimedia. It concentrates on optimal integration of visuals and audio.

However, I have deleted all but the first couple of pages, pending my getting time to format the paper (but see the link below if you want to read the full paper).


Pedagogic design guidelines for multimedia materials: a mismatch between intuitive practitioners and experimental researchers
Jack Koumi, Educational Media Production Training

Abstract
This paper argues that pedagogic efficacy for multimedia packages cannot be achieved by experimental or by summative research in the absence of a comprehensive pedagogical screenwriting framework. Following a summary of relevant literature, such a framework is offered, consisting of micro-level design and development guidelines. These guidelines concentrate on achieving pedagogic synergy between audio commentary and visual elements. The framework is grounded in the author’s experience of producing multimedia packages at the UK Open University.

Introduction
This paper offers micro-level design guidelines for pedagogic harmony between sound and images in multimedia packages. These guidelines are compared with design recommendations in the research literature. It is argued that such recommendations have minimal value for practitioners because they address the macro-level. To be of practical use, the research needs to derive from micro-level design principles that are tacitly employed by practitioners.

Van Merriënboer (2001) notes that little is known about the optimal combination of audio or speech, screen texts, and illustrations in pictures or video.

In fact, some substantial papers do exist, written by educational technologists such as Laurillard and Taylor at the UK Open University. These address several detailed design techniques, which appear below, but mainly they discuss over-arching, macro-level questions, such as how learners might cope without a fixed linear narrative (Laurillard, 1998; Laurillard et al., 2000) and an analytical framework for describing multimedia learning systems (Taylor, Sumner and Law, 1997).

However, for the practitioner who is trying to design a pedagogically effective package, the literature is of little help. There appears to be no published comprehensive framework of micro-level design principles for optimal integration of visuals and audio. This is despite the many investigations into the use of audio commentary in multimedia presentations. Some of these investigations are summarised below, exemplifying the mixed results in the comparison of screen text with audio commentary.

Following this summary, the major part of this paper presents a framework of design guidelines for multimedia packages. These guidelines are in the form of practicable, micro-level pedagogic design principles, such as

  • there are occasions when the audio commentary should come first, in order to prepare the viewer for the pictures; for example: "In the next animation, concentrate on the arms of the spinning skater" <ANIMATION STARTS WITH SKATER’S ARMS HELD WIDE, THEN PULLED IN>

The framework has been compiled from the practices of designers of multimedia packages at the UK Open University. It incorporates an abundance of practitioners’ knowledge regarding pedagogic design of audio commentary and graphic build-up. The width and depth of the framework offer a substantial basis for future investigations – a set of design guidelines that can generate fruitful hypotheses.

The literature relating visuals and audio commentary

Tabbers, Martens and Van Merriënboer (2001) report several recent studies by Moreno, Mayer, and others, in which multimedia presentations consisted of pictorial information and explanatory text. Many of these studies demonstrated the superiority of audio text (spoken commentary) over visual, on-screen text. In various experiments learners in the audio condition spent less time in subsequent problem solving, attained higher test scores and reported less mental effort. The investigators attributed these results to the modality effect. This presupposes dual coding, whereby auditory and visual inputs can be processed simultaneously in working memory, thereby leaving extra capacity for the learning process.

In their own study, Tabbers et al. (ibid) presented diagrams plus audio commentary to one group, but to a second group they replaced the audio commentary with identical visual text, on screen for the same duration. They found that the audio group achieved higher learning scores. However, when two other groups spent as much time as they liked on the same materials, the superiority of the audio condition disappeared. The authors conclude that the purported modality effect of earlier studies might be accounted for in terms of lack of time rather than lack of memory resources. (Mind you, the students in the visual text condition had to spend longer on task to achieve their comparable scores, so the audio condition could still claim superior efficiency.)

Others have found that the addition of audio need not be beneficial to learning. Beccue, Vila and Whitley (2001) added an audio component to an existing multimedia package. The audio was a conversational version of a printed lab manual that college students could read in advance. The improvement in learning scores was not statistically significant. Many students suggested that the audio imposed a slower pace than they were used to. The authors theorized that the pace set by the audio might be helpful for slow learners and detrimental to fast learners.

Kalyuga (2000) observed a similar effect, finding that novices performed better with a diagram plus audio than with a diagram-only format. However, the reverse was found for experienced learners.

In another experiment, Kalyuga (ibid) found that audio commentary did indeed result in better learning, but only when the identical visual text was absent. Specifically, a diagram was explained in three different ways: visual text, audio text, and visual text presented simultaneously with identical audio text. The visual-plus-audio group achieved much lower scores than the audio-only group.

Kalyuga’s interpretation of this result was that working memory was overloaded by the necessity to relate corresponding elements of visual and auditory content, thus interfering with learning. He concluded that the elimination of a redundant visual source of information was beneficial.

However, this interpretation should predict that elimination of a redundant audio source would also be beneficial, i.e. that the visual group would learn better than the visual plus audio group. In fact, the result was slightly in the opposite direction, which also meant that the audio only group learned much better than the visual-only group. Hence a more convincing explanation is a split attention effect. In the visual-only condition, students had to split visual attention between the diagram and the visual text. This imposes a greater cognitive load than the audio-only condition, in which students had only one thing to look at (the diagram) while listening simultaneously to a spoken description.

Moreno and Mayer (2000), who presented an animation accompanied by either audio text or visual text, also found a strong split-attention effect, which they express as a Split-Attention Principle:

Students learn better when the instructional material does not require them to split their attention between multiple sources of mutually referring information (in their experiment, the information in visual text referred to the information in the animated diagrams and vice versa).

In a refinement of these experiments, Tabbers, Martens and Van Merriënboer (2000) compared two strategies for decreasing cognitive load of multimedia instructions: preventing split-attention (preventing visual search by adding visual cues) or presenting text as audio (replacing screen text with audio commentary). They found that students who received visual cues scored higher on reproduction tests. However, the modality effect was opposite to that expected, in that visual text resulted in higher scores than audio commentary.

The authors advanced some speculative reasons for this reversal of previous findings:

  • Students reported expending significantly greater mental effort in the visual text condition. Whatever the reason for the greater effort (possibly that reading visual text is more learner-active than listening to audio text), it could have resulted in deeper processing and hence better learning.
  • Students could choose to replay the audio text segments (and to re-read the visual text). However, in both conditions, it is likely that students had partly understood the texts on first listening (or first reading). Hence, in the visual text condition, students who re-read the text could skip any visual text that they had already understood, whereas in the audio condition, students who re-listened would be forced to process some redundant auditory information.

These are reasonable conjectures for superior learning in the visual text condition. A third likely reason (which does not conflict with the two conjectures above) was the complexity of the task. Students studied how to design a blueprint for training in complex skills, based on Van Merriënboer’s Four Component Instructional Design model. The task is certainly complex. It necessitates self-paced, head-down, concentrated study of complicated diagrams and relationships (students were allowed an hour to work through the multimedia learning task). As argued by Koumi (1994), such tasks cannot easily be supported by audio commentary, because it is a time-based (transient) medium. Instead, what is needed is a static (printed) set of guidelines that students can revisit repeatedly while they carry out intensive, self-paced study of the diagrams.

The above arguments may throw some light on the various conflicting results. However, there may be more fundamental reasons for the inconsistencies, as follows.

REMAINING PAPER TO BE FORMATTED AND PASTED - OR SEE THE ORIGINAL VERSION ONLINE AT

[1]