
MULTIMEDIA-ENHANCED INSTRUCTION
IN ONLINE LEARNING ENVIRONMENTS
by
Barbara Ann Schroeder
A dissertation
submitted in partial fulfillment
of the requirements for the degree of
Doctor of Education in Curriculum and Instruction
Boise State University
April, 2006
BOISE STATE UNIVERSITY GRADUATE COLLEGE
SUPERVISORY COMMITTEE FINAL READING APPROVAL
of a dissertation submitted by
Barbara Ann Schroeder
I have read this dissertation and have found it to be of satisfactory quality for a
doctoral degree. In addition, I have found that its format, citations, and bibliographic
style are consistent and acceptable, and its illustrative materials, including figures,
tables, and charts are in place.
_____________________________
Date
________________________________________
Carolyn Thorsen, Ph.D.
Chair, Supervisory Committee
I have read this dissertation and have found it to be of satisfactory quality for a
doctoral degree.
_____________________________
Date
_______________________________________
Richard Johnson, Ph.D.
Member, Supervisory Committee
I have read this dissertation and have found it to be of satisfactory quality for a
doctoral degree.
_____________________________
Date
_______________________________________
Lawrence Rogien, Ph.D.
Member, Supervisory Committee
I have read this dissertation and have found it to be of satisfactory quality for a
doctoral degree.
_____________________________
Date
_______________________________________
Chareen Snelson, Ed.D.
Member, Supervisory Committee
BOISE STATE UNIVERSITY GRADUATE COLLEGE
COLLEGE OF EDUCATION
FINAL READING APPROVAL
To the Graduate Council of Boise State University:
I have read this dissertation of Barbara Ann Schroeder
in its final form and have found it to be of satisfactory quality for a doctoral degree.
Approved for the College of Education:
___________________________
Date
______________________________________
Diane Boothe, D. P. A.
Dean, College of Education
Approved for the Graduate Council:
___________________________
Date
______________________________________
John R. (Jack) Pelton, Ph.D.
Dean, Graduate College
DEDICATION
This dissertation is dedicated to my parents and husband, Carl Beavers, who
have always believed in me. Thank you for the weekends you took the kids away,
Carl, and for the encouragement you have always given me, Mom and Dad. I am
forever grateful.
ACKNOWLEDGMENTS
Sincere appreciation is given to Carolyn Thorsen, the Chair of my committee,
who has been my mentor and role model during my years at Boise State. Also,
thanks to my committee members, Rich Johnson, Chareen Snelson, and Larry
Rogien, who took the time to read and help improve this dissertation.
ABSTRACT
With newly developing multimedia technologies that incorporate
simultaneous presentations of narration, images, and text, the possibilities for
instruction are vast. Yet, how and when should educators use these technologies in
the most effective ways to enhance learning? This is the driving question behind this
research investigating the effectiveness of multimedia-enhanced instruction in online
learning environments, one of the most rapidly expanding fields in education today.
The basis for the use of multimedia is the assumption that when users interact with
the various media technologies, they learn more meaningfully (R. C. Clark & Mayer,
2003; R. E. Clark, 1983; Mayer, 2003). Multimedia learning principles, motivation
principles, transactional distance theory, dual channel theory, computer self-efficacy,
and visual/verbal learning preferences provide the theoretical bases for designing
and analyzing these instructional enhancements.
In this study, two different groups were examined: an experimental group
(MM) which interacted with multimedia-enhanced instruction and a control group
(No MM) which used a textbook for instruction. The research was conducted in an
educational setting, with the researcher examining other possible variables that
might affect student learning, such as learning styles in the visual/verbal range,
computer self-efficacy, and experience with database software. It was the intent of
the researcher to find out if a more dynamic form of multimedia instruction might
improve learning outcomes when compared to a static, textbook-based format.
Although learning outcomes were no better for the experimental group than for
the control group, each group had statistically significant increases in test scores,
supporting Mayer's (2003) multimedia principle: carefully chosen words and
pictures can enhance a learner’s understanding of an explanation better than words
alone.
Students in the "Very Low" category of computer user self-efficacy (CUSE)
did not have significant gains from pre- to post-test scores. These students also had
the lowest post-test score of all of the CUSE groups. These findings confirm other
researchers' suggestions that a student's belief in his/her own capabilities affects
performance. Also, gain scores were significantly higher for the MM Group than the
No MM Group in the Above Average CUSE ranking. A student's confidence with
computers may thus be a contributing factor in success with multimedia
instruction.
Students having no experience with database software had significant gain
scores, consistent with Mayer's individual differences principle, which holds that
multimedia design effects are stronger for low-knowledge learners. Students who
rated themselves as "Very Low" on a computer self-efficacy survey had no learning
gains, consistent with self-efficacy research. Moderate to strong visual learners did
not experience improved test scores, raising questions of the importance of
assessment alignment with instruction. Additionally, high-speed Internet access
may have had an effect upon learning in the multimedia group.
Ongoing research in dynamic versus static multimedia instruction is needed
to add knowledge to this rapidly growing field. As a result, the researcher continues
to probe and ask the following questions:
• How can multimedia be most effectively used in online learning
environments?
• When should it be used?
• What other variables involved in multimedia instruction might be important?
TABLE OF CONTENTS
ACKNOWLEDGMENTS ...................................................................................................... v
ABSTRACT ............................................................................................................................ vi
LIST OF TABLES................................................................................................................. xiv
LIST OF FIGURES................................................................................................................ xv
CHAPTER 1: INTRODUCTION .......................................................................................... 1
Online Learning Environments: Expanding Definitions ............................................. 2
Learning and Teaching in an Online Environment ...................................................... 3
Background of the Problem.............................................................................................. 4
Challenges in Online Teaching.................................................................................... 4
Meeting the Needs of the "Net Generation" .............................................................. 8
Theoretical Framework..................................................................................................... 9
Statement of the Problem ............................................................................................... 10
Importance of the Study ................................................................................................. 10
Assumptions..................................................................................................................... 12
Limits................................................................................................................................. 12
Delimits ............................................................................................................................. 13
CHAPTER 2: REVIEW OF THE LITERATURE............................................................... 14
The Growth and Evolution of Online Learning Environments ................................ 14
Mayer’s Cognitive Theory of Multimedia Learning................................................... 16
Dual Channel Assumption ........................................................................................ 16
Limited Capacity Assumption .................................................................................. 17
Active Processing Assumption.................................................................................. 17
Multimedia Learning ...................................................................................................... 19
Multimedia Principle .................................................................................................. 19
Spatial Contiguity Principle....................................................................................... 19
Temporal Contiguity Principle.................................................................................. 20
Coherence Principle .................................................................................................... 20
Modality Principle....................................................................................................... 20
Redundancy Principle ................................................................................................ 20
Individual Differences Principle ............................................................................... 21
Clark and Mayer’s Additional e-Learning Principles ................................................ 21
Personalization Principle............................................................................................ 21
Interactivity Principle ................................................................................................. 21
Signaling Principle ...................................................................................................... 21
Motivation Principles...................................................................................................... 24
Dual Coding Theory........................................................................................................ 26
Moore’s Transactional Distance Theory....................................................................... 29
Criticism of Moore's TDT ........................................................................................... 31
Computer Self-Efficacy ................................................................................................... 32
Learning Styles................................................................................................................. 33
Review of Learning Styles Theories.............................................................................. 35
Instructional Interactivity ............................................................................................... 38
Four Essential Elements of Instructional Interactivity........................................... 39
Epistemological Underpinnings of Learning with Technology................................ 40
Additional Research on the Effects of Multimedia in Online Learning .................. 42
CHAPTER 3: METHODS AND PROCEDURES.............................................................. 45
Research Questions and Hypotheses Statements ....................................................... 45
Research Design............................................................................................................... 46
Participants ....................................................................................................................... 46
Treatment.......................................................................................................................... 49
Instruments....................................................................................................................... 50
Pre- and Post-Tests...................................................................................................... 50
Computer User Self-Efficacy Survey ........................................................................ 52
Learning Styles Survey............................................................................................... 53
Data Collection................................................................................................................. 55
Data Analyses................................................................................................................... 56
CHAPTER 4: RESULTS....................................................................................................... 58
Introduction...................................................................................................................... 58
Distribution of Data......................................................................................................... 58
Independence of Samples............................................................................................... 61
Learning Outcomes between Groups ........................................................................... 61
Learning Outcomes within Groups .............................................................................. 62
Computer User Self-Efficacy (CUSE) Analyses........................................................... 62
Gain Scores across CUSE Groups ............................................................................. 63
Visual and Verbal Learning Styles Analyses............................................................... 69
Experience with Microsoft Access ................................................................................... 72
Interaction Effects of CUSE Rankings, ILS Groups, Experience with Database
Software, and Instructional Groups on Gain Scores............................................... 75
Correlations between CUSE Scores, Visual/Verbal Learning Preferences,
Experience with Database Software, and Pre- and Post-Test Scores.................... 76
Predicting Post-Test Scores using Regression Analyses ............................................ 76
High Speed Internet Access and Post-Test Scores in the MM Group ...................... 78
CHAPTER 5: CONCLUSIONS........................................................................................... 80
Revisiting the Original Research Questions ................................................................ 80
Conclusions ...................................................................................................................... 80
Question One ............................................................................................................... 81
Question Two............................................................................................................... 83
Recommendations for Future Research ....................................................................... 85
REFERENCES....................................................................................................................... 90
APPENDIX A...................................................................................................................... 100
Computer User Self-Efficacy (CUSE) Survey ............................................................ 100
APPENDIX B ...................................................................................................................... 105
Index of Learning Styles (ILS) Survey ........................................................................ 105
GLOSSARY ......................................................................................................................... 111
LIST OF TABLES
Table 1 Clark and Mayer’s Eight Multimedia Principles (2003)................................. 23
Table 2 Tests of Normality by Instructional Groups.................................................... 59
Table 3 Means and Standard Deviations of No MM and MM Groups ..................... 60
Table 4 Dependent Samples t-test on CUSE Groups ................................................... 64
Table 5 Wilcoxon Signed Ranks Test of CUSE Groups and Gain Scores.................. 65
Table 6 Post-Test and Pre-Test Scores Arranged by CUSE Rankings ....................... 66
Table 7 Pre- and Post-Test Scores Categorized by Type of Instruction and CUSE
Groups ................................................................................................................... 67
Table 8 Mean Ranks of Visual/Verbal Preferences Compared to Post-Test Scores 72
Table 9 Test Scores Categorized by Experience with Microsoft Access.................... 73
Table 10 Mean Ranks of Experience with Microsoft Access to Post-Test Scores ....... 74
Table 11 Experience with Microsoft Access across Gain Scores................................... 74
Table 12 Regression Statistics Details............................................................................... 77
Table 13 High Speed Internet Users Mean Scores by Instructional Groups............... 78
LIST OF FIGURES
Figure 1. Cognitive theory of multimedia learning. ....................................................... 18
Figure 2. General model of Dual Coding Theory (DCT)............................................... 28
Figure 3. Essential elements of instructional interactivity. ............................................ 39
Figure 4. Age range distribution of participants (N=60)................................................ 48
Figure 5. CUSE groups and gain scores by instructional groups.................................. 69
Figure 6. Percentages of learning styles in Visual/Verbal continuum for No MM
group.............................................................................................................................. 70
Figure 7. Percentages of learning styles for MM group. ................................................ 71
CHAPTER 1: INTRODUCTION
This is an exciting, yet challenging time for online instruction. The increasing
availability of high-speed Internet, faster and more powerful personal computers, and
wireless Internet hot spots provide learners with more opportunities to access, view, and
participate in an online learning environment. More web-based applications are being
developed and deployed to meet the expanding demands of the mobile learner. In the
process, we are seeing student-teacher roles being transformed, as students shoulder
more of the responsibility for learning and teachers assume roles of mentors and guides.
"The locus of ownership of both the process of constructing and sharing knowledge, and
of knowledge itself, is shifting. Learners are not only willing to participate in the
construction of knowledge; they are starting to expect to" (The Horizon Report,
2005, p. 4).
Yet, many challenges surface from this new evolution of teaching, learning, and
technologies. As instructors strive to create more meaningful, useful, and engaging
online content, they are faced with choosing the appropriate software, learning how to
use it, and most importantly, using it in the most effective ways for learning. Therefore,
this research was undertaken to evaluate the effectiveness of multimedia-enhanced
instruction in an online learning environment.
Knowing how and when to use multimedia can be guided by Mayer's (2003)
multimedia principles. However, it is also important to understand that the medium
alone is simply a way of delivering instruction. Well-conceived and implemented
instructional strategies should form the underlying structure of the medium. “Effective
instruction, independent of particular media, is based upon the selection and
organization of instructional strategies, and not simply the medium per se” (Hannafin &
Hooper, 1993, p. 192). The critical features of effective instructional media are
pedagogical, not technical (R. E. Clark, 1983). Therefore, in evaluating the effectiveness
of multimedia instruction, it must be understood that the instruction must include and
demonstrate research-based multimedia learning principles.
Learning is complex, with students responding emotionally, imaginatively, and
socially to instruction (Eisner, 2005). A cognitive approach emphasizes learning as an
interconnected process, with the student actively involved in mediating learning.
Therefore, besides evaluating the effectiveness of learning in a multimedia environment,
other factors are addressed, such as differences in student learning styles, attitudes
towards computers, and background knowledge. In evaluating these additional factors,
it is anticipated that a richer and more complete picture of the effectiveness of
multimedia-enhanced learning will be revealed.
Online Learning Environments: Expanding Definitions
There are many terms used to describe online learning environments,
such as distance education, e-learning, web-based instruction, online learning, extended
learning, the use of course management systems, such as Blackboard or WebCT, and
hybrid or blended learning, which integrates face-to-face and online components.
The line of demarcation between traditional face-to-face learning and online learning is
becoming more blurred, with many face-to-face courses being augmented and enriched
by online components, such as asynchronous and synchronous online discussions, the
posting of assignments, materials, and grades online, online submission of assignments,
and virtual meeting spaces for student collaboration. At Arizona State, for instance,
11,000 students take fully online courses and 40,000 use the online course management
system.
At Boise State University, the percentage of students enrolled in Blackboard, a
course management system, further illustrates the spread of online learning in
traditional, face-to-face courses, with 72% of the total school population enrolled as of
fall 2002 (Academic Affairs Annual Report, 2002). Therefore, the definition of “online
learning” or “online learning environment” in this study shall be expanded to include
any instruction that uses technology to deliver some or all of a course that can be
accessed via a web browser.
Learning and Teaching in an Online Environment
The Internet and WWW can be described as hypertext learning environments,
where students can work when and how they wish, access rich, comprehensive
resources for research and discussion, and communicate with their instructor and
classmates in multiple, nonlinear ways. Marchionini (1988) described the hypertext
learning environment as a self-directed, information-fluid environment with high
teacher-learner interaction. Online learning environments can be unplanned and
discovered as well as learner-activated, self-motivated, self-directed, non-sequential,
dynamic, and recursive. The Internet can offer a unique learning space that is exciting
and powerful, with learners determining how, when, and what is to be learned (Wang
& Bagaka's, 2003).
Although there remains disagreement on whether or not this medium of learning
is as effective as or better than the traditional, face-to-face method of learning (Bachman,
1995; Collins, Hemmeter, Schuster, & Stevens, 1996; Denman, 1995; Ellery, Estes, &
Forbus, 1998; Rintala, 1998; Russell, 1999), a student’s experience with online instruction
is different in key ways. Online instruction requires the teacher to view and understand
learning from new paradigms, to teach from different perspectives, and to use evolving
teaching strategies and technologies to effectively help students learn. In fact, it has been
suggested that the traditional model of systematic instructional design may no longer be
appropriate for teaching with these new technologies (Gillespie, 1998; Pelz, 2004).
Background of the Problem
Challenges in Online Teaching
Since the Internet and the World Wide Web (WWW) emerged as media for
providing instruction in the mid-1990s, there have been numerous studies about the
problems of designing web-based instruction. Most of these studies have had “common
shortcomings” in that they have failed to develop a theoretical or conceptual framework
of web-based, or online instruction (Jung, 2001, p. 526). Indeed, the process of designing
online instruction can be so complex and difficult that educators often end up “adopting
curriculum to fit the technology rather than selecting the proper technology to support
the curriculum” (Bennett & Green, 2001, p. 1).
According to Green’s Campus Computing Survey (2004), assisting faculty
efforts to integrate technology into instruction has remained a challenge in higher
education. Also, there is conflicting research on what constitutes effective online
learning experiences (Dillon & Gabbard, 1998; Ellery, Estes, & Forbus, 1998; Frear &
Hirschbuhl, 1999; Honey, 2001; Laurillard, 2003; Quitadamo & Brown, 2001). Many
educators now believe that the unique environment of online learning necessitates a
reexamination of the learning process, in many instances a paradigm shift in
pedagogical practice (Bennett & Green, 2001; Gillespie, 1998; Idrus & Lateh, 2000; Jung,
2001; Laurillard, 2003). For instance, changing a traditional face-to-face course to an
online course does not mean posting lectures online in a text-based format. Rather, it
involves a transformation of both teaching and learning, a process that requires training
and possibly a change in an instructor’s style and expectations.
Time is another challenge. Faculty must work with time constraints and
communicate and follow through with email, grading, discussion boards, and online
chats. They must be able to support and nurture a community of learners, motivate and
inspire, gain their attention, and get them to learn. At the same time, faculty must also
be cognizant of available and evolving technologies and how to use them to effectively
support and enhance student learning. As a result, educators need to constantly reflect
upon, improve, and update their practice, understanding how to best design instruction
to support student learning. These can be difficult, if not impossible, goals, given the
time that most instructors of higher education must spend on teaching, research, and
service (Turley, 2005).
Faculty may also need to learn new skills to create and implement rich online
learning experiences. Those who want to augment their instruction with online
components need to learn how to use those tools, such as synchronous meetings,
tutorials, simulations, multimedia lessons, instant messaging, blogs, wikis, RSS, the use
of course management systems, and other interactive multimedia formats. Additionally,
instructors need to understand human learning processes. As Clark and Mayer (2003)
tell us, when the limits of human cognitive processes are ignored, instruction that
employs all of the technological capabilities to deliver text, audio, and video can actually
reduce or hinder learning. An understanding of educational psychology, instructional
design, multimedia production, graphics, and interface design is necessary to translate
these principles into effective online instruction. Although new technologies ease the
burden of knowing a programming language, it still takes from ten to twenty times
more labor and skill to produce good courseware for online learning than for traditional
classrooms (Clark & Mayer, 2003).
Another challenge of online learning environments is the shortage of technical
staff to help faculty, students, and staff. This shortage can put a strain on developing
web-based programs and delay worthwhile projects.
Many issues continue to confront institutions of higher education in the realm of
online learning. In the October 1997 report on “Distance Education in Higher
Education” (Lewis, Alexander, & Westat, 1997), higher education institutions were
reported as having the following goals for development of distance learning programs:
• Reducing per-student costs
• Making educational opportunities more affordable for students
• Increasing institution enrollments
• Increasing student access by reducing time constraints for course taking
• Increasing student access by making courses available at convenient locations
• Increasing institutional access to new audiences
• Improving the quality of course offerings
• Meeting the needs of local employers
The good news is that instructors now have access to rich multimedia tools to
enhance instruction. The bad news is that multimedia software is often used in
instructionally-deficient ways. For instance, PowerPoint is multimedia software that is
easy to use, but can be detrimental to learning if used in the wrong ways. It is still
common to see instructors read textual bullets from a PowerPoint, a method that violates
Mayer's multimedia and redundancy principles.
Faculty in higher education may need to receive training on how to effectively
integrate multimedia in instruction. This is indicated by the availability of training
courses offered by various universities. An example of one course offered by the Illinois
Online Network, "Multimedia Principles for Online Educators," available at
http://www.ion.uillinois.edu/courses/catalog/C-CourseDetail.asp?course=11
provides instruction of how to effectively design multimedia instruction. As Richard
Felder, designer of the Index of Learning Styles (ILS) survey, bluntly writes:
College teaching may be the only skilled profession for which no preparation or
training is provided or required. You get a Ph.D., join a faculty, they show you
your office, and then tell you, “By the way, you're teaching 205 next semester.
See you later.” (Felder, 2006, p. 1).
Meeting the Needs of the "Net Generation"
We are experiencing and educating a new generation of learners, sometimes
called the "Net Generation," students who have grown up with technology and
computers. These students have different skills and needs in the realm of instructional
technology and bring a new set of expectations to the classroom.
For instance, here are some informal comments from Net Generation students in
response to the open-ended question, "To me, technology is . . ." (Roberts, 2005):
• "Reformatting my computer system and installing cutting-edge software that
allows me to do what I want, when I want, without restrictions, viruses, and the
rules of Bill Gates." —Jody Butler, Junior, Idaho State University
• "The ability to adapt and configure an already established program to
[something that] benefits me daily, be it customizing WeatherBug to state the
weather in my particular region or formatting my cell phone pad to recognize
commonly used phrases in text messaging." —Christopher Bourges, Senior,
Duke University
• "Any software and hardware alike that gives me the power to do what I need to
do faster than ancient methods of conducting things, such as e-mailing versus
writing, messaging three people versus buying a three-way calling package,
digital research versus traveling to a well-stocked library, et cetera." —Lindsey
Alexovich, Senior, American University
In these short narratives, one can clearly see the importance of staying in tune
with one's students and their technology expectations. Instructors, therefore, need to
keep abreast of new technologies and how students use them. They need to design
instruction that is relevant and engaging, knowing that students have high expectations
for content, accessibility, and ease of use.
Theoretical Framework
The theoretical framework of this research is based on theories that support
multimedia learning, theories of learning online, and the epistemological
underpinnings of learning in an online environment, discussing the following:
• Mayer’s cognitive theory of multimedia learning (R. C. Clark & Mayer, 2003; Mayer, 2003);
• motivation principles (Keller & Burkman, 1993);
• dual coding theory (Paivio, 1986; Sadoski & Paivio, 2001);
• Moore’s theory of transactional distance (Moore, 1993);
• visual/verbal learning styles (Felder & Silverman, 1988);
• computer self-efficacy (Cassidy & Eachus, 2002);
• instructional interactivity (M. W. Allen, 2003); and
• epistemological underpinnings of learning with technology.
A detailed discussion of this theoretical framework is included in Chapter Two.
Statement of the Problem
As stated earlier, there are many challenges in integrating effective multimedia
instruction. Besides the advanced technology skills that instructors must possess, they
must also be able to research, evaluate, and choose the software, learn how to use it, and
then design effective instruction. Instructors may not have the expertise or time to
effectively design web-based course materials (Kekkonen-Moneta & Moneta, 2002;
Okamoto, Cristea, & Kayama, 2001; Oliver, MacBean, Conole, & Harvey, 2002). Also,
evaluating the effectiveness of multimedia instruction can be complicated and prone to
multiple interpretations (Ellis & Cohen, 2001; Laurillard, 1998). Clark (1983) also
suggests that little research exists proving the effectiveness of one instructional
medium over another.
Therefore, this research is undertaken to address the problem of knowing how and
when to use multimedia-enhanced instruction in online learning environments. It draws
attention to the importance of adhering to strict principles of multimedia design, while
also taking into consideration other elements of learning. This study also addresses
the limited research on the effects of multimedia as observed in an educational
setting.
Importance of the Study
This study is an important contribution to research on, and understanding of,
how to use web-based multimedia instruction as a learning tool. Colleges and
universities are increasingly using the Internet and World Wide Web to deliver
instruction, and instructors and courseware designers need valid information on what
technologies are available and how to use them to improve student learning. Students
of the "Net Generation" expect and demand high quality, fully accessible course
materials available online. Additionally, Macromedia Breeze, software that allows
synchronous meetings and high quality asynchronous productions suitable for online
presentation, has recently been purchased by the Department of Educational
Technology at Boise State University and has been in use for the past year. This software
not only allows instructors to provide instructional content available 24/7, but also to
transform the teaching-learning environment to encourage more interaction, to narrow
the transactional distance often found in an online learning environment, and to create
new pedagogical models from which to teach and learn. For instance, online study
groups can be formed, with students using a virtual "room" to meet and collaborate,
brainstorm, or present their work. These meetings can be conducted with web cameras
and microphones, enabling a seamless, virtual environment for learning and sharing.
This research can help university departments justify decisions to purchase
multimedia software, and software companies would gain feedback about the
usefulness of their products in an educational setting.
Finally, addressing and comparing the effects of additional factors on the
research outcomes, such as individual learning styles, computer self-efficacy,
background knowledge, gender, and age, will provide a more expansive interpretation
of the study.
Assumptions
This research will be based on the following assumptions:
• Students will answer surveys honestly.
• Randomization of the student sample will meet the assumption of independence of samples.
• Databases are the most difficult part of the course content and are most suitable for a multimedia-enhanced lesson.
• Students learn differently.
• Students have different attitudes toward their learning abilities using computers (computer self-efficacy).
• Students have different background knowledge of database software.
• Students who are instructed to do so will view and interact with the required multimedia lessons.
• Students will complete their instruction.
• Cognitive learning theory is a valid theory of how people learn.
Limits
The following will limit generalizability of the research:
• The student sample (N = 60) is limited to students in four sections of an introductory educational technology class at Boise State University.
• The Index of Learning Styles (ILS) survey was not identified as an appropriate measuring instrument until the spring 2006 semester; therefore, only 34 responses were collected on this variable.
• Affective surveys were self-reported.
• Test instruments are not intended for general use outside Boise State University.
Delimits
The following will delimit the research:
• The student sample is purposive and convenient.
• Database skills, one module of the course, will be evaluated.
• The researcher will be the sole instructor.
CHAPTER 2: REVIEW OF THE LITERATURE
The Growth and Evolution of Online Learning Environments
Online learning environments in various formats are rapidly growing in
institutions of higher education. Enrollment in online learning is predicted to continue to
increase. In fact, online enrollment is growing faster than overall student enrollment. In a Sloan
Consortium survey, 53.6 percent of institutions agreed that online education is critical to
their long-term strategy (I. E. Allen & Seaman, 2004). A majority of academic leaders have also
stated their belief that the quality of online instruction is equal to or better than the
quality of traditional instruction (Oblinger & Oblinger, 2005).
The majority of institutions offer some type of online learning today. Nearly two-thirds
(62.5 percent) of the colleges and universities that participated in Green’s 2002 Campus
Computing Survey offer at least one complete online or web-based college course (Green, 2003).
An online directory of distance learning
(http://www.petersons.com/distancelearning/) identifies about 1,100 institutions that
provide online degree programs. Some of these include Azusa Pacific University
(evangelical Christian), Boston University (large private), Cardean University (online,
for-profit), De Anza College (two-year public), DeVry University (multi-campus, forprofit), Michigan State University (large public), Boise State University (medium public),
and eArmyU (U.S. military).
Universities that offer degrees entirely online are expanding rapidly and are
marketed to working professionals and other nontraditional students. The University of
Phoenix (http://degrees.uofphx.info/), for instance, serves approximately 45,000
adult learners in its online degree program, placing Phoenix Online among the
ten largest colleges or universities in the United States. According to Business Week
Online, the corporate e-learning market was projected to be $11.4 billion by 2003
(Schneider, 2000). As the e-learning market gains momentum and increasing visibility,
some universities are also adapting it for the business sector by spinning off their online
coursework into separate, for-profit ventures, such as Duke University's J.B. Fuqua
School of Business.
This rapid growth is due to many factors, such as the increasing sophistication
and accessibility of the Internet, the changing demographics of university students, the
decreasing costs of computing, and the need for people to have flexible options for
learning (Cooper, 2001; Pasquinelli, 1998). An online learning environment is one way to
provide a medium of instruction that enables faculty to extend teaching and learning
activities. However, as stated earlier, the concept of online learning is expanding to
include any form of learning that is done via a web browser. For instance, an online
learning environment can be totally online, where students are not required to come to a
physical classroom. It can be hybrid instruction, where students spend part of their time
in the classroom and the other part learning online. It can also be an element of
traditional, face-to-face classrooms, where instruction is augmented by online
components.
In the following sections, the theoretical framework, a natural extension of the
literature review, is discussed, ensuring that concepts central to the problem under
investigation are understood and that known research is applied. This investigation
also provides frameworks within which concepts and variables acquire their own
significance, which will help in interpreting the larger meaning of the findings.
Mayer’s Cognitive Theory of Multimedia Learning
Mayer is well-known and respected for his research in the field of cognitive
theory as it relates to multimedia learning. His seminal work, Multimedia Learning (2003),
is rich with research on how people learn through various multimedia instructional
messages. According to Mayer, a multimedia instructional message is a presentation
“involving words (such as spoken or written text) and pictures (such as animation,
video, illustrations, and photographs) in which the goal is to promote learning” (2002, p.
56). The driving question in his research at the University of California, Santa Barbara,
has been to understand how multimedia instructional messages should be designed so
that learners can demonstrate deep, conceptual understanding. Mayer links cognitive
learning theory to multimedia design issues, validating three theory-based assumptions
about how people learn from words and pictures: (1) the dual channel assumption,
(2) the limited capacity assumption, and (3) the active processing assumption.
Dual Channel Assumption
The dual channel assumption is based upon the theory that human cognition
consists of two distinct channels for representing and handling knowledge: a
visual-pictorial channel and an auditory-verbal channel. According to this theory,
pictures enter through the eyes and are processed as pictorial representations in the
visual-pictorial channel, while spoken words enter through the ears and are processed
as verbal representations in the auditory-verbal channel.
Limited Capacity Assumption
The limited capacity assumption holds that each channel in the human cognitive
system has a limited capacity for holding and manipulating knowledge (Baddeley,
1999a, 1999b). It is exemplified by auditory-verbal overload: when many spoken words
and other sounds are presented at the same time, the auditory-verbal channel can
become overloaded.
Active Processing Assumption
The third of Mayer’s assumptions, active processing, implies that “meaningful
learning occurs when learners engage in active processing within the channels,
including selecting relevant words and pictures, organizing them into coherent pictorial
and verbal models, and integrating them with each other and appropriate prior
knowledge” (2002, p. 60). Important to this assumption is the fact that these “active
verbal processes are more likely to occur when corresponding verbal and pictorial
representations are in working memory at the same time” (2002, p. 60). All of these
assumptions are important points to consider in designing and delivering
multimedia-enhanced online instruction.
Mayer further explains,
Words enter the cognitive system through the ears (if the words are spoken), and
pictures enter through the eyes. In the cognitive process of selecting words, the
learner pays attention to some of the words, yielding the construction of some
word sounds in working memory. In the cognitive process of selecting images,
the learner pays attention to some aspects of the pictures, yielding the
construction of some images in working memory. In the cognitive process of
organizing words, the learner mentally arranges the selected words into a
coherent mental representation in working memory that we call a verbal model.
In the cognitive process of organizing images, the learner mentally arranges the
selected images into a coherent mental representation in working memory that
we call a pictorial model. In the cognitive process of integrating, the learner
mentally connects the verbal and pictorial models, as well as appropriate prior
knowledge from long-term memory. (2002, pp. 60-61)
Furthermore, this model is activated through five steps: (a) selecting relevant
words for processing in verbal working memory, (b) selecting relevant images for
processing in visual working memory, (c) organizing selected words into a verbal
mental model, (d) organizing selected images into a visual mental model, and (e)
integrating verbal and visual representations as well as prior knowledge (Mayer, 2003).
Figure 1 is a graphical illustration of the steps in this theory.
Figure 1. Cognitive theory of multimedia learning.
Adapted from Mayer (2003).
Multimedia Learning
Mayer’s research has resulted in the discovery of eight principles of multimedia
design, each based on cognitive theory and supported by the findings of empirical
research. These eight principles are explained below in more detail, along with their
application and use in this study:
Multimedia Principle
Carefully chosen words and pictures can enhance a learner’s understanding of
an explanation better than words alone. Mayer tells us that when students mentally
connect pictorial and verbal representations of the explanation, deeper understanding
can occur.
In three studies where students viewed a narrated animation about pumps or brakes or
simply listened to a narration, the students who viewed the narrated animation scored
substantially higher (R. C. Clark & Mayer, 2003). Mayer corroborates his finding with
Rieber’s (1990) finding that students learn better from computer-based science lessons
when animated graphics are also included.
Spatial Contiguity Principle
Mayer’s spatial contiguity principle examines how words and pictures should be
coordinated in multimedia presentations. This principle states that words and their
associated pictures should be placed near each other on the screen. Mayer confirms
his research with Baggett and others, showing that
students learn an assembly procedure better when corresponding narration and video
are presented simultaneously (Baggett, 1984, 1989; Baggett & Ehrenfeucht, 1983).
Temporal Contiguity Principle
This principle states that students learn better when corresponding words and
pictures are presented at the same time, rather than in succession. In other words, the
narration and animation should be presented in close coordination, so that when the
narration describes a particular process or action, the animation shows it at the same
time. This is described as simultaneous presentation, because the words and pictures are
contiguous in time or reflect temporal contiguity.
Coherence Principle
This principle states that students learn better from multimedia presentations in
which extraneous words, sounds, and video are excluded. Related research on this
principle was presented by Kozma (1991).
Modality Principle
This principle states that students learn more deeply from animation and
narration than from animation and on-screen text (a common presentation method in
online PowerPoint presentations). In other words, words should be presented as audio
narration rather than as on-screen text.
Redundancy Principle
This principle states that students learn better from multimedia presentations
consisting of animation and narration than from animation, narration, and on-screen
text.
Individual Differences Principle
This principle says that multimedia design effects are stronger for low-knowledge learners and for high-spatial learners. In other words, since high-knowledge
learners already have some background knowledge, they might not need the additional
instruction offered by multimedia learning. Also, high-spatial learners are more likely
able to integrate the visual and verbal representations afforded by multimedia
presentation.
Clark and Mayer’s Additional e-Learning Principles
The following additional multimedia principles are discussed in Clark and
Mayer (2003):
Personalization Principle
Students learn better when words are presented in a conversational style than in
an expository style.
Interactivity Principle
Students learn better when they can control the presentation rate of multimedia
explanations than when they cannot.
Signaling Principle
It is important to incorporate signals into the narration to help the learner
determine the important ideas or concepts and how they are organized. Signaling does
not add any new words to the passage, but rather emphasizes key words through
introductory outlines, headings spoken in a deeper voice and keyed to the
presentation, pointer words, and highlighted words spoken in a louder voice.
Signaling can help guide the process of making sense of the presentation by
directing the learner’s attention to key events and relationships. Mayer tells us that
additional research is needed in this area, with prior research focused mainly on
signaling of printed text (Lorch, 1989).
An understanding of individual differences underlies these principles.
Researchers have found that high-ability learners are able to process more sensory
information than low-ability learners and that low-ability learners take longer and
require more highly structured information (Cronbach & Snow, 1977).
Mayer’s multimedia learning theory offers an indispensable theoretical
framework by providing clear information on how to design effective multimedia
instruction. Clark and Mayer (2003) have collaborated to condense these principles of
multimedia learning into a form that is more practitioner-based and applicable for this
study. Therefore, Clark and Mayer’s eight multimedia principles form the basis
for the design of the multimedia instruction. Table 1 includes each of these principles
and their applications.
Table 1
Clark and Mayer’s Eight Multimedia Principles (2003)
Multimedia Principle: Students learn better from words and pictures than from words alone. Text or audio alone is less effective than text or narration augmented with visual images.

Contiguity Principle: Students learn better when corresponding printed words and graphics are placed close to one another on the screen, or when spoken words and graphics are presented at the same time.

Coherence Principle: Students learn better when extraneous words, pictures, and sounds are excluded rather than included. Multimedia presentations should be clear and concise; presentations that add extraneous information hamper student learning.

Modality Principle: Students learn better from animation and narration than from animation and on-screen text. Multimedia presentations involving both words and pictures should use auditory or spoken words, rather than written text, to accompany the pictures.

Redundancy Principle: Students learn better from animation and narration than from animation, narration, and on-screen text. Multimedia presentations involving both words and pictures should present text either in written form or in auditory form, but not both.

Personalization Principle: Students learn better when words are presented in conversational style than in expository style.

Interactivity Principle: Students learn better when they can control the presentation rate of multimedia explanations.

Signaling Principle: Students learn better when signals are incorporated into the narration to highlight important ideas or concepts and how they are organized. Signaling emphasizes key words through introductory outlines, headings spoken in a deeper voice, pointer words, and highlighted words spoken in a louder voice.
Motivation Principles
Another important factor involved in the process of designing excellent
instructional messages is the extent of motivational appeal. For the learner, “motivation
is an initial determining factor that colors all that follows in a learning event” (Keller &
Burkman, 1993, p. 3). In fact, motivation is so important that Keller and Burkman insist
that the “design of an instructional message is not complete without considering its
motivational appeal” (p. 3). Therefore, for this study, principles of motivation will be
considered throughout the design and development of the multimedia lessons. A brief
discussion of the motivation principles appropriate for this study follows.
Many of the motivational principles of Keller and Burkman (1993) focus on (1)
gaining and maintaining attention, (2) relating the content of materials to learner
interests, goals, or past experiences, and (3) building and maintaining learner
confidence in their ability to use the materials.
use the materials. The following motivational directives will also be used to guide the
design of the multimedia lessons:
1. Introduce problem-solving topics to stimulate an attitude of inquiry.
2. Use humor to stimulate curiosity.
3. Use explicit statements about how the instruction builds on the learner’s existing
skills or knowledge.
4. Use analogies or metaphors to connect the present material to processes,
concepts, and/or skills already known by or familiar to the learner.
5. Make the relationship between the instructional objectives and the student’s
learning goals clear, since motivation to learn is greater when such a relationship
exists.
6. Use personal language to stimulate human interest on the part of the learner.
7. Improve relevance by adapting your teaching style to the learning style of the
students.
8. Design the challenge level to produce an appropriate expectancy for success.
9. Give learners information on what they will learn ahead of time, so they know
where they will be going.
10. Build confidence and persistence by using easy-to-difficult sequencing of content,
exercises, and exams, especially for less able and low-confidence students.
11. Provide criteria for success and answers to exercises to encourage students to use
self-evaluation of performance (performance-based assessment).
12. Include learner options to promote an internal sense of control on the part of the
learner.
13. Allow learners to go at their own pace to increase motivation and performance.
14. Promote feelings of accomplishment by including, in the instructional materials,
exercises or problems that require the application of the new knowledge or skill
to solve.
15. Use the active voice to maintain learner attention.
16. Use a natural word order to maintain learner attention.
17. Include graphics that make courseware easier to interpret and use in order to
maintain learner attention and to build confidence.
18. Use interesting pictures to gain and maintain learner attention in instructional
text.
19. Include pictures that include novelty and drama to maintain learner attention.
20. Include pictures that include people to gain and maintain learner attention.
(Keller & Burkman, 1993, pp. 31-49)
Dual Coding Theory
Dual coding theory (Paivio, 1986) proposes that information is stored in long-term
memory as both verbal propositions and mental images. This theory aligns with
Mayer’s multimedia learning theory, holding that when information is presented both
verbally and visually, it has a better chance of being remembered. Corroborating
research shows that concrete words are remembered better than abstract words, and that
pictures alone are remembered better than words alone (Fleming & Levie, 1993). Paivio
states, "Human cognition is unique in that it has become specialized for dealing
simultaneously with language and with nonverbal objects and events. Moreover, the
language system is peculiar in that it deals directly with linguistic input and output (in
the form of speech or writing) while at the same time serving a symbolic function with
respect to nonverbal objects, events, and behaviors. Any representational theory must
accommodate this dual functionality" (1986, p. 53).
Paivio used the word “coding” to refer to the coding mechanisms humans use to
process textual and visual components. Although these coding mechanisms are
separate, they are also sometimes complementary. Dual coding theory (DCT) says that
text uses a linguistic coding mechanism, encoding information in serial form, while
graphics use an imagery system, encoding information in a spatial format.
Dual coding theory can be visualized as a framework of two cognitive
subsystems, one being composed of verbal stimuli and the other, nonverbal stimuli. As
stated above, these two subsystems are not entirely separate, but are interconnected. Paivio defines
two different types of representational units: "imagens" for mental images and
"logogens" for verbal entities. Furthermore, DCT identifies three types of processing: (1)
representational, the direct activation of verbal or non-verbal representations; (2)
referential, the activation of the verbal system by the nonverbal system or vice-versa;
and (3) associative processing, the activation of representations within the same verbal
or nonverbal system. A given task may require any or all of the three kinds of
processing. A general model of DCT is illustrated in Figure 2, which shows the verbal
and nonverbal systems including representational units and their referential (between
systems) and associative (within systems) interconnections.
Figure 2. General model of Dual Coding Theory (DCT).
© 1994-2004 Greg Kearsley ([email protected])
http://home.sprynet.com/~gkearsley
Permission is granted to use these materials for any educational, scholarly, or noncommercial purpose.
As previously discussed, Mayer’s (2003) multimedia learning theory is based on
the assumptions that humans possess separate systems for processing pictorial and
verbal material (dual channel assumption), each channel is limited in the amount of
material that can be processed at one time (limited-capacity assumption), and
meaningful learning involves cognitive processing including building connections
between pictorial and verbal representations (active-processing assumption). Paivio’s
(1986) dual coding theory supports Mayer’s multimedia learning theory (2003) and
helps explain the concept of cognitive overload, in which the learner’s intended
cognitive processing exceeds his/her available cognitive capacity.
A similar view of dual coding theory is called dual-processing theory by Moreno
and Mayer (1999). This theory supports multimedia learning and includes two types of
processing: visual and auditory. Moreno and Mayer tell us that visually-presented
information is represented initially in visual working memory and then translated into
sounds in auditory working memory, while auditorily-presented information is
represented and processed entirely in auditory memory. Therefore, in interacting with
multimedia instruction consisting of images and narration, learners represent the images
in visual working memory and the corresponding narration in auditory working
memory, thus avoiding the possibility of cognitive overload that could be caused by
reading and processing text from visual to auditory working memory. Because students
can hold corresponding visual and verbal representations in working memory at the
same time, they are able to build referential connections between them. Therefore, it
would seem prudent to design multimedia instruction with minimal textual input and
more narration with corresponding images.
Moore’s Transactional Distance Theory
Moore’s (1993) transactional distance theory (TDT) can be a useful theory from
which to frame this research. This theory describes pedagogical relationships existing in
an online learning environment as “the family of instructional methods in which the
teaching behaviors are executed apart from the learning behaviors, including those that
in contiguous teaching would be performed in the learner’s presence, so that
communication between the teacher and the learner must be facilitated by print,
electronic, mechanical, or other devices” (Moore, 1972, p. 76). TDT first appeared in 1972
and has been revised as instruction has changed, particularly as delivery of instruction
online has increased. Researchers have since tested this theory across different
technologies, such as videoconferencing, interactive television, and computer
networks (Bischoff, Bisconer, Kooker, & Woods, 1996; Chen & Willits, 1999; Gayol, 1995;
Saba & Shearer, 1994).
According to Moore, there are three key elements that define every online
learning environment:
1. dialogue;
2. structure; and
3. learner autonomy.
Dialogue refers to the extent to which teachers and learners interact with each other,
structure refers to the responsiveness of instruction to a learner’s needs, and learner
autonomy corresponds to the extent to which learners make decisions regarding their
own learning and construct their own knowledge (Moore & Kearsley, 1996).
The degree of transactional distance between the teacher and learner is related to
the amount of dialogue, course structure, and learner autonomy. In other words,
transactional distance would be greatest when the teacher has no interaction at all with
the student and the learning materials are pre-designed and unresponsive. In an online
course, an instructor would need to interact regularly with the student, be responsive
and supply materials as needed to enhance the instruction, and respect the student’s
autonomy in order to minimize transactional distance. Another way of looking at TDT is
that transactional distance decreases as dialogue increases and structure decreases;
conversely, as structure increases, transactional distance increases while dialogue
decreases.
In 2003, Laurillard expanded Moore’s ideas by rating how, and to what extent,
different types of media could be used by instructors to provide high-quality
learner-instructor and learner-content interactions. Content alone, presented in certain
forms and by particular types of media, could become a virtual teacher, Laurillard
suggested. The media that received the highest ratings for teaching were tutorial
systems; simulations and programs; microworlds; electronic collaboration or teamwork
tools; and multimedia and audio resources.
Criticism of Moore's TDT
This theory, however, has not been without criticism. Through their critical
analysis of transactional distance theory, Gorsky and Caspi (2005) insist that the theory
should be reduced to a single relationship: as the amount of dialogue increases,
transactional distance decreases. Also, Gorsky and Caspi state that this relationship
should be considered a tautology, not a theory. They write:
Transactional distance theory was accepted philosophically and logically since its
core proposition (as the amount of dialogue increases, transactional distance
decreases) has high face validity and seems both obvious as well as intuitively
correct. Indeed, the philosophical impact of Moore’s theory remains.
Unfortunately, however, the movement from abstract, formal philosophical
definitions to concrete, operational ones caused ambiguity, at best, and collapse
of the theory, at worst. (Gorsky & Caspi, 2005, pp. 9-10)
Although there is some controversy over whether TDT constitutes a theory, the
researcher will recognize its implications by attending to the dialogue between
instructor and student that narrated multimedia instruction can provide.
Computer Self-Efficacy
Computer self-efficacy, a student’s confidence in his or her ability to use
computers, is an important element in learning. Self-efficacy, defined as one’s
perception of his or her own ability and achievement, has been found to be one of the
best predictors of academic performance and achievement (Bandura, 1977). Research
in the field of self-efficacy
effort used in performing it, and the persistence shown in accomplishing it (Bandura,
1977, 1982; Bandura & Schunk, 1981; Barling & Beattie, 1983; Bouffard-Bouchard, 1990;
Brown, Lent, & Larkin, 1989; Hackett & Betz, 1989). For instance, students with higher
self-efficacy tend to work harder and persevere longer when working on a challenging
assignment.
Computer experience has been shown to relate to levels of computer self-efficacy.
Torkzadeh and Koufteros (1994) found that the computer self-efficacy of a sample of 224
undergraduate students increased significantly following a computer training course. In
another study, researchers found a significant positive correlation between previous
computer experience and computer self-efficacy beliefs (Hill, Smith, & Mann, 1987). A
study on gender differences in self-efficacy and attitudes toward computers (Busch,
1995) indicated the most important predictors of computer attitudes were previous
computer experience and encouragement. Ertmer, Everbeck, Cennamo, and Lehman
(1994) also found that positive computer experience increases computer self-efficacy.
Therefore, computer self-efficacy is important in this study, since it could potentially affect learning outcomes. If students have a high computer self-efficacy score, then they might be able to learn information presented on computers more easily than students who have a lower computer self-efficacy score. Students with a lower computer self-efficacy score might also resist learning from computers, making the multimedia-enhanced instruction less likely to be successful.
Learning Styles
Another concept central to the problem of determining the effectiveness of
multimedia-enhanced instruction is the interaction of possible learning styles
differences. In other words, might learning style differences affect student learning in a
multimedia-enhanced environment and therefore affect performance?
The concept of learning styles promotes the idea that instruction should be
flexible enough to support different learners. Clark and Mayer (2003) insist that there is
no such thing as a visual or auditory learner. They argue that we learn in essentially the
same way, through building on preexisting cognitive structures and encoding this
understanding into long term memory. “Accommodating different learning styles may
seem appealing to e-learning designers who are fed up with the ‘one-size-fits-all’
approach and to clients who intuitively believe there are visual and auditory learners,” Clark and Mayer tell us (2003, p. 101). Furthermore, concepts of learning styles are based
upon what Clark and Mayer term the “information delivery theory” (2003, p. 101),
meaning that learning consists of receiving information. Although it is possible that
people may have preferences for learning, the principles of cognitive psychology
indicate that people learn through both auditory and visual channels. This supports the
theory of multimedia learning, which is based upon Clark and Mayer’s assumptions that
“(a) all people have separate channels for processing verbal and pictorial material, (b)
each channel is limited in the amount of processing that can take place at one time, and
(c) learners actively attempt to build pictorial and verbal models from the presented
material and build connections between them” (2003, p. 102).
Additionally, it has been shown that learners tend not to know their own learning styles accurately. In one study, participants were surveyed before
taking a course regarding their preferences for amount of practice. They were then
assigned to two online courses—one with many practice exercises and the other with
half the amount of practice. Half of the learners were matched to their preferences and
half mismatched. The results showed that regardless of their preference, those assigned
to the full practice version achieved significantly higher scores on the post-test than
those in the shorter version (Schnackenberg, Sullivan, Leader, & Jones, 1998).
Although there is disagreement about learning styles, possible effects of learning
styles are examined in this study. The concept that we learn in different ways is an
important variable to address in the data analysis of the study and an important element
to consider in the design of the multimedia instruction. Moreover, online learning environments are well positioned to address changing or progressive learning styles through the inherent flexibility and adaptability of the instruction. For instance, a student can select a path of instruction without even consciously thinking of that
path being geared toward a learning style. A link to an audio learning lesson can
provide a different learning environment than a link to a visual lesson, for instance. Or,
a more difficult approach for a more skilled student can easily be added to the course
content, as well as a less difficult instructional path for those in need of more help.
Being conscious and respectful of learning styles was deemed to be an integral
part of this study. Therefore, it was necessary to find a learning style instrument that
would not only be an accurate representation of a student’s learning style, but also one
that could be used in statistical analysis. This proved to be another challenge, since there
are differing theories on learning styles and many online survey instruments available to
supposedly measure them. A review of learning style theories follows.
Review of Learning Styles Theories
There has been much research on the importance of learning styles in the design
and delivery of instruction. Felder (1996) tells us that a learning style represents the
particular set of strengths and weaknesses that individuals use as they absorb and
process information. When teachers differentiate instruction to accommodate all
learning styles, they can more closely match the learning preferences of students.
Matching the learning styles with the appropriate instructional styles increases a
student’s opportunity to learn (Vincent & Ross, 2001).
There are numerous examples of learning style models which measure a wide
range of factors, from whether the learner prefers information presented visually or
verbally, through a global perspective or a more linear approach, or in a competitive or
collaborative way. Two of the oldest models of learning styles are Witkin’s Field Dependent/Field Independent Model (Witkin et al., 1954) and the Myers-Briggs Type Indicator (Soles & Moller, 1995). Field dependent learners are externally motivated and enjoy working collaboratively. At the other extreme, field independent learners are intrinsically motivated, competitive, and tend to work alone. The Myers-Briggs Type Indicator is an instrument for measuring a person’s preferences, using four
basic scales with opposite poles. The four scales are: (1) extraversion/introversion, (2)
sensate/intuitive, (3) thinking/feeling, and (4) judging/perceiving. This test has been
the most widely used personality inventory in history.
Another method of classifying learning styles is the Curry “Onion” Model
(Curry, 1983), which arranges learning style models from those that focus on external
conditions to those that are based on personality theory. Curry categorizes learning
styles into four layers of the “onion”:
1. Instructional & Environmental Preferences are those that describe the
outermost layers of the onion, the most observable traits.
2. Social Interaction Models consider ways in which students in specific social
contexts will adopt certain strategies.
3. Information Processing Models describe the middle layer in the onion, and
are an effort to understand the processes by which information is obtained,
sorted, stored, and used.
4. Personality Models describe the innermost layer of the onion, the level at
which our deepest personality traits shape the orientations we take toward
the world.
A more recent learning styles outlook is presented by Martinez (1999), who outlines four types of learners: Transforming, Performing, Conforming, and Resistant.
The Transforming learner assumes learning responsibilities and enjoys practical-based
learning. The Performing learner will assume learning responsibilities in areas of interest
and enjoys a combination of practical-based and theoretical learning. The Conforming
learner assumes little responsibility, wants continual guidance, and is most comfortable
with theoretical knowledge. The Resistant learner simply avoids learning.
Richard Felder and Linda Silverman (1988) formulated a learning style model
designed to capture the most important learning style differences among engineering
students and provide engineering instructors with help in teaching students. They
developed a survey instrument which they called the Index of Learning Styles (ILS)
(http://www.engr.ncsu.edu/learningstyles/ilsweb.html). The first version of the
instrument (which had 28 items) was administered to several hundred students and
subjected to a factor analysis. Items that did not load heavily on one and only one factor were replaced with new items to obtain the current 44-item version of the instrument.
The ILS was installed on the World Wide Web in 1996.
The ILS model classifies students as having preferences for one category (ranked
in a straight line continuum) or the other in each of the following four dimensions:
• sensing (concrete thinker, practical, oriented toward facts and procedures) or intuitive (abstract thinker, innovative, oriented toward theories and underlying meanings);
• visual (prefer visual representations of presented material, such as pictures, diagrams and flow charts) or verbal (prefer written and spoken explanation);
• active (learn by trying things out, enjoy working in groups) or reflective (learn by thinking things through, prefer working alone or with a single familiar partner); and
• sequential (linear thinking process, learn in small incremental steps) or global (holistic thinking process, learn in large leaps). (Felder & Spurlin, 2005, p. 103)
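Since the 44-item ILS assesses four dimensions, each dimension is tallied from 11 forced-choice items answered “a” or “b.” The sketch below illustrates that tally; the item ordering and sample responses are hypothetical, not the actual ILS answer key.

```python
# Tally one ILS dimension from 11 forced-choice "a"/"b" responses.
# The score is (count of "a") minus (count of "b"), an odd value from
# -11 to +11 along the continuum. Responses below are hypothetical.
def dimension_score(answers):
    """answers: list of 11 'a'/'b' responses for one dimension."""
    a = answers.count("a")
    b = answers.count("b")
    return a - b    # positive leans toward the 'a' pole, negative toward 'b'

visual_verbal = ["a", "a", "b", "a", "a", "a", "b", "a", "a", "b", "a"]
print(dimension_score(visual_verbal))   # prints 5: a moderate visual preference
```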
For the purpose of this study, the visual/verbal dimension would be most
appropriate to examine, since this aligns most closely with the theoretical framework of
the study, specifically Mayer’s theory of multimedia learning and Paivio’s dual-coding
theory.
Instructional Interactivity
Instructional interactivity is a necessary component in the design of the
multimedia instruction for this study. It is important to note the difference between interactivity and instructional interactivity. Allen (2003) tells us that instructional
interactivity is defined as “interactions that actively stimulates the learner’s mind to do
those things that improve ability and readiness to perform effectively” (p. 255). In other
words, instructional interactivity invites the learner to practice new skills or discover
something new. Instructional interactivity is not pushing buttons and interacting with
graphics or animated effects. Rather, it is composed of four essential components,
integrated in “instructionally purposeful ways” (Allen, 2003, p. 255), which are also
illustrated in Figure 3.
Four Essential Elements of Instructional Interactivity
• Context: the framework and conditions
• Challenge: a stimulus to action within the context
• Activity: a physical response to the challenge
• Feedback: a reflection of the effectiveness of the learner’s action
Figure 3. Essential elements of instructional interactivity.
Therefore, a multimedia-enhanced lesson that demonstrates instructional
interactivity would include a context that would encourage learners to enter in,
providing the background needed for the activity. To provide an example based upon
the multimedia instruction planned for this study, the challenge could be identified as
“create a new database, name it ‘Delegates,’ and save it to a folder.” The activity would be for the learner to figure out how to do this, with built-in verbal and/or textual prompts, and the feedback would naturally include a confirmation that this had been done correctly. One type of software used for this study, Macromedia Captivate,
includes the ability to create simulations, where the learner interacts with the software.
The software continues as the user correctly interacts with it.
Instructional interactivity fits in well with Mayer’s multimedia principles and
motivation principles discussed earlier. Identifying and including instructional
interactivity within multimedia-enhanced instruction will further strengthen the pedagogical design and potential of the study.
Epistemological Underpinnings of Learning with Technology
Cognitive learning theory and constructivism support much of the research
behind multimedia learning theory and learning with technology. To provide a more
comprehensive understanding of how learning unfolds in a technology-enhanced
environment, a discussion of both theories follows.
Constructivism could be considered an epistemology, a meaning-making theory
that offers the explanation of how we know and learn (Abdal-Haqq, 1998; MacKinnon &
Scarff-Seatter, 1997). Constructivists posit that knowledge is acquired through
experience with content, rather than through imitation and repetition. The theory of
constructivism forwards the concept that individuals “construct” knowledge in their own minds. Consequently, learning also needs to be meaningful: “If
an answer cannot be retrieved, one can be constructed by bringing inferential processes
to bear on activated knowledge so that a plausible answer is generated” (Gagne,
Yekovich, & Yekovich, 1993, p. 118).
Constructivists suggest that knowledge results from a continuous process, and
is tested and rebuilt by the learner. Building knowledge structures, therefore, can be
viewed as a “negotiated” process where knowledge can be constructed and
deconstructed in multiple ways, depending upon the context from which it is viewed
(Bodner, Klobuchar, & Geelan, 2000; Jonassen, 2000). Active student engagement,
inquiry, problem solving, and collaboration are some of the instructional outcomes of
this learning theory. In constructivism, “correct” answers are de-emphasized, with the
teacher becoming more of a guide or facilitator. Constructivists also maintain that the
constructivist model produces more internalized thinking and consequently deeper
understanding than traditional methods. Constructivism encourages multiple
responses, knowing that the structure of the learning environment will provide feedback
to the solution that fits the problem the best.
The role of elaboration is important in both constructivism and cognitive theory.
Being able to connect prior knowledge structures and elaborate on them is shown to be
essential for new learning. Constructivists acknowledge the importance of prior
knowledge and attaching it to various ideas and situations in order to make meaning.
Studies in which researchers increased elaborations, in both number and type, showed improved efficacy of learning (Gagne, Yekovich, & Yekovich, 1993).
Cognitive theory also stresses the importance of using examples in developing a
student’s schema formation, since it is difficult to form schemas from an abstract
definition. A schema is an internal knowledge structure; new information is compared to existing cognitive structures called schemas, which may be combined, extended, or altered to accommodate new information. Therefore, simply telling students about a
concept or idea is not enough to solidify learning, since schema production must be
individually constructed within a student’s mind. Constructivists use this approach to
learning as well, encouraging the student to build his or her own schema based upon
connections made between new and prior knowledge.
Mayer (2002, 2003) uses cognitive learning theory as a framework for his
multimedia learning principles. Cognitive learning theory helps explain learning in the
following ways:
• Human memory has two channels for processing information: visual and auditory.
• Human memory has a limited capacity for processing information.
• Learning occurs by active processing in the memory system.
• New knowledge and skills must be retrieved from long-term memory for transfer to the job.
Therefore, the theoretical constructs of constructivism and cognitive learning theory form the cohesive framework that supports the use of Mayer’s multimedia learning principles.
Additional Research on the Effects of Multimedia in Online Learning
New studies are emerging that support the use of online learning elements. For
instance, Carswell, Thomas, Petre, Price, and Richards (2000) found that using email and
newsgroups versus using regular mail and telephone in a distance-learning course
resulted in comparable learning outcomes. Gretes and Green (2000) found that
augmenting a lecture course with online practice quizzes resulted in better performance
on examinations. Bork (2001) also suggests that engaging computer-based and
computer-mediated interactions facilitates learning. Herrington and Oliver (1999)
observed higher-order thinking in students’ talk when using an interactive multimedia
program. A study comparing learning outcomes of online multimedia and lecture
versions of an introductory computing course found that the online students
outperformed the lecture students in applied-conceptual learning (Kekkonen-Moneta &
Moneta, 2002). Another study (Kettanurak, Ramamurthy, & Haseman, 2001) found that
interactivity positively influenced learner attitudes, which enhanced learner
performance. Frear and Hirschbuhl (1999) observed that the use of interactive
multimedia enhanced problem-solving skills. McFarland (1996) also concurred with
these studies, concluding that the proper use of multimedia can enhance learning.
Research is consistently demonstrating that people learn more deeply from
words and pictures than from words alone. In various studies, researchers compared the
test performance of students who learned from animation and narration against those
who learned from narration alone or text and illustration alone (Mayer, 1989; Mayer &
Anderson, 1991; Mayer, Bove, Bryman, Mars, & Tapangco, 1996; Mayer & Gallini, 1990;
R. Moreno & Mayer, 2002). In all of the above studies, students who experienced a
multimedia-enhanced lesson of words and pictures performed better on a subsequent
transfer test than students who received the same instruction only in words. These
research studies support Mayer’s multimedia effect—that people learn more deeply
from words and graphics than from only words.
These studies provide strong evidence of the positive effects of multimedia-enhanced learning. Supplemented by what we now know about multimedia learning
through Mayer’s research, distance learning theory, dual coding theory, learning
styles, computer self-efficacy, motivation principles, instructional interaction, and
cognitive learning theory, it is possible to envision the development of course lessons
that would include research-based multimedia elements to improve student learning in
an online environment.
CHAPTER 3: METHODS AND PROCEDURES
Research Questions and Hypotheses Statements
The availability of sophisticated multimedia software that is highly interactive
has offered many options for creating engaging and sophisticated learning content and
environments. But how should this software be used, when should it be used, and what
are the potential benefits to student learning? This is an issue that is important to
address, since multimedia instruction is costly and time-consuming to produce.
Therefore, this research is focused on analyzing the effectiveness of multimedia-enhanced instruction by asking the following questions:
• Does the inclusion of research-based multimedia-enhanced instruction have any significant effect on student learning in an online learning environment compared to student learning in an online learning environment without the multimedia-enhanced instruction?
• Would learning be significantly affected by a student’s visual/verbal learning preference, computer self-efficacy, and/or experience with database software?
Based on Mayer’s underlying multimedia learning theory (2003) and research on
the effects of multimedia on learning, it would be expected that students would perform
better after interacting with research-based multimedia components than students who
have not experienced this type of instruction. Therefore, the directional hypothesis for
the first question is as follows:
• There will be significant improvement in student learning in an online learning environment using research-based multimedia-enhanced instruction compared to student learning in the same online environment without the multimedia-enhanced instruction.

The null hypothesis for the second question is:

• There will be no significant differences in learning outcomes based upon a student’s visual/verbal learning preference, computer self-efficacy, and experience with database software.
Research Design
The researcher used a pre-test, post-test control group design, with participants
randomly assigned to the experimental and control groups. Both the experimental and
control groups were given the pre-test; the experimental group was given the treatment, while the control group was not. Then, both groups were given the
post-test. This design was chosen since it is considered excellent for controlling the
threats to internal validity (Gall, Gall, & Borg, 2003). Testing effects are thus controlled,
because the experimental and control groups take the same tests. If the experimental
group performs better on the post-test, this result cannot be attributed to pre-testing,
because both groups had the same pre-testing experience.
Participants
A sample of 60 undergraduate students was used. Participants were enrolled in
four sections of an introductory educational technology course during the fall of 2005
and spring of 2006. The distribution of males to females was 23.3% males and 76.7%
females, a typical representation of the gender mix of education students at Boise
State. Gender percentages between the two groups were almost the same, and a Pearson chi-square test indicated no significant difference in gender distribution between the groups: χ²(1) = .373, p = .542. Additionally, Cramer’s V was .079, close to the lower limit of zero. This
indicates that the association between the type of instruction and gender is extremely
weak.
The age range of students was very similar between groups, with the vast
majority of students falling in the range of between 20 and 30 years of age. In the control
(No MM) group, this age range comprised 70% of students, while in the experimental or multimedia (MM) group it comprised 73.3%. Again, Pearson chi-square statistics showed no significant difference between the two groups based upon age: χ²(4) = .357, p = .986. Cramer’s V was .077.
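The chi-square comparisons above follow the standard Pearson formula for a contingency table. A minimal sketch of the computation, using hypothetical cell counts rather than the study’s data, also shows how Cramer’s V is derived from the chi-square statistic:

```python
import math

# Chi-square test of independence for a 2x2 table: group (No MM vs. MM)
# by gender. The cell counts below are hypothetical placeholders.
observed = [[7, 23],   # No MM group: males, females
            [7, 23]]   # MM group: males, females

row_totals = [sum(row) for row in observed]
col_totals = [sum(col) for col in zip(*observed)]
n = sum(row_totals)

# Pearson chi-square: sum over cells of (O - E)^2 / E,
# where E is the expected count under independence.
chi2 = 0.0
for i, row in enumerate(observed):
    for j, o in enumerate(row):
        e = row_totals[i] * col_totals[j] / n
        chi2 += (o - e) ** 2 / e

# Cramer's V measures the strength of association (0 = none, 1 = perfect).
k = min(len(observed), len(observed[0]))
cramers_v = math.sqrt(chi2 / (n * (k - 1)))

print(f"chi2 = {chi2:.3f}, Cramer's V = {cramers_v:.3f}")
```

With identical group distributions, as in this placeholder table, the statistic is zero; the small chi-square and near-zero Cramer’s V reported above indicate the study’s groups were similarly well balanced.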
Figure 4. Age range distribution of participants (N=60).
When comparing the following variables using Pearson’s chi-square, no significant differences between the experimental and control groups were detected in any of the
following: (1) experience with Microsoft Access, (2) experience with the following
programs: word processing, spreadsheets, databases, presentation software, statistical
software, desktop publishing software, and multimedia software, (3) owning a
computer, (4) access to a computer away from home, (5) taking a computer training
course, and (6) access to high speed Internet at home.
Treatment
The treatment and duration of the instruction and multimedia lessons were
identical during both semesters. Student anonymity was guaranteed, and participation in the research was strictly voluntary. An IRB exemption certificate was received for this
study. To provide instructional equity, all students had access to the multimedia
treatment immediately after the post-test was administered. All participants experienced
instruction from the researcher.
The multimedia lessons were identical to the instruction in the textbook, Curricular Computing (Pollard, VanDehey, & Pollard, 2005), on database skills, an area that has traditionally been difficult for students to comprehend. Students in the control
group used the textbook only while students in the experimental group were instructed
to interact with the multimedia lessons. The students in the experimental group,
however, also had access to the textbook.
Although the participants were randomized, it is also important to note that this
research was conducted in an educational setting. Therefore, other variables were
considered in evaluating the effectiveness of instruction, such as learning styles,
experience with database software, and each student’s level of computer self-efficacy.
These additional variables will serve to strengthen the interpretations of the study and
also to direct additional research.
This research was quantitative in terms of data and data design. As Gall, Gall,
and Borg tell us, “Different researchers make different epistemological assumptions
about the nature of scientific knowledge and how to acquire it” (2003, p. 23). Therefore,
this research is grounded in the assumption that “features of the social environment
constitute an independent reality and are relatively constant across time and settings”
(Gall, Gall, & Borg, 2003, p. 23). This type of research is grounded in a positivist
perspective, where reality is considered objective and unchangeable. The main type of
research knowledge made available by this study was improvement, since the
effectiveness of an instructional intervention, namely research-based multimedia-enhanced online instruction in an online course, was evaluated. Therefore, the focus of
this research was on improving instruction, with student learning being the desired
outcome.
Instruments
Pre- and Post-Tests
A pre-test was administered to all students in order to confirm the baseline equivalence of the randomly assigned groups, as well as to determine the amount of improvement in student scores. A post-test was administered to all students after the
multimedia-enhanced instruction was given to the experimental group. Both tests were
timed and taken on computers using the testing module of the course management
system in Blackboard. All students were experienced with the testing format of
Blackboard and were informed that the test questions would not count toward their
grade in the course. Test questions on pre- and post-tests were identical. Test answers
were not revealed on the pre-test.
The test questions were derived from a pool of questions about database skills
from the Educator’s Technology Assessment (ETA). This assessment measures computer
competencies, which have been reviewed in each region of the state by teams of
educators. The competencies have also been articulated with International Society of
Technology in Education (ISTE) standards as directed by the State Board of Education.
The test has been given to over 20,000 participants.
Unfortunately, the reliability and validity scores could not be released to the
researcher, as was originally planned and expected at the inception of this research
project. This problem required that the test questions selected from the ETA pool for this
research undergo internal statistical evaluations. It was found that the test questions
have adequate internal reliability, with a Cronbach alpha of .789, N = 30. Additionally,
Tukey's test of non-additivity, which tests the null hypothesis that there is no
multiplicative interaction between the cases and the items, was not significant. This suggests that each item on the test is linearly related to the total score.
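Cronbach’s alpha, used above to establish internal reliability, can be computed directly from a respondents-by-items score matrix. The sketch below is illustrative only; the dichotomous item scores are hypothetical, not the study’s responses:

```python
# Cronbach's alpha: internal-consistency reliability of a set of test items.
# alpha = (k / (k - 1)) * (1 - sum(item variances) / variance(total scores)).
def cronbach_alpha(scores):
    """scores: list of respondents, each a list of item scores."""
    k = len(scores[0])                      # number of items
    def variance(xs):                       # population variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)
    item_vars = [variance([r[j] for r in scores]) for j in range(k)]
    total_var = variance([sum(r) for r in scores])
    return (k / (k - 1)) * (1 - sum(item_vars) / total_var)

sample = [
    [1, 1, 1, 0],   # each row: one respondent's item scores (1 = correct)
    [1, 1, 0, 0],
    [1, 0, 1, 1],
    [0, 0, 0, 0],
    [1, 1, 1, 1],
]
print(f"alpha = {cronbach_alpha(sample):.3f}")
```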
Validity of the measuring instrument was another critical component of this
study. Validity is the degree to which a test measures what it is supposed to measure,
and therefore allows appropriate measurement of scores. Content validity is the degree
to which a test measures an intended content area. In order to demonstrate content
validity, test items were verified as matching content presented both in the multimedia-enhanced instruction and in the traditional textbook instruction. This type of alignment between course materials and testing is termed “backloading” by Fenwick English (1992) and was the model used to confirm content validity for the pre- and post-tests.
Besides pre- and post-tests on learning, additional surveys were administered to
collect information on computer self-efficacy and student learning styles in the
Visual/Verbal continuum. A description of these two surveys, with confirmation of their measures of internal reliability and construct validity, follows.
Computer User Self-Efficacy Survey
Finding a reliable and valid instrument to measure computer self-efficacy was a
challenge, since there are many scales available. For instance, the Computer Attitude
Scale (CAS) was considered, which includes a 10-item Computer Confidence sub-scale
(Lloyd & Gressard, 1984). Another instrument, the Computer Technologies Survey
includes a comprehensive 46-item sub-scale measuring self-efficacy in relation to specific
computer technologies, such as word processing, email, and print functions (Kinzie,
Delcourt & Powers, 1994). Compeau and Higgins (1995) developed a 10-item scale for general computer use in the context of completing a task.
The scale most appropriate for this research is the Computer User Self-Efficacy
(CUSE) scale (Cassidy & Eachus, 2002). This survey consists of a 30-item scale where
students are required to indicate their level of agreement or disagreement with each statement on a 6-point Likert scale. The items on the survey are of a
general yet domain-specific (relating to computers) nature, e.g., “I consider myself to be
a skilled computer user.” The possibility of affirmation bias is controlled by wording
half of the statements in a negative manner so that a “disagree” response was needed to
add positively to the composite self-efficacy score. A student could score a maximum of
180 on this scale.
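The reverse-coded scoring described above can be sketched as follows. The item indices and responses are hypothetical, since the actual CUSE answer key is not reproduced here; the sketch simply shows how a “disagree” response to a negatively worded item adds positively to the composite score:

```python
# Scoring a Likert survey with reverse-worded items, as the CUSE scale does:
# half the statements are negatively worded, so "disagree" responses add
# positively to the composite score. Indices below are hypothetical.
NEGATIVE_ITEMS = {1, 3}          # hypothetical indices of negatively worded items
SCALE_MAX = 6                    # 6-point Likert scale (responses 1..6)

def composite_score(responses):
    """responses: list of raw 1-6 answers, one per item."""
    total = 0
    for i, r in enumerate(responses):
        if i in NEGATIVE_ITEMS:
            r = (SCALE_MAX + 1) - r   # reverse-code: 6 -> 1, 1 -> 6
        total += r
    return total

# 30 items scored 1-6 give a maximum composite of 180, matching the scale
# maximum noted above. Four items shown here for brevity.
print(composite_score([6, 1, 5, 2]))   # prints 22
```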
This scale was judged the most suitable for this study because its results indicated high reliability and validity. Internal reliability of the first part of the survey, which asks
questions about computer experience and familiarity with computer software
applications, was high with Cronbach’s alpha measuring .94. This high value is a
demonstration of homogeneity within items, or that a single construct was measured by
each item. The construct validity of this part of the survey was demonstrated by
significant positive correlations between computer self-efficacy and both computer
experience (r = .55, p<.001) and familiarity with software packages (r = .53, p <.001).
Previous studies (Busch, 1995; Decker, 1998; Hill, Smith, & Mann, 1987; Koul & Rubba,
1999) have supported convergence between computer self-efficacy and these above
variables. Cassidy and Eachus’s findings support the notion that the instrument
measures what it purports to measure: computer self-efficacy.
Cassidy and Eachus's research indicated that the internal reliability of the second
part of the survey, the 30-item scale, was also high (alpha = .97, N = 184). Test-retest
reliability over a one-month period was high and statistically significant (r = .86, N = 74,
p<.0005).
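Test-retest reliability such as the r = .86 reported above is the Pearson correlation between paired scores from the two administrations. A minimal sketch, using hypothetical paired scores rather than Cassidy and Eachus’s data:

```python
import math

# Pearson correlation between two administrations of the same instrument.
# The paired composite scores below are hypothetical placeholders.
def pearson_r(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

time1 = [120, 95, 150, 110, 88, 132]   # composite scores, first administration
time2 = [118, 99, 145, 115, 90, 128]   # same respondents, one month later
print(f"r = {pearson_r(time1, time2):.3f}")   # r near 1 for stable scores
```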
Construct validity of part two of the survey was assessed by Cassidy and Eachus by correlating the self-efficacy scores with a self-reported measure of computer experience (experience) and with the number of computer software applications used (familiarity). Both
correlations were significant, with experience correlated at r = .79, p<.0005, N = 212 and
familiarity correlated at r = .75, p<.0005, N = 210. A sample of the survey given to the
participants in this study can be found in Appendix A.
Learning Styles Survey
For this study, Felder’s Index of Learning Styles (ILS)
(http://www.engr.ncsu.edu/learningstyles/ilsweb.html) was chosen for its analyses of
reliability and validity and ease of administration (Felder & Spurlin, 2005). Several
analyses of the ILS have been published (Felder & Spurlin, 2005; Litzinger, Lee, Wise, &
Felder, 2005; Livesay, Dee, Nauman, & Hites, 2002; Seery, Gaughran, &
Waldmann, 2003; Spurlin, 2002; van Zwanenberg, Wilkinson, & Anderson, 2000; Zywno,
2003). There are over 500,000 hits per year on this online survey, which has been
translated into Spanish, Portuguese, Italian, German, and several other languages.
Test-retest correlation coefficients for all four scales of the instrument varied between .7 and
.9 for an interval of four weeks between test administrations and between .5 and .8 for
intervals of seven months and eight months. All coefficients were significant at the .05
level or better.
Cronbach alpha coefficients were all greater than the criterion value of .5 for
attitude surveys in three of four studies (Livesay, Dee, Nauman, & Hites, 2002;
Spurlin, 2002; van Zwanenberg, Wilkinson, & Anderson, 2000) and were greater than
that value for all but the sequential-global dimension in the fourth study (Zywno, 2003).
Construct validity was ascertained through a consistent pattern of learning style
preferences of engineering students at ten universities in four English-speaking
countries. Zywno (2003) concluded that the reliability and validity data justified using
the ILS for assessing learning styles, although Zywno also recommended continuing
research on the instrument.
The Visual/Verbal continuum of the ILS was evaluated in relation to student
scores, or learning outcomes, since this dimension aligns most closely with the
underlying theories of multimedia learning and dual-coding theory. Although learning
style profiles only suggest behavioral tendencies and are not static, examining
learning style tendencies can add insights into learning style trends and how they
might apply to multimedia learning. A copy of this survey can be found in Appendix B.
Data Collection
Data were collected in several ways for this research. Pre- and post-test data
were collected through the Blackboard course management system, downloaded as
comma-delimited files, and imported into SPSS and Microsoft Excel. This procedure
helped ensure both that the actual participants took the tests and that the results
were recorded accurately.
The Computer-User Efficacy (CUSE) survey was taken via Zoomerang
(http://zoomerang.com), online survey software that maintains data in a secure format
and is available for download to spreadsheet software. Again, data accuracy was
supported because the data were collected electronically and imported directly into
data analysis software.
Students took the Index of Learning Styles (ILS) online form and printed the results,
giving them to the researcher.
When working with numbers and statistics on a computer, it is very tempting to
trust them automatically. This false sense of trust makes it all the more imperative
to double-check individual records and to look for possible errors in data entry or
data collection. For this study, individual records were reviewed and checked against
the import files and the data received from the Blackboard course management system.
Data Analyses
While errors of measurement are never totally eliminated, the researcher tried to
minimize them and their impact on this study. As explained above, all survey and test
instruments underwent a complete literature review and verification of statistical rigor
before use. Sampling error was reduced by randomization and evaluation of pre-test
score means between groups. Measurement errors were greatly minimized due to
collection of data from online instruments and direct importing of these data into
statistical software packages. Statistical analyses included the following:
(1) Descriptive statistics of the data, revealing frequencies, score means, and
measures of dispersion such as the standard deviation.
(2) Kolmogorov-Smirnov (K-S) test to verify normal distributions of pre- and
post-test means.
(3) An independent samples t-test of pre-test scores to confirm independence of
samples.
(4) An independent samples t-test to determine if there were any differences
between post-test and gain scores of the groups.
(5) A paired samples (dependent) t-test, which compared the pre- and post-test
scores of each instructional group.
(6) A correlation of pre- and post-test scores with CUSE scores, to determine if
there was any relationship between test scores and CUSE scores.
(7) Analyses of variance (ANOVAs) on additional variables, to determine if
there were any significant differences in test scores between groups.
(8) Paired samples t-tests of pre- and post-test scores within the groups defined
by the additional variables, to determine if significant learning occurred
within groups.
(9) A multiple regression analysis, using post-test scores as the outcome
(dependent variable) and instructional method (No MM and MM groups),
CUSE rankings, experience with database software, and visual/verbal
learning preferences as predictors (independent variables).
An alpha level of .05 was the criterion for statistical significance in this
study. Practical significance was measured using effect size indices, represented by
Cohen's d, which indicates the magnitude of the difference between means (or how much
of the difference is explained). Values of .2, .5, and .8 are generally considered
small, medium, and large effect sizes, respectively.
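Although the analyses in this study were run in SPSS, the pooled-variance form of Cohen's d can be sketched in a few lines of Python. This is a minimal illustration with hypothetical score lists, not the study data:

```python
import math

def cohens_d(group1, group2):
    """Cohen's d for two independent samples, using the pooled standard deviation."""
    n1, n2 = len(group1), len(group2)
    m1 = sum(group1) / n1
    m2 = sum(group2) / n2
    # Sample variances (n - 1 denominator)
    v1 = sum((x - m1) ** 2 for x in group1) / (n1 - 1)
    v2 = sum((x - m2) ** 2 for x in group2) / (n2 - 1)
    pooled_sd = math.sqrt(((n1 - 1) * v1 + (n2 - 1) * v2) / (n1 + n2 - 2))
    return (m1 - m2) / pooled_sd
```

By the conventions above, a returned value near .2 would be read as a small effect, near .5 as medium, and near .8 or beyond as large.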
CHAPTER 4: RESULTS
Introduction
Up until now, the researcher has provided reasons for the study, a review of
literature, and a plan for analyses of data. And now, like a mystery novel unfolding, the
evidence is presented and the suspense, at least to the researcher, begins to dissolve. It is
in this chapter where the fun really begins, where the results of the data analyses are
revealed. Was the treatment effective? Were other variables responsible for possible
differences in test scores? What other surprises might appear? Therefore, in this chapter,
the researcher will perform the planned data analyses as described in Chapter Three,
tying them to specific questions when applicable, and setting the stage for the ensuing
interpretations of the data in Chapter Five.
Distribution of Data
Pre- and post-test scores were initially analyzed to determine whether the data were
normally distributed, in order to justify using parametric tests. Normal distributions
were confirmed in both groups, with the exception of post-test scores in the
experimental (MM) group. However, the Kolmogorov-Smirnov (K-S) statistic does not
tell us whether this deviation from normality is large enough to be important; it
only tells us that the deviations from the mean are significant. The significant K-S
statistic for post-test scores in the MM Group likely reflects the bimodal
distribution found in those scores. As explained next, higher standard deviations in
the post-test scores of the MM Group as compared to the control (No MM) group may be
the cause of this tendency toward non-normal data. Also, since the sample is rather
small (N = 60), there is a higher chance of obtaining non-normal data.
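As a rough sketch of how such a normality check might be run outside SPSS, the Shapiro-Wilk test from Python's scipy can be applied to a set of scores. The scores below are simulated and hypothetical; note also that SPSS applies the Lilliefors correction to its K-S test, so the Shapiro-Wilk test is the closer analogue here:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
scores = rng.normal(loc=75, scale=12, size=30)  # hypothetical test scores

# Shapiro-Wilk test: null hypothesis is that the sample is normally distributed
w_stat, p_value = stats.shapiro(scores)
print(f"W = {w_stat:.3f}, p = {p_value:.3f}")
if p_value > .05:
    print("No significant deviation from normality; parametric tests are justified.")
```

A non-significant result (p > .05), as with the pre-test scores in both groups here, supports the use of parametric tests.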
Table 2
Tests of Normality by Instructional Groups

Group                     Kolmogorov-Smirnov           Shapiro-Wilk
                          Statistic   df   Sig.        Statistic   df   Sig.
No MM*   Post-test        .110        30   .200        .939        30   .088
         Pre-test         .136        30   .163        .960        30   .308
MM**     Post-test        .192        30   .006        .930        30   .050
         Pre-test         .091        30   .200        .977        30   .741

* No Multimedia (control) Group
** Multimedia (experimental) Group
When examining descriptive data, it was noted that the MM Group had a higher
standard deviation (12.64) than the No MM Group (10.306). As we know, the standard
deviation is a measure of how well the mean represents the data. Small standard
deviations (relative to the value of the mean) indicate that the data points are close to the
mean. A large standard deviation (relative to the mean) indicates that the data points are
distant from the mean, or that the mean is not an accurate representation of the data
(Table 3).
Table 3
Means and Standard Deviations of No MM and MM Groups

Type of Instruction      N    Mean    Std. Deviation   Std. Error Mean
Post-test   No MM*       30   83.00   10.306           1.882
            MM**         30   81.13   12.640           2.308
Pre-test    No MM        30   73.67   14.898           2.720
            MM           30   67.07   14.678           2.680

* No MM = Control Group
** MM = Multimedia Group
When examining the large standard deviation in the post-test scores of the MM
group, it was revealed that five of the 30 participants in the MM group had a 40% or
higher score gain, while three of the MM group participants had an 85% or greater score
decrease. This large standard deviation could be the result of confounding variables,
such as lack of motivation in scoring well on the pre- and/or post-test, student guessing,
or student error in taking the test.
Normal data distributions are important in analyses and interpretations of data.
In the case of this study, large standard deviations might reduce confidence in
determinations of significance. Removing outliers is one way to reduce large
standard deviations; when six outliers were removed, three from each group, the
standard deviations became smaller. However, the researcher examined each of these six
records and determined that the information entered was accurate and should remain
an essential part of the dataset. In fact, outliers can sometimes be more interesting
than the rest of the data. The researcher argues that although the data appear
non-normal in the post-test scores of the MM Group, this distribution may have
been caused by the large standard deviations, as well as the tendency for exam scores
to be multi-modal in nature. Therefore, it was decided to include all of the collected
data, using non-parametric tests in addition to parametric tests when analyzing test
score data.
Independence of Samples
Were the two samples independent? Although the participants were randomized
into two groups, the researcher provided additional verification that the two
samples were equivalent. The two groups were compared using an independent samples
t-test on the pre-test scores. The non-significant result is consistent with the null
hypothesis that the samples were equivalent [t (58) = 1.728, p = .089, d = .45]. A
non-significant Levene's Test of Equality of Variances supported the assumption of
equal variances required for the test.
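A comparable analysis could be sketched in Python with scipy.stats, pairing Levene's test with the independent samples t-test. The pre-test scores below are hypothetical, not the study data:

```python
import numpy as np
from scipy import stats

# Hypothetical pre-test scores for two randomized groups (not the study data)
no_mm = np.array([72, 68, 75, 80, 65, 78, 70, 74, 69, 77])
mm = np.array([66, 71, 64, 73, 62, 70, 68, 75, 61, 69])

# Levene's test: a non-significant p supports the equal-variances assumption
levene_stat, levene_p = stats.levene(no_mm, mm)

# Independent samples t-test; equal variances assumed when Levene's is non-significant
t_stat, t_p = stats.ttest_ind(no_mm, mm, equal_var=bool(levene_p > .05))
print(f"Levene p = {levene_p:.3f}; t = {t_stat:.3f}, p = {t_p:.3f}")
```

The same two-step logic (check the variance assumption, then run the means test) underlies the SPSS output reported throughout this chapter.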
Learning Outcomes between Groups
Did one group do better than the other? Although comparing post-test scores
showed no significant difference [t (58) = .627, p = .533, d = .16], with a
non-significant Levene's test, learning might be represented more accurately by
comparing gain scores. In this comparison, the MM Group (N = 30, M = 14.37, SD = 19.36)
had a higher mean gain score than the No MM Group (N = 30, M = 9.33, SD = 13.855).
However, high standard deviations present a problem in interpreting these means.
To determine whether this difference in gain scores between groups was significant, an
independent samples t-test was conducted [t (58) = 1.089, p = .281, d = .29], which
indicated no significant difference in gain scores between the MM and No MM groups.
However, a significant Levene's test (p = .03) was noted, indicating potentially
unequal variances, which might be attributed to the large standard deviations of the
gain scores. Therefore, a nonparametric version of this test (Mann-Whitney U) was run
to determine significance. This test also confirmed the non-significance of the
difference in gain scores between groups (N = 60, U = 398, p = .441).
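The nonparametric check might be sketched as follows, again with hypothetical gain scores rather than the study data:

```python
from scipy import stats

# Hypothetical gain scores for the two groups (illustrative values only)
no_mm_gains = [5, 12, -3, 9, 14, 7, 0, 11, 6, 8]
mm_gains = [20, -8, 15, 3, 28, -5, 18, 10, 22, 1]

# Mann-Whitney U: a rank-based alternative when normality or equal variances
# are in doubt, as with the gain scores in this study
u_stat, p_value = stats.mannwhitneyu(no_mm_gains, mm_gains, alternative="two-sided")
print(f"U = {u_stat}, p = {p_value:.3f}")
```

Because the test operates on ranks rather than raw values, it is less sensitive to the large standard deviations noted above.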
Learning Outcomes within Groups
Did each group (MM and No MM) learn? A paired samples (dependent) t-test
was run to determine any significant differences between post- and pre-test scores. Both
groups, the No MM [t (29) = 3.690, p = .001, d = .67] and the MM [t (29) = 3.978, p = .000,
d = .72], showed significant gains. However, large standard deviations may have
affected these results. Therefore, the researcher removed four extreme values from each
group and performed the same test. The results from this dependent samples t-test also
revealed statistical significance in each group: No MM [t (25) = 3.320, p = .003, d = .66]
and MM [t (25) = 3.889, p = .001, d = .77].
To address the possibility of non-normal data in the post-test scores of the MM
Group, a non-parametric (Wilcoxon Signed-Rank) test was also run on each group, to
confirm the results of the paired samples t-test. This test also showed significance in test
scores gains within each group: MM [z = 3.306, p = .001] and No MM [z = 3.438, p = .001].
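The paired-samples logic, together with its nonparametric confirmation, could be sketched as follows (the pre- and post-test scores are hypothetical):

```python
from scipy import stats

# Hypothetical pre- and post-test scores for one instructional group
pre = [60, 72, 55, 80, 64, 70, 58, 75, 62, 68]
post = [70, 78, 60, 85, 66, 82, 65, 79, 70, 74]

# Paired (dependent) samples t-test on the pre/post differences
t_stat, t_p = stats.ttest_rel(post, pre)

# Wilcoxon signed-rank test as the nonparametric check
w_stat, w_p = stats.wilcoxon(post, pre)
print(f"t = {t_stat:.3f} (p = {t_p:.4f}); Wilcoxon W = {w_stat} (p = {w_p:.4f})")
```

When both the parametric and rank-based tests agree, as they did for both groups here, the conclusion of significant within-group gains is on firmer ground.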
Computer User Self-Efficacy (CUSE) Analyses
Computer User Self-Efficacy (CUSE) rankings were categorized as: (1) Very Low
(< 96 points), (2) Low (96 – 111 points), (3) Average (112 – 127 points), (4) Above
Average (128 – 143 points), and (5) High (144 – 160 points). Data were stored both as
raw scores and as ranks, to enable different interpretations and statistical uses. The
highest score that a student could obtain on the survey was 160 points; the higher the
score, the higher the student's level of computer self-efficacy. To verify the
underlying assumptions of the means tests, CUSE scores were confirmed to be normally
distributed, with a non-significant Kolmogorov-Smirnov value.
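The ranking scheme above can be expressed as a small helper function. The cut-offs are those given in the text; the function name itself is illustrative:

```python
def cuse_rank(score):
    """Map a raw CUSE score to its ranked category (cut-offs from the study)."""
    if score < 96:
        return "Very Low"
    elif score <= 111:
        return "Low"
    elif score <= 127:
        return "Average"
    elif score <= 143:
        return "Above Average"
    else:
        return "High"
```

Storing both the raw score and its rank, as described above, allows correlations on the continuous scores and group comparisons on the categories.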
Gain Scores across CUSE Groups
Would a student's computer self-efficacy correspond to test score gains? When
running a paired (dependent) samples t-test using post- and pre-test scores as the
dependent pairs across each of the CUSE groups, significant differences between the
two test scores were apparent in all groups except the Very Low and High groups
(Table 4).
Table 4
Dependent Samples t-test on CUSE Groups

               Paired Differences (Post-test minus Pre-test)
CUSE           Gain Score   Std.        Std. Error   95% CI of the Difference
Group          Mean         Deviation   Mean         Lower      Upper          t       df   Sig. (2-tailed)
Very Low       .75          8.860       3.132        -6.66      8.16           .239     7   .818
Low            8.79         11.389      3.044        2.21       15.36          2.886   13   .013
Average        14.57        18.404      4.919        3.94       25.20          2.962   13   .011
Above Average  17.55        20.328      4.545        8.04       27.06          3.861   19   .001
High           4.50         8.226       4.113        -8.59      17.59          1.094    3   .354
Since this test involved test scores, a non-parametric Wilcoxon Signed Ranks test
was also conducted, yielding the same pattern of significance (Table 5).
Table 5
Wilcoxon Signed Ranks Test of CUSE Groups and Gain Scores

CUSE Group       Z           Asymp. Sig. (2-tailed)
Very Low         -.211(a)    .833
Low              -2.368(a)   .018
Average          -2.137(a)   .033
Above Average    -3.297(a)   .001
High             -.921(a)    .357

a Based on positive ranks.
b Wilcoxon Signed Ranks Test
The MM Group had a higher percentage of participants (10%) who scored High
on the CUSE scale (144 to 160 points), as compared to 3.3% of the No MM Group. In the
Above Average category (128 to 143 points), the No MM Group had a higher percentage
(40%) than the MM Group (26.7%). However, when looking at the trends of pre- and
post-test mean scores across the CUSE categories, the scores did not follow a
completely ascending pattern. The Above Average Group had a mean post-test score of
85.40, while the High Group had a mean score of 81.75. The Very Low Group had a very
small score increase from pre- to post-test as compared to the other groups (Table 6).
Table 6
Post-Test and Pre-Test Scores Arranged by CUSE Rankings

CUSE Group                              Post-test   Pre-test
Very Low             Mean               75.63       74.88
(< 96 points)        N                  8           8
                     Std. Deviation     15.910      11.692
Low                  Mean               81.57       72.79
(96 – 111 points)    N                  14          14
                     Std. Deviation     11.869      11.362
Average              Mean               81.57       67.00
(112 – 127 points)   N                  14          14
                     Std. Deviation     10.120      17.949
Above Average        Mean               85.40       67.85
(128 – 143 points)   N                  20          20
                     Std. Deviation     10.640      16.809
High                 Mean               81.75       77.25
(144 – 160 points)   N                  4           4
                     Std. Deviation     6.292       11.927
Total                Mean               82.07       70.37
                     N                  60          60
                     Std. Deviation     11.473      15.036
A breakdown of CUSE ranked groups (Table 7) according to each type of
instruction (MM or No MM) also provided further information for this research.
Table 7
Pre- and Post-Test Scores Categorized by Type of Instruction and CUSE Groups

                                   No MM       MM          No MM      MM
CUSE Group                         Post-test   Post-test   Pre-test   Pre-test
Very Low        Mean               83.67       75.20       74.33      70.80
                N                  3           5           3          5
                Std. Deviation     12.503      16.947      14.012     11.862
Low             Mean               82.63       80.17       74.50      70.50
                N                  8           6           8          6
                Std. Deviation     13.876      9.621       13.501     8.337
Average         Mean               83.00       80.50       71.50      63.63
                N                  6           8           6          8
                Std. Deviation     6.481       12.536      21.135     15.775
Above Average   Mean               82.67       89.50       72.42      61.00
                N                  12          8           12         8
                Std. Deviation     10.360      10.323      13.840     19.405
High            Mean               88.00       79.67       93.00      72.00
                N                  1           3           1          3
                Std. Deviation     .           5.774       .          6.928
Total           Mean               83.00       81.13       73.67      67.07
                N                  30          30          30         30
                Std. Deviation     10.306      12.640      14.898     14.678
Were there significant differences in post-test scores between the CUSE groups?
To answer this question, a one-way ANOVA was run, using post-test scores as the
dependent variable and CUSE rank as the independent variable. No significant
variability of post-test mean scores was found between the CUSE groups
[F (4, 55) = 1.072, p = .379]. Levene's test was non-significant (.498), indicating
equal variances. Again, a Kruskal-Wallis nonparametric test was run to confirm these
parametric results and indicated non-significance [χ² (4) = 3.367, p = .498].
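An analogous one-way ANOVA with a Kruskal-Wallis confirmation might be sketched as follows, using hypothetical scores for three illustrative groups:

```python
from scipy import stats

# Hypothetical post-test scores for three groups (illustrative values only)
low = [70, 75, 68, 80, 72]
average = [74, 78, 71, 83, 76]
high = [79, 85, 74, 88, 81]

# One-way ANOVA: tests whether any group mean differs from the others
f_stat, f_p = stats.f_oneway(low, average, high)

# Kruskal-Wallis: rank-based confirmation of the parametric result
h_stat, h_p = stats.kruskal(low, average, high)
print(f"F = {f_stat:.3f} (p = {f_p:.3f}); H = {h_stat:.3f} (p = {h_p:.3f})")
```

Running the rank-based test alongside the ANOVA, as was done throughout this chapter, guards against conclusions that depend on the normality assumption.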
Were there any differences in gain scores between the CUSE groups within each
instructional (No MM/MM) group? A one-way ANOVA was run, using gain scores as
the dependent variable and CUSE rank as the independent variable, split by
instructional group. Homogeneity of variances was confirmed by a nonsignificant
Levene's test for each instructional group. Significant variability of gain scores was
found in the MM Group [F (4, 25) = 3.188, p = .03]. A nonparametric Kruskal-Wallis
test confirmed significance [χ² (4) = 9.841, p = .043] in the MM Group. Additionally,
a post hoc Scheffé test indicated significant differences in gain score means between
the Above Average and Very Low groups (p = .043) in the MM Group. The significance
level of the Scheffé test is designed to allow all possible linear combinations of
group means to be tested, not just pairwise comparisons. As a result, the Scheffé
test is often more conservative than other tests, meaning that a larger difference
between means is required for significance. The least significant difference (LSD)
pairwise multiple comparison test was also significant for gain score means in the MM
Group between the Above Average and Very Low groups (p = .002) and between the
Average and Very Low groups (p = .037).
When looking at gain scores between instructional groups (No MM/MM) and
CUSE groups, it appeared that there might be a significant interaction (Figure 5).
However, when running a 2 x 5 ANOVA, no significance was indicated for interaction of
CUSE groups and instructional groups [F (4) = 1.545, p = .203]. Post hoc tests also did not
indicate significance.
Figure 5. CUSE groups and gain scores by instructional groups. (Line graph of mean
gain scores, ranging from -10 to 35, for the No MM and MM groups across the five
CUSE categories, Very Low through High.)
An independent samples t-test was then run to determine if there were any
differences in gain scores of each CUSE group compared by type of instruction.
Significant gain score differences were noted between the No MM and MM groups within
the Very Low [t (6) = 3.291, p = .017] and Above Average [t (18) = 2.145, p = .046]
categories. Levene's test was nonsignificant for all groups, indicating homogeneity
of variances.
Visual and Verbal Learning Styles Analyses
Did any relationships exist between learning styles and learning outcomes?
When exploring the distribution of the five categories (Balanced, Moderate Verbal,
Moderate Visual, Strong Verbal, and Strong Visual) divided by groups, it was noted
that Moderate Verbal learners constituted 20% of the MM Group and 10% of the No MM
Group. Strong Verbal learners were the least represented learning style, constituting
only 3.3% of each group. Strong Visual learners came in second to Balanced learners
in the No MM Group, making up 20% of that group. In comparison, Strong Visual
learners constituted 13.3% of the MM Group, coming in fourth after Balanced,
Moderate Verbal, and Moderate Visual. The distribution of learner categories in each
group is shown in Figures 6 and 7.
Figure 6. Percentages of learning styles in the Visual/Verbal continuum for the No MM
group. (Bar chart of the percentage of students, 0 to 60%, in each Vis/Vrb rank:
Balanced, Moderate Verbal, Moderate Visual, Strong Verbal, Strong Visual.)
Figure 7. Percentages of learning styles for the MM group. (Bar chart of the
percentage of students, 0 to 50%, in each Vis/Vrb rank: Balanced, Moderate Verbal,
Moderate Visual, Strong Verbal, Strong Visual.)
Was there a difference in learning styles compared to post-test scores? A one-way
ANOVA was run, using post-test scores as the dependent variable and learning
styles as the independent variable. Levene's statistic was non-significant (.238),
indicating homogeneity of variances. While the ANOVA indicated non-significance
[F (4, 55) = 1.696, p = .164], an LSD post hoc statistic showed a significant
difference in post-test scores between the Moderate Verbal and Strong Visual groups
(p = .016), with the Moderate Verbal group having a mean post-test score 12.33 points
higher than the Strong Visual group. However, since this was an analysis of post-test
scores, a non-parametric version of a one-way ANOVA (Kruskal-Wallis) was conducted,
which indicated non-significance (χ² = 8.817, p = .066). Notably, the mean ranks
output of this test indicated that the Moderate Verbal learners scored highest
(Table 8).
Table 8
Mean Ranks of Visual/Verbal Preferences Compared to Post-Test Scores

Vis/Vrb Rank    N     Mean Rank
Balanced        29    32.16
ModVRB          9     39.78
ModVIS          9     25.00
StrongVRB       2     34.50
StrongVIS       10    18.55
Total           59
Were there any differences in test score gains within each of the learning styles
groups? A paired samples t-test was conducted, using post- and pre-test scores as the
paired samples and splitting the data file by learning style groups. Statistical
significance was found in the Balanced [t (28) = 4.190, p = .000, d = .79] and Moderate
Verbal groups [t (8) = 4.906, p = .001, d = 1.79]. This significance was verified by a
significant Wilcoxon Signed Ranks test, Balanced [z = 3.524, p = .000] and Moderate
Verbal [z = 2.677, p = .007].
Experience with Microsoft Access
Did database experience groups differ from one another when compared on post-test
scores? This question was answered by performing another one-way ANOVA, with
post-test scores as the dependent variable and experience with Microsoft Access as
the independent variable (Table 9).
Table 9
Test Scores Categorized by Experience with Microsoft Access

Experience with Microsoft Access    N    Mean    Std. Deviation   Std. Error
Post-test   None                    39   82.21   10.964           1.756
            Pretty Good             19   81.42   13.234           3.036
            Good                    2    85.50   3.536            2.500
            Total                   60   82.07   11.473           1.481
Pre-test    None                    39   69.85   14.190           2.272
            Pretty Good             19   69.74   16.772           3.848
            Good                    2    86.50   9.192            6.500
            Total                   60   70.37   15.036           1.941
There was no statistical significance [F (2, 57) = .119, p = .888]; therefore, the
variability of post-test means between the groups was not larger than expected.
Additionally, Levene's Test was non-significant (.265), indicating homogeneity of
variances, an important assumption of the test. Pre-test scores were also analyzed
between groups, again with no significant differences [F (2, 57) = 1.119, p = .309].
As an additional safeguard, a Kruskal-Wallis test also indicated non-significance
between groups in post-test score means (χ² = .167, p = .920). The ranked means chart
also provides helpful information (Table 10).
Table 10
Mean Ranks of Experience with Microsoft Access Compared to Post-Test Scores

Access Experience    N     Mean Rank
None                 39    30.74
Pretty Good          19    29.58
Good                 2     34.50
Total                60
Was there a statistical difference between the pre- and post-test scores of each
database experience group? In a paired samples t-test using pre- and post-test scores
as dependent pairs, with the data file split by database experience group (the
independent variable), both the "None" [t (38) = 4.52, p < .005, d = .72] and
"Pretty Good" [t (18) = 2.962, p < .05, d = .70] categories showed statistical and
practical significance. A non-parametric version of this test (Wilcoxon Signed Ranks)
was run to confirm significance (Table 11).
Table 11
Experience with Microsoft Access across Gain Scores (Wilcoxon Signed Ranks)

Experience with
Microsoft Access                         Post-test minus Pre-test
None           Z                         3.840
               Asymp. Sig. (2-tailed)    .000
Pretty Good    Z                         2.774
               Asymp. Sig. (2-tailed)    .006
Was there a statistical difference between the pre- and post-test scores of each
database group when divided by instructional groups (No MM and MM)? Test score
gains of the “None” and “Pretty Good” database experience groups were analyzed, with
statistical significance noted in the “None” [t (17) = 2.867, p = .011, d = .695] and “Pretty
Good” [t (10) = 2.448, p = .034, d = .77] categories of the No MM Group and “None” [t
(20) = 3.499, p = .002, d = .78] of the MM Group. To confirm these findings, a nonparametric Wilcoxon Signed Ranks test was run. Significance was found only in the
“None” category of both the MM (z = 2.784, p = .005) and No MM (z = 2.709, p = .007)
groups.
Interaction Effects of CUSE Rankings, ILS Groups, Experience with Database
Software, and Instructional Groups on Gain Scores
Were there any interaction effects among CUSE rankings, visual/verbal
learning preferences, experience with database software, and instructional groups (No
MM/MM) when compared to gain scores? In an ANOVA with the above variables,
significant interactions were noted among CUSE rankings, visual/verbal learning
preferences, and instructional groups [F (3, 59) = 3.472, p = .037, R² = .354].
However, it should be noted that the effect size of this interaction is not
practically significant. The interaction of CUSE rankings and experience with
database software approached significance, but again lacked practical significance
[F (2, 59) = 3.518, p = .05, R² = .27].
Correlations between CUSE Scores, Visual/Verbal Learning Preferences, Experience
with Database Software, and Pre- and Post-Test Scores
Were there any significant correlations between CUSE rankings, visual/verbal
learning preferences, experience with database software, and pre- and post-test scores?
Multiple correlations were run on the above variables, with a statistically significant
correlation appearing between experience with database software and CUSE raw scores
(N = 34, r = .522, p < .01). As experience with database software increased, so did
CUSE raw scores.
Predicting Post-Test Scores using Regression Analyses
Multimedia-enhanced instruction was predicted to have some effect on learning.
Other variables, such as computer self-efficacy, visual/verbal learning styles, and
background knowledge of database software, were also considered potentially
important. This leads to another question: would a student's type of instruction
(No MM or MM), learning style, computer self-efficacy, and/or background knowledge of
database software predict post-test scores? For this part of the analysis, a multiple
regression was used to predict post-test scores (the dependent variable) from the
following predictors (independent variables): type of instruction (No MM or MM),
CUSE rankings, experience with database software, and visual/verbal learning
preferences. This analysis is important because of its ability to identify predictors
of post-test scores.
Two models were used in this regression analysis, with type of instruction (MM
or No MM) in the first model, and CUSE rankings, experience with database software,
and visual/verbal learning preferences added in the second model. When performing a
multiple regression using post-test scores as the outcome and the above predictors,
it was found that type of instruction alone accounted for only 1.7% of the variance
in post-test scores. The inclusion of the other predictors brought this percentage up
to only 12.8%. The assumption of independence of errors was confirmed by a
Durbin-Watson statistic of 2.243, which is reasonable since it is close to two
(Field, 2003).
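The hierarchical two-model regression, including the R-squared change and the Durbin-Watson statistic, could be sketched with NumPy alone. The data below are simulated and hypothetical, not the study data:

```python
import numpy as np

def r_squared(X, y):
    """R-squared and residuals from an OLS fit of y on X (intercept added)."""
    X = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1 - np.sum(resid ** 2) / np.sum((y - y.mean()) ** 2), resid

def durbin_watson(resid):
    """Durbin-Watson statistic; values near 2 suggest independent errors."""
    return np.sum(np.diff(resid) ** 2) / np.sum(resid ** 2)

rng = np.random.default_rng(0)
n = 60
instruction = rng.integers(0, 2, n).astype(float)  # 0 = No MM, 1 = MM
cuse_rank = rng.integers(1, 6, n).astype(float)    # 1-5 CUSE ranking
access_exp = rng.integers(0, 3, n).astype(float)   # database experience level
posttest = 70 + 2 * cuse_rank + 3 * instruction + rng.normal(0, 10, n)

# Model 1: type of instruction only; Model 2: add the remaining predictors
r2_model1, _ = r_squared(instruction.reshape(-1, 1), posttest)
r2_model2, resid2 = r_squared(
    np.column_stack([instruction, cuse_rank, access_exp]), posttest)
print(f"Model 1 R² = {r2_model1:.3f}; Model 2 R² = {r2_model2:.3f}; "
      f"ΔR² = {r2_model2 - r2_model1:.3f}; DW = {durbin_watson(resid2):.2f}")
```

The R-squared change between the nested models corresponds to the "R Square Change" column reported in Table 12.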
Table 12
Regression Statistics Details

                                        Change Statistics
Model   R        R       Adjusted   R Square   F        df1   df2   Sig. F    Durbin-
                 Square  R Square   Change     Change               Change    Watson
1       .130(a)  .017    .000       .017       .977     1     57    .327
2       .357(b)  .128    .063       .111       2.284    3     54    .089      1.769

a Predictors: (Constant), Type of Instruction
b Predictors: (Constant), Type of Instruction, CSE Rating, Vis/Vrb Rank, Access Exp
c Dependent Variable: Post-test
However, a regression analysis also offers additional helpful information.
Estimates of the beta values describe the relationship between post-test scores and
each predictor. If a value is positive, there is a positive relationship between the
predictor and the outcome. For instance, the coefficient for CUSE ranking was
positive (2.352), which means that as a student's computer self-efficacy rank
increases, so should test scores.
Regression output also includes case-wise diagnostics, a list of cases whose
standardized residuals differ greatly from their predicted values. In this analysis,
no such cases were identified. Additionally, the assumption of no multicollinearity
was met, since VIF and tolerance values were close to one. The collinearity
diagnostics also confirmed this assumption, with each predictor loading on a
different dimension.
High Speed Internet Access and Post-Test Scores in the MM Group
Although qualitative research is better known for its defining characteristic of
emergent data, one could argue that quantitative research can share this
characteristic. In quantitative research, the researcher defines most of the
questions and/or hypotheses ahead of time, creates the methodology, and details the
analyses of data that will reveal the answers. However, in the process of discovering
those answers, it is almost inevitable that the researcher will discover other
questions or other significant findings that were never originally conceived. This
study is no exception. When performing various statistical tests, the researcher
discovered that the availability of high speed Internet access might have some
relationship to post-test scores. In the MM Group, the post-test scores of students
who had high speed Internet appeared higher than the scores of those who did not
(Table 13).
Table 13
High Speed Internet Users' Mean Scores by Instructional Groups

Group                High Speed Internet?   N    Mean    Std. Deviation   Std. Error Mean
No MM   Post-test    No                     9    81.56   14.917           4.972
                     Yes                    21   83.62   7.978            1.741
MM      Post-test    No                     8    71.63   10.676           3.775
                     Yes                    22   84.59   11.648           2.483
Thus, the question arose, "Did access to high speed Internet account for higher
post-test scores within the MM Group?" Each group had almost the same percentage of
access to high speed Internet: 70% of the No MM Group and 73.3% of the MM Group.
When running an independent samples t-test, using post-test scores as the
dependent variable and high speed Internet access as the grouping variable, the
post-test scores of those in the MM Group who had high speed Internet were
significantly higher than the scores of those who did not [t (28) = 2.752, p < .05,
d = 1.03]. The MM Group post-test mean for those who had high speed Internet was
84.59 (SD = 11.648), while the mean for those who did not was 71.63 (SD = 10.676).
Also, Levene's Test for Equality of Variances was non-significant (.634) for the MM
post-test scores, confirming the equality of variances.
To confirm this significance, a non-parametric test (Mann-Whitney) was also run.
Significance was confirmed in this test (z = 2.949, p = .003).
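The pooled-variance t statistic can be reproduced from the summary statistics in Table 13 alone. The sketch below, a minimal illustration in Python with SciPy (not part of the original analysis, which was run in SPSS), recomputes the t-test and a pooled-SD Cohen's d; small differences from the reported values reflect rounding in the table, and the reported d = 1.03 may have used a different standardizer. The Mann-Whitney check cannot be reproduced this way, since it requires the raw scores.

```python
from math import sqrt
from scipy.stats import ttest_ind_from_stats

# Summary statistics for MM Group post-test scores (Table 13).
mean_hs, sd_hs, n_hs = 84.59, 11.648, 22   # high speed Internet
mean_no, sd_no, n_no = 71.63, 10.676, 8    # no high speed Internet

# Independent samples t-test from summary statistics, with equal
# variances assumed (per the non-significant Levene's test).
t, p = ttest_ind_from_stats(mean_hs, sd_hs, n_hs,
                            mean_no, sd_no, n_no, equal_var=True)

# Cohen's d using the pooled standard deviation as the standardizer.
sd_pooled = sqrt(((n_hs - 1) * sd_hs**2 + (n_no - 1) * sd_no**2)
                 / (n_hs + n_no - 2))
d = (mean_hs - mean_no) / sd_pooled

print(f"t({n_hs + n_no - 2}) = {t:.3f}, p = {p:.3f}, d = {d:.2f}")
```

Run as-is, this yields t(28) ≈ 2.75 with p < .05, matching the significance reported above.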
In the next chapter, the researcher will discuss findings and conclusions,
bringing this research full circle, while asking more questions and providing ideas for
future research.
CHAPTER 5: CONCLUSIONS
Revisiting the Original Research Questions
While Chapter Four helps unravel the suspense involved in a research study,
Chapter Five brings it all back together, tries to make sense of it, and turns everything
into a cohesive, comprehensible, and captivating conclusion. By this time, the researcher
has a good understanding of the analysis and how it might be interpreted. Of course, it
is always important to revisit the original research questions:
1. Does the inclusion of research-based multimedia-enhanced instruction have
any significant effect on student learning in an online learning environment
compared to student learning in an online learning environment without the
multimedia-enhanced instruction?
2. Would learning be significantly affected by a student’s visual/verbal
learning preference, computer self-efficacy, and/or experience with database
software?
Conclusions
This research offered many insights and opportunities for more research. A list of
the conclusions is provided for ease of reading:
• There was no difference in test scores between the MM Group and No MM Group.
• Both groups had significant gains from pre- to post-test scores, confirming Mayer's multimedia principle.
• Gain scores of students who were in the "Very Low" category of the computer user self-efficacy (CUSE) scale did not increase significantly, confirming self-efficacy research.
• Gain scores increased significantly between the Very Low and Above Average CUSE groups and the Very Low and Average CUSE groups in the MM Group, indicating that as a student's computer self-efficacy increased, so did learning.
• Gain scores were significantly higher for the MM Group than the No MM Group in the Above Average CUSE ranking. The higher CUSE ranking might be a helpful factor in a student's success with multimedia instruction.
• Lower-knowledge learners had significant improvements in their test scores, confirming Mayer's individual differences principle.
• Moderate to Strong Visual learners did not experience significant gain scores, indicating a possible need to align assessment with instruction.
• Post-test scores of students who had high speed Internet in the MM Group were significantly higher than those of students who did not.
A detailed discussion of these findings follows.
Question One
The researcher was very interested in comparing dynamic multimedia-enhanced
instruction to a more static multimedia, textbook-based instruction. In this research,
there was no difference in test scores between the MM and No MM groups. However,
each group had significant differences in scores between the pre- and post-tests. This
finding confirms Mayer's (2003) multimedia principle, which states that we learn better
from words and pictures than from words alone. Most likely, each group learned
because each group experienced carefully chosen words and pictures.
However, it was still anticipated that the MM Group might have higher
learning outcomes, because some students may have responded more positively
to the dynamic or "high tech" multimedia instruction, among a multiplicity of
other variables. Mayer (2003) states in his modality principle that multimedia
presentations involving both words and pictures should be created using auditory or
spoken words, rather than written text to accompany the pictures. Therefore, the
multimedia-enhanced treatment should have offered better instruction than the
textbook, since the multimedia lessons were narrated.
There are possible explanations for the lack of improved learning in the MM
Group. For instance, although participants in the MM Group were told to complete the
multimedia instruction, some may have decided to use the text instead. Students who
did not have high speed Internet may have given up going through the lessons. The
textbook also provided multimedia instruction, although in a more static fashion.
Additionally, this study was performed in a short time frame, with only a pre- and
post-test. If the research had been conducted over several months or over a series of different
lessons, for example, the multimedia treatment may have produced greater learning
gains than the control group. Post-test scores may have been influenced by the pre-test,
which was identical. Also, the sample size (60) was limited. And, as stated above, the
research was conducted in an educational setting, where extraneous and confounding
variables are very difficult, if not impossible, to control.
A discussion of the higher than expected standard deviations was included in
Chapter Four. Although high standard deviations could indicate non-normal
distributions of data, it is the contention of this researcher that this was normal and
expected. Students in an introductory technology course normally demonstrate widely
variable computer skills. Also, it should be expected that test scores will vary in an area
in which students have little background knowledge due to guessing and other
uncontrollable variables. At the inception of this research, only 3.3% of the total
participants reported having a good understanding of Microsoft Access while fully 65%
reported having no experience with the software. To address any potential
problems with higher than expected standard deviations, the researcher experimented
with removing outliers and with adjusting mean scores to reflect the median. However,
after careful examination of the data, the researcher determined that it was more
prudent to retain all of the data, since it all appeared accurate.
Question Two
The other question of this research was: "Would learning be significantly affected
by a student’s computer self-efficacy, knowledge of database software, and/or
visual/verbal learning preference?"
The researcher discovered that the students who fell into the "Very Low"
category of the CUSE survey did not have significant gains from pre- to post-test scores.
These students also had the lowest post-test score of all of the CUSE groups. These
findings confirm other researchers' suggestions that a student's belief in his/her own
capabilities affects performance. Self-efficacy has already been identified as a positive
predictor of academic performance within the social sciences (Lee & Bobko, 1994),
English (Pajares & Johnson, 1994), mathematics (Pajares & Miller, 1995), and health
sciences (Eachus, 1993; Eachus & Cassidy, 1997). Did a student's lack of confidence in
computer skills affect the test scores and thus learning outcomes of the "Very Low"
category of the CUSE rankings? In a regression analysis, however, CUSE scores had no
predictive value for gain scores or for pre- and post-test scores. Therefore, while the
researcher recognizes the importance of evaluating a student's confidence with
computers in technology applications courses, computer self-efficacy was not found to
be a predictor of test scores in this research.
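Since the raw data are not reproduced here, the regression just described can only be sketched. The fragment below, assuming Python with SciPy and using entirely hypothetical CUSE and gain scores (not the dissertation's data, which were analyzed in SPSS), shows the form such a simple regression takes; a slope near zero with p > .05 corresponds to the "no predictive value" finding.

```python
from scipy.stats import linregress

# Hypothetical illustration only: CUSE scores and pre-to-post gain
# scores for eight students (not the dissertation's actual data).
cuse_scores = [1, 2, 3, 4, 5, 6, 7, 8]
gain_scores = [5, 6, 5, 6, 5, 6, 5, 6]

# Simple linear regression: do CUSE scores predict gain scores?
result = linregress(cuse_scores, gain_scores)

# A near-zero slope with p > .05 mirrors the finding that CUSE
# had no predictive value for test score gains.
print(f"slope = {result.slope:.3f}, p = {result.pvalue:.3f}")
```

With these illustrative numbers the slope is essentially flat and non-significant; on the real data the same procedure led to the conclusion reported above.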
Also, gain scores were significantly higher for the MM Group than the No MM
Group in the Above Average CUSE ranking. The more confidence a student has with
computers might be a contributing factor in a student’s success with multimedia
instruction.
Experience with Microsoft Access seemed to make a difference in student gain
scores: in both the MM and No MM groups, students who rated their experience with
the software as "None" showed notable gains. This variability in gain scores by these
learners is consistent with Mayer's (2003) individual differences principle, which states
that lower-knowledge learners learn better from multimedia instruction. The researcher suggests
that developing and using multimedia instruction for lower-knowledge learners be an
essential part of an instructor's arsenal of multimedia tools.
The researcher discovered that Balanced and Moderate Verbal learners had
significant gain scores from pre- to post-tests, while the Moderate and Strong Visual
learners had nonsignificant gain scores. Additionally, the post-test scores of the
Moderate and Strong Visual learners were the lowest among the groups. This raises
some interesting questions: Did the textual format of the test instrument affect gain
scores in moderate and/or strong visual learners? Were the Balanced and Moderate
Verbal learners able to better negotiate the text-based format of the pre- and post-tests?
Is it possible that high speed Internet made a difference in the MM Group? This
was a question that arose from analyzing the data. As discussed in the previous chapter,
the difference in post-test score means between those who had high speed Internet and
those who did not in the MM Group was significant in both the parametric and
non-parametric tests. Although the multimedia lessons were meant for online delivery and
were compressed to be deliverable in a reasonably quick format, the lessons would
download much more slowly on dial-up connections. Also, the multimedia lessons did
not provide students with information on how long they would take to download, a
potential deterrent to viewing. Having high speed Internet may make a difference in
learning when accessing online multimedia productions.
Recommendations for Future Research
Recommendations for future research are summarized below and then discussed
in more detail in the remainder of this chapter:
• improve and refine definitions of multimedia instruction;
• investigate the role that computer self-efficacy (CSE) plays in learning and identify strategies that will improve a student's CSE;
• evaluate the enjoyment level of interaction with multimedia;
• study the issue of time and how that relates to learning with multimedia;
• research the effects of student control in multimedia learning; and
• evaluate assessment design in learning.
Potential benefits of multimedia instruction are discussed throughout this study.
Multimedia-enhanced instruction may not be better than any other well-designed
learning environment, but it may offer different and additional options. Research tells us
that carefully chosen words and pictures can enhance a learner’s understanding of an
explanation better than words alone (Mayer, 2003). Therefore, a well-designed textbook
that includes "carefully chosen words and pictures" might very well eliminate the need
to spend additional time preparing multimedia instruction to supplement that text.
Instead, it might be more prudent to prepare customized and "just in time" multimedia
instruction to address individual student needs or areas of a textbook that do not
include these carefully chosen words and pictures. Future research should continue to
address the evolving fields of multimedia learning, the different formats it can take, and
the problems in designing research that can substantiate its effectiveness.
Knowing a student's computer self-efficacy and strategies on how to improve it if
low might be an essential element to learning in technology courses. As discovered in
this study, students with a "Very Low" computer self-efficacy did not significantly
improve their test scores. Also, gain scores were significantly higher in the MM Group
for those students who were Above Average in the CUSE scale. Might it be possible to
improve student learning by also improving their confidence with computers? Ways
of enhancing a student's computer self-efficacy would be an important contribution to
this field of research and might offer additional insights into improving learning.
Evaluating student enjoyment in using one method or the other would be
another valuable insight into multimedia instruction. While student test scores may not
differ, student experience with the multimedia instruction might be very different. For
instance, students may enjoy the multimedia more than reading and working through
the textbook, or they might feel more comfortable with this type of learning. On the
other hand, students in the age group of 30 and under who described themselves as
more verbal learners might enjoy multimedia instruction more. It has been suggested
that a student's enjoyment of multimedia may increase his or her capacity to learn, since
they are more likely to persist (L. R. Rieber, 1996).
Other variables that were not addressed in this study might also be important in
studying multimedia learning. The amount of time students take to complete
multimedia instruction would be an appropriate variable.
Does it take more or less time to interact with multimedia instruction? If it takes more
time, does a student become disinterested and discontinue the instruction?
The issue of student control in multimedia learning would be another variable to
examine. The flexibility of multimedia technology permits the design of courses where
students can control not only the pacing of instruction, but also their navigation among
and within lessons. Mayer's interactivity principle states that students learn better when
they can control the presentation rate of multimedia explanations. In this research, all
multimedia lessons could be controlled by the student. However, there is conflicting
research on the effectiveness of learner control. It has been suggested that navigation
between and within lessons combined with unguided or minimally guided instruction
often inhibits learning for students with less prior knowledge of the course subject
matter (Tuovinen & Sweller, 1999). On the other hand, strong instructional guidance
also seems to inhibit the learning of more advanced students (de Jong &
van Joolingen, 1998). The idea of using multimedia to create individualized instruction
seems especially beneficial in this area, as some students may require more guidance
than others. Also, teachers’ knowledge of students' understanding of the material seems
highly important in the realm of individualized instruction. Although individualized
instruction can be provided in many instructional formats, multimedia instruction is
well-suited to address this need.
Another question that arose from this research concerns appropriate assessment
design. As the researcher discovered, students who rated themselves as moderate to
strong visual learners did not experience significant gain scores, and these two groups
had the lowest post-test scores. Therefore, the researcher posits that the type of
assessment might be a possible explanation for this lack of improvement. The
assessments for this research were entirely text-based, which does not complement or
mimic the instruction that the students received, both in the textbook and in the
multimedia treatment. Would it be more appropriate and sensible to include images in
the assessments, to match the instruction? As the Educational Technology Assessment
(ETA) becomes more sophisticated, both in its delivery and design, a research study
exploring possible differences in scoring
outcomes between a text-based assessment and one that includes images could be a
worthwhile and very informative project.
High speed Internet seemed to have an effect on student learning in the MM
Group of this study, which might prompt additional questions and research. While the
researcher examined this variable, it was not originally a research question; in research,
it is impossible to ask all of the questions that arise, but this one merits further
exploration. How might a student's Internet speed affect interaction with multimedia
lessons? Would a slower download speed deter lesson interaction?
This research implicitly directs instructors to explore the arena of multimedia
learning further, as they need to keep abreast of new technologies and how to use
them. Although this research did not result in measurable differences or
improvements in learning due to the multimedia treatment, insights and new questions
offered by the additional variables provided rich resources for further research. The new
generation of students entering our classrooms demand and expect sophisticated,
relevant, and accessible learning. Coupled with the understanding that “[e]ffective
instruction, independent of particular media, is based upon the selection and
organization of instructional strategies, and not simply the medium per se” (Hannafin &
Hooper, 1993, p. 192), we need to continue to evaluate multimedia as one element in the
complex, ever-changing structure of teaching.
REFERENCES
Abdal-Haqq, I. (1998). Constructivism in teacher education: Considerations for those who
would link practice to theory (No. EDOSP978). Washington, DC: U.S. Department
of Education.
Academic Affairs Annual Report. (2002). Boise, ID: Boise State University.
Allen, I. E., & Seaman, J. (2004). Entering the mainstream: The quality and extent of online
education in the United States. Needham, MA: The Sloan Consortium.
Allen, M. W. (2003). Michael Allen's guide to e-learning: Building interactive, fun, and
effective learning programs for any company. Hoboken, NJ: John Wiley & Sons.
Bachman, H. (1995). The online classroom for adult learners: An examination of teaching style
and gender equity. Blacksburg, VA: Virginia Polytechnic Institute and State
University.
Baddeley, A. D. (1999a). Human memory. Needham Heights, MA: Allyn & Bacon.
Baddeley, A. D. (1999b). Working memory. New York: Oxford University Press.
Baggett, P. (1984). Role of temporal overlap of visual and auditory material in forming
dual media associations. Journal of Educational Psychology, 76, 408-417.
Baggett, P. (1989). Understanding visual and verbal messages. In H. Mandl & J. R. Levin
(Eds.), Knowledge acquisition from text and pictures. Amsterdam: Elsevier.
Baggett, P., & Ehrenfeucht, A. (1983). Encoding and retaining information in the visuals
and verbals of an educational movie. Educational Communications and Technology
Journal, 31, 23-32.
Bandura, A. (1977). Self-efficacy: Toward a unifying theory of behavioral change.
Psychological Review, 84, 191-215.
Bandura, A. (1982). Self-efficacy mechanism in human agency. American Psychologist, 37,
122-147.
Bandura, A., & Schunk, D. H. (1981). Cultivating competence, self-efficacy and intrinsic
interest through self-motivation. Journal of Personality and Social Psychology, 41,
586-598.
Barling, J., & Beattie, R. (1983). Self-efficacy beliefs and sales performance. Journal of
Organizational Behavior Management, 5, 41-51.
Bennett, G., & Green, F. P. (2001). Student learning in the online environment: No
significant difference? Quest, 53, 1-13.
Bischoff, W. R., Bisconer, S. W., Kooker, B. M., & Woods, L. C. (1996). Transactional
distance and interactive television in the distance education of health
professionals. American Journal of Distance Education, 10(3), 4-19.
Bodner, G., Klobuchar, M., & Geelan, D. (2000). The many forms of constructivism.
Retrieved September 17, 2000, from
http://www.univie.ac.at/cognition/constructivism/paper.html
Bork, A. (2001). What is needed for effective learning on the Internet? Educational
Technology and Society, 4. Retrieved October 11, 2004, from
http://ifets.ieee.org/periodical/vol_3_2001/bork.html
Bouffard-Bouchard, T. (1990). Influence of self-efficacy on performance in a cognitive
task. The Journal of Social Psychology, 130, 353-363.
Brown, S. D., Lent, R. W., & Larkin, K. C. (1989). Self-efficacy as a moderator of
scholastic aptitude-academic performance relationships. Journal of Vocational
Behavior, 35, 64-75.
Busch, T. (1995). Gender differences in self-efficacy and attitudes toward computers.
Journal of Educational Computing Research, 12, 147-158.
Carswell, L., Thomas, P., Petre, M., Price, B., & Richards, M. (2000). Distance education
via the Internet: The student experience. British Journal of Educational Technology,
31(1), 29-46.
Cassidy, S., & Eachus, P. (2002). Developing the computer user self-efficacy (CUSE)
scale: Investigating the relationship between computer self-efficacy, gender and
experience with computers. Journal of Educational Computing Research, 26(2), 21.
Chen, Y. J., & Willits, F. K. (1999). Dimensions of educational transactions in a
videoconferencing learning environment. American Journal of Distance Education,
13(1), 45-49.
Clark, R. C., & Mayer, R. E. (2003). e-Learning and the science of instruction. San Francisco:
Pfeiffer.
Clark, R. E. (1983). Reconsidering research on learning from media. Review of Educational
Research, 53, 445-459.
Collins, B., Hemmeter, M., Schuster, J., & Stevens, K. (1996). Using team teaching to
deliver coursework via distance learning technology. Paper presented at the Rural
goals 2000: Building programs that work, Baltimore, MD.
Compeau, D. R., & Higgins, C. A. (1995). Computer self-efficacy: Development of a
measure and initial test. MIS Quarterly, 19(2), 189-211.
Cooper, L. W. (2001). A comparison of online and traditional computer applications
classes. T.H.E. Journal, 28(8), 52-55.
Cronbach, L. J., & Snow, R. E. (1977). Aptitudes and instructional methods: A handbook for
research on interactions. New York: Irvington.
Curry, L. (1983). An organization of learning styles theory and constructs (ERIC document
No. 235185). Washington, DC.
Decker, C. A. (1998). Training transfer: Perceptions of computer use self-efficacy among
employees. Journal of Technical and Vocational Education, 14(2), 23-39.
de Jong, T., & van Joolingen, W. R. (1998). Scientific discovery learning with computer
simulations of conceptual domains. Review of Educational Research, 68(2), 179-201.
Denman, W. (1995). Bridging the gap: Teaching a basic public speaking course over satellite
television. Paper presented at the 81st Annual Meeting of the Speech
Communication Association, San Antonio, TX.
Dillon, A., & Gabbard, R. (1998). Hypermedia as an educational technology: A review of
the quantitative research literature on learner comprehension, control, and style.
Review of Educational Research, 68(3), 322-349.
Eachus, P. (1993). Development of the health student self-efficacy scale. Perceptual and
Motor Skills, 77, 670.
Eachus, P., & Cassidy, S. (1997). Self-efficacy, locus of control and styles of learning as
contributing factors in the academic performance of student health professionals. Paper
presented at the Proceedings of the First Regional Congress of Psychology for
Professionals in the Americas, Mexico City.
Eisner, E. (2005). Back to whole. Educational Leadership, 63(1), 14-18.
Ellery, P., Estes, S., & Forbus, W. (1998). Introduction. Quest, 50, 329-331.
Ellis, T., & Cohen, M. (2001). Integrating multimedia into a distance learning
environment: Is the game worth the candle? British Journal of Educational
Technology, 32(4), 495-497.
English, F. W. (1992). Deciding what to teach and test: Developing, aligning, and auditing
the curriculum. Newbury Park, CA: Corwin Press.
Ertmer, P. A., Everbeck, E., Cennamo, K. S., & Lehman, J. D. (1994). Enhancing
self-efficacy for computer technologies through the use of positive classroom
experiences. Educational Technology Research & Development, 42(3), 45-62.
Felder, R. M. (1996). Matters of style. ASEE Prism, 6(4), 18-23.
Felder, R. M. (2006). Resources in science and engineering education. Retrieved March
17, 2006 from http://www.ncsu.edu/felder-public/RMF.html
Felder, R. M., & Silverman, L. K. (1988). Learning and teaching styles in engineering
education. Engineering Education, 78(7), 674-681.
Felder, R. M., & Spurlin, J. (2005). Applications, reliability, and validity of the Index of
Learning Styles. International Journal of Engineering Education, 21(1), 103-112.
Field, A. (2003). Discovering statistics using SPSS for Windows: Advanced techniques for the
beginner. London: SAGE Publications.
Fleming, M., & Levie, W. H. (Eds.). (1993). Instructional message design (2nd ed.).
Englewood Cliffs, NJ: Educational Technology Publications.
Frear, V., & Hirschbuhl, J. J. (1999). Does interactive multimedia promote achievement
and higher level thinking skills for today's science students? British Journal of
Educational Technology, 30(4), 323-329.
Gagne, E. D., Yekovich, C. W., & Yekovich, F. R. (1993). The cognitive psychology of school
learning (2nd ed.). New York: BasicBooks.
Gall, M. D., Gall, J. P., & Borg, W. R. (2003). Educational research: An introduction (7th ed.).
Boston: Allyn & Bacon.
Gayol, Y. (1995). The use of computer networks in distance education: Analysis of the
patterns of electronic interactions in a multinational course. ACSDE Research
Monograph, 13.
Gillespie, F. (1998). Instructional design for the new technologies. New Directions for
Teaching and Learning, 76, 39-52.
Gorsky, P., & Caspi, A. (2005). A critical analysis of transactional distance theory. The
Quarterly Review of Distance Education, 6(1), 1-11.
Green, K. C. (2003). Chapter two: New beginnings. Syllabus. Retrieved October 11, 2004,
from http://www.campus-technology.com/article.asp?id=7629
Green, K. C. (2004). The 2004 national survey of information technology in U.S. higher
education. Encino, CA: The Campus Computing Project.
Gretes, J. A., & Green, M. (2000). Improving undergraduate learning with
computer-assisted assessment. Journal of Research on Computing in Education, 33(1), 46-54.
Hackett, G., & Betz, N. E. (1989). An exploration of the mathematics
self-efficacy/mathematics performance correspondence. Journal for Research in
Mathematics Education, 20, 261-273.
Hannafin, M. J., & Hooper, S. R. (1993). Chapter 4: Learning principles. In M. Fleming &
W. H. Levie (Eds.), Instructional Message Design (pp. 191-231). Englewood Cliffs,
NJ: Educational Technology Publications.
Herrington, J., & Oliver, R. (1999). Using situated learning and multimedia to investigate
higher-order thinking. Journal of Interactive Learning Research, 10(1), 3-24.
Hill, T., Smith, N. D., & Mann, M. F. (1987). Role of efficacy expectations in predicting
the decision to use advanced technologies: The case of computers. Journal of
Applied Psychology, 72(2), 307-313.
Honey, P. (2001). E-learning: A performance appraisal and some suggestions for
improvement. Learning Organization, 8(5), 200-202.
Idrus, R. M., & Lateh, H. H. (2000). Online distance education at the Universiti Sains
Malaysia, Malaysia: Preliminary perceptions. Educational Media International,
37(3), 197-201.
Jonassen, D. (2000). Computers as mindtools for schools: Engaging critical thinking (2nd ed.).
Upper Saddle River, NJ: Prentice Hall, Inc.
Jung, I. (2001). Building a theoretical framework of web-based instruction in the context
of distance education. British Journal of Educational Technology, 32(5), 525-534.
Kearsley, G. (2004). General model of dual coding theory. Retrieved January 2, 2006
from http://home.sprynet.com/~gkearsley
Kekkonen-Moneta, S., & Moneta, G. B. (2002). E-learning in Hong Kong: comparing
learning outcomes in online multimedia and lecture versions of an introductory
computing course. British Journal of Educational Technology, 33(4), 423-433.
Keller, J., & Burkman, E. (1993). Chapter 1: Motivation principles. In M. Fleming & W. H.
Levie (Eds.), Instructional message design (pp. 3-53). Englewood Cliffs, NJ:
Educational Technology Publications.
Kettanurak, V., Ramamurthy, K., & Haseman, W. D. (2001). User attitude as a mediator
of learning performance improvement in an interactive multimedia environment:
An empirical investigation of the degree of interactivity and learning styles.
International Journal of Human-Computer Studies, 54(4), 541-583.
Kinzie, M. B., Delcourt, M. A. B., & Powers, S. M. (1994). Computer technologies:
Attitudes and self-efficacy across undergraduate disciplines. Research on Higher
Education, 35(6), 745-768.
Koul, R., & Rubba, P. (1999). An analysis of the reliability and validity of personal
Internet teaching efficacy beliefs scale [Electronic Version]. Electronic Journal of
Science Education, 4. Retrieved February 7, 2006 from
http://unr.edu/homepage/crowther/ejse/koulrubba.html.
Kozma, R. B. (1991). Learning with media. Review of Educational Research, 61, 179-211.
Laurillard, D. (1998). Multimedia and the learner's experience of narrative. Computers
and Education, 31, 229-242.
Laurillard, D. (2003). Rethinking university teaching: A conversational framework for the
effective use of learning technologies. London: RoutledgeFalmer.
Lee, C., & Bobko, P. (1994). Self-efficacy beliefs: Comparison of five measures. Journal of
Applied Psychology, 79, 364-369.
Lewis, L., Alexander, D., & Westat, E. F. (1997). Distance education in higher education
institutions (No. NCES 98-062). Washington, DC: National Center for Education
Statistics.
Litzinger, T. A., Lee, S. H., Wise, J. C., & Felder, R. M. (2005). A study of the reliability and
validity of the Felder-Soloman Index of Learning Styles. Paper presented at the ASEE
Annual Conference.
Livesay, G. A., Dee, K. C., Nauman, E. A., & Hites, L. S., Jr. (2002). Engineering student
learning styles: A statistical analysis using Felder's Index of Learning Styles. Paper
presented at the ASEE Annual Conference, Montreal, Quebec.
Lloyd, B. H., & Gressard, C. (1984). Reliability and factorial validity of computer attitude
scales. Educational and Psychological Measurement, 42(2), 501-505.
Lorch, R. F. (1989). Text signaling devices and their effects on reading and memory
processes. Educational Psychology Review, 1, 209-234.
MacKinnon, A., & Scarff-Seatter, C. (1997). Constructivism: Contradictions and
confusion in teacher education. In V. Richardson (Ed.), Constructivist teacher
education: Building new understandings (pp. 38-55). London: The Falmer Press.
Marchionini, G. (1988). Hypermedia and learning: Freedom and chaos. Educational
Technology, 28(11), 8-12.
Martinez, M. (1999). A new paradigm for successful learning on the Web. International
Journal of Education Technology, 2(2).
Mayer, R. E. (1989). Systematic thinking fostered by illustrations in scientific text. Journal
of Educational Psychology, 81, 240-246.
Mayer, R. E. (2002). Cognitive theory and the design of multimedia instruction: An
example of the two-way street between cognition and instruction. New Directions
for Teaching and Learning, 89, 55-71.
Mayer, R. E. (2003). Multimedia learning. Cambridge: Cambridge University Press.
Mayer, R. E., & Anderson, R. B. (1991). Animations need narration: An experimental test
of a dual-coding hypothesis. Journal of Educational Psychology, 83, 484-490.
Mayer, R. E., Bove, W., Bryman, A., Mars, R., & Tapangco, L. (1996). When less is more:
Meaningful learning from visual and verbal summaries of science textbook
lessons. Journal of Educational Psychology, 88, 64-73.
Mayer, R. E., & Gallini, J. K. (1990). When is an illustration worth ten thousand words?
Journal of Educational Psychology, 82, 715-726.
McFarland, D. (1996). Multimedia in higher education. The Katharine Sharp Review, 3.
Retrieved from http://alexia.lis.uiuc.edu/review/summer1996/mcfarland.html
Moore, M. G. (1972). Learner autonomy: The second dimension of independent learning.
Convergence, 5(2), 76-88.
Moore, M. G. (1993). Transactional distance theory. In D. Keegan (Ed.), Theoretical
principles of distance education. New York: Routledge.
Moore, M. G., & Kearsley, G. (1996). Distance education: A systems view. New York:
Wadsworth.
Moreno, R., & Mayer, R. E. (1999). Visual representations in multimedia learning:
Conditions that overload visual working memory. Paper presented at the Third
International Conference on Visual Information Systems, Amsterdam.
Moreno, R., & Mayer, R. E. (2002). Learning science in virtual reality multimedia
environments: Role of methods and media. Journal of Educational Psychology,
94(3), 598-610.
Oblinger, D. G., & Oblinger, J. L. (Eds.). (2005). Educating the Net Generation. Boulder,
CO: EDUCAUSE.
Okamoto, T., Cristea, A., & Kayama, M. (2001). Future integrated learning
environments with multimedia. Journal of Computer Assisted Learning, 17(1), 4-12.
Oliver, M., MacBean, J., Conole, G., & Harvey, J. (2002). Using a toolkit to support the
evaluation of learning. Journal of Computer Assisted Learning, 18(2), 199-208.
Paivio, A. (1986). Mental representations: A dual coding approach. Oxford: Oxford
University Press.
Pajares, F., & Johnson, M. J. (1994). Confidence and competence in writing: The role of
self-efficacy, outcome expectancy and apprehension. Research in the Teaching of
English, 28, 313-331.
Pajares, F., & Miller, M. D. (1995). Mathematics self-efficacy and mathematics
performances: The need for specificity of assessment. Journal of Counseling
Psychology, 42, 190-198.
Pasquinelli, A. (1998). Higher education and information technology: Trends and issues. Palo
Alto, CA: Sun Microsystems.
Pelz, B. (2004). (My) three principles of effective online pedagogy. Journal of
Asynchronous Learning Networks, 8(3), 33-46.
Pollard, C., VanDehey, T., & Pollard, R. (2005). Curricular computing: Essential skills for
teachers. Boise, ID: Boise State University.
Quitadamo, I. J., & Brown, A. (2001). Effective teaching styles and instructional design for
online learning environments. Paper presented at the National Education
Computing Conference, Building on the Future, Chicago, IL.
Rieber, L. P. (1990). Animation in computer-based instruction. Educational Technology
Research and Development, 38, 77-87.
Rieber, L. P. (1996). Seriously considering play: Designing interactive learning
environments based on the blending of microworlds, simulations, and games.
Educational Technology Research & Development, 44(2), 43-58.
Rintala, J. (1998). Computer technology in higher education: An experiment, not a
solution. Quest, 50(4), 366-378.
Roberts, G. (2005). Technology and learning expectations of the Net Generation. In D. G.
Oblinger & J. L. Oblinger (Eds.), Educating the Net Generation. Boulder, CO: EDUCAUSE.
Russell, T. (1999). The no significant difference phenomenon. Chapel Hill, NC: Office of
Instructional Telecommunications, University of North Carolina.
Saba, F., & Shearer, R. L. (1994). Verifying the key theoretical concepts in a dynamic
model of distance education. American Journal of Distance Education, 8(1), 36-59.
Sadoski, M., & Paivio, A. (2001). Imagery and text: A dual coding theory of reading and
writing. Mahwah, NJ: Lawrence Erlbaum Associates.
Schnackenberg, H. L., Sullivan, H. J., Leader, L. R., & Jones, E. E. K. (1998). Learner
preferences and achievement under differing amounts of learner practice.
Educational Technology Research and Development, 45, 5-15.
Schneider, M. (2000). From b-school to e-venture. Business Week Online.
Seery, N., Gaughran, W. F., & Waldmann, T. (2003). Multi-modal learning in engineering
education. Paper presented at the ASEE Annual Conference.
Soles, C., & Moller, L. (1995). Myers-Briggs type preferences in distance learning
education. International Journal of Educational Technology, 2(2).
Spurlin, J. (2002). Unpublished manuscript.
Svetcov, D. (2000). The virtual classroom vs. the real one. Forbes, 166, 50-54.
The Horizon Report, 2005 edition. (2005). Stanford, CA: The New Media Consortium.
Torkzadeh, G., & Koufteros, X. (1994). Factorial validity of a computer self-efficacy
scale and the impact of computer training. Educational and Psychological
Measurement, 54(3), 813-821.
Tuovinen, J. E., & Sweller, J. (1999). A comparison of cognitive load associated with
discovery learning and worked examples. Journal of Educational Psychology, 91(2),
334-341.
Turley, S. (2005). Professional lives of teacher educators in an era of mandated reform
[Electronic Version]. Teacher Education Quarterly, 1. Retrieved February 4, 2006
from
http://www.findarticles.com/p/articles/mi_qa3960/is_200510/ai_n15743272.
van Zwanenberg, N., Wilkinson, L. J., & Anderson, A. (2000). Felder and Silverman's
Index of Learning Styles and Honey and Mumford's Learning Styles
Questionnaire: How do they compare and how do they predict? Educational
Psychology, 20(3), 365-381.
Vincent, A., & Ross, D. (2001). Learning style awareness: A basis for developing teaching
and learning strategies. Journal of Research on Technology in Education, 33(5).
Wang, L.-C. C., & Bagaka's, J. G. (2003). Understanding the dimensions of
self-exploration in Web-based learning environments. Journal of Research on
Technology in Education, 34(3), 364-373.
Wankowski, J. (1991). Success and failure at university. In K. Raaheim, J. Wankowski
& J. Radford (Eds.), Helping students to learn: Teaching, counseling, research (pp.
259-267). London: Society for Research into Higher Education & OUP.
Witkin, H. A., Lewis, H. B., Hertzman, M., Machover, K., Meissner, P. B., & Wapner, S.
(1954). Personality through perception. New York: Harper.
Zywno, M. (2003). A contribution to validation of score meaning for Felder-Soloman's Index of
Learning Styles. Paper presented at the ASEE Annual Conference.
APPENDIX A
Computer User Self-Efficacy (CUSE) Survey
APPENDIX B
Index of Learning Styles (ILS) Survey
Directions
Please provide us with your full name. Your name will be printed on the information
that is returned to you.
Full Name
For each of the 44 questions below select either "a" or "b" to indicate your answer. Please
choose only one answer for each question. If both "a" and "b" seem to apply to you,
choose the one that applies more frequently. When you have finished answering all of
the questions, please select the Submit button at the end of the form.
1. I understand something better after I
(a) try it out.
(b) think it through.
2. I would rather be considered
(a) realistic.
(b) innovative.
3. When I think about what I did yesterday, I am most likely to get
(a) a picture.
(b) words.
4. I tend to
(a) understand details of a subject but may be fuzzy about its overall structure.
(b) understand the overall structure but may be fuzzy about details.
5. When I am learning something new, it helps me to
(a) talk about it.
(b) think about it.
6. If I were a teacher, I would rather teach a course
(a) that deals with facts and real life situations.
(b) that deals with ideas and theories.
7. I prefer to get new information in
(a) pictures, diagrams, graphs, or maps.
(b) written directions or verbal information.
8. Once I understand
(a) all the parts, I understand the whole thing.
(b) the whole thing, I see how the parts fit.
9. In a study group working on difficult material, I am more likely to
(a) jump in and contribute ideas.
(b) sit back and listen.
10. I find it easier
(a) to learn facts.
(b) to learn concepts.
11. In a book with lots of pictures and charts, I am likely to
(a) look over the pictures and charts carefully.
(b) focus on the written text.
12. When I solve math problems
(a) I usually work my way to the solutions one step at a time.
(b) I often just see the solutions but then have to struggle to figure out the steps to get to them.
13. In classes I have taken
(a) I have usually gotten to know many of the students.
(b) I have rarely gotten to know many of the students.
14. In reading nonfiction, I prefer
(a) something that teaches me new facts or tells me how to do something.
(b) something that gives me new ideas to think about.
15. I like teachers
(a) who put a lot of diagrams on the board.
(b) who spend a lot of time explaining.
16. When I'm analyzing a story or a novel
(a) I think of the incidents and try to put them together to figure out the themes.
(b) I just know what the themes are when I finish reading and then I have to go back and find the incidents that demonstrate them.
17. When I start a homework problem, I am more likely to
(a) start working on the solution immediately.
(b) try to fully understand the problem first.
18. I prefer the idea of
(a) certainty.
(b) theory.
19. I remember best
(a) what I see.
(b) what I hear.
20. It is more important to me that an instructor
(a) lay out the material in clear sequential steps.
(b) give me an overall picture and relate the material to other subjects.
21. I prefer to study
(a) in a study group.
(b) alone.
22. I am more likely to be considered
(a) careful about the details of my work.
(b) creative about how to do my work.
23. When I get directions to a new place, I prefer
(a) a map.
(b) written instructions.
24. I learn
(a) at a fairly regular pace. If I study hard, I'll "get it."
(b) in fits and starts. I'll be totally confused and then suddenly it all "clicks."
25. I would rather first
(a) try things out.
(b) think about how I'm going to do it.
26. When I am reading for enjoyment, I like writers to
(a) clearly say what they mean.
(b) say things in creative, interesting ways.
27. When I see a diagram or sketch in class, I am most likely to remember
(a) the picture.
(b) what the instructor said about it.
28. When considering a body of information, I am more likely to
(a) focus on details and miss the big picture.
(b) try to understand the big picture before getting into the details.
29. I more easily remember
(a) something I have done.
(b) something I have thought a lot about.
30. When I have to perform a task, I prefer to
(a) master one way of doing it.
(b) come up with new ways of doing it.
31. When someone is showing me data, I prefer
(a) charts or graphs.
(b) text summarizing the results.
32. When writing a paper, I am more likely to
(a) work on (think about or write) the beginning of the paper and progress forward.
(b) work on (think about or write) different parts of the paper and then order them.
33. When I have to work on a group project, I first want to
(a) have "group brainstorming" where everyone contributes ideas.
(b) brainstorm individually and then come together as a group to compare ideas.
34. I consider it higher praise to call someone
(a) sensible.
(b) imaginative.
35. When I meet people at a party, I am more likely to remember
(a) what they looked like.
(b) what they said about themselves.
36. When I am learning a new subject, I prefer to
(a) stay focused on that subject, learning as much about it as I can.
(b) try to make connections between that subject and related subjects.
37. I am more likely to be considered
(a) outgoing.
(b) reserved.
38. I prefer courses that emphasize
(a) concrete material (facts, data).
(b) abstract material (concepts, theories).
39. For entertainment, I would rather
(a) watch television.
(b) read a book.
40. Some teachers start their lectures with an outline of what they will cover. Such outlines are
(a) somewhat helpful to me.
(b) very helpful to me.
41. The idea of doing homework in groups, with one grade for the entire group,
(a) appeals to me.
(b) does not appeal to me.
42. When I am doing long calculations,
(a) I tend to repeat all my steps and check my work carefully.
(b) I find checking my work tiresome and have to force myself to do it.
43. I tend to picture places I have been
(a) easily and fairly accurately.
(b) with difficulty and without much detail.
44. When solving problems in a group, I would be more likely to
(a) think of the steps in the solution process.
(b) think of possible consequences or applications of the solution in a wide range of areas.
When you have finished filling out the form above, please click the Submit button
below. Your results will be returned to you. If you are not satisfied with your answers,
please click Reset to clear the form.
GLOSSARY
Active processing:
Meaningful learning occurs when learners engage in active
processing within the auditory-verbal channel and the visual-pictorial channel,
integrating them with each other and with relevant prior knowledge.
Asynchronous communications:
Ways of communicating online at different times; learner-controlled.
Auditory-verbal channel:
Part of the human memory system that processes information that
enters through the ears and is mentally represented in the form of
word sounds.
Blog (Weblog):
Blog is short for weblog. A weblog is a journal that is frequently
updated and intended for general public consumption. Blogs
generally represent the personality of the author or the Web site
(www.bytowninternet.com/glossary).
The name "blog" is a truncated form of "web log" according to
Rebecca Blood's essay "Weblogs: A History and Perspective”
(http://www.rebeccablood.net/essays/weblog_history.html).
Blog is used to refer to sites that can best be described as mini-sites or mini-directories, populated with the site owner's personal
opinions. Blogs are now popular for business use as well
(www.thewebdivision.com/glossary.html).
Browser:
Short for Web browser, a software application used to locate and
display Web pages. The two most popular browsers are Netscape
Navigator and Microsoft Internet Explorer. Both of these are
graphical browsers, which means that they can display graphics as
well as text. In addition, most modern browsers can present
multimedia information, including sound and video, though they
require plug-ins for some formats.
Cognitive learning theory:
An explanation of how people learn based on the concepts of dual
channels, limited capacity, and active learning.
Coherence principle:
People learn better from multimedia lessons when distracting
stories, graphics, and sounds are eliminated.
Computer self-efficacy:
Self-efficacy is defined as the belief in one’s ability to successfully
execute a certain course of behavior and might be considered a
significant variable for predicting individual behavior and
performance (Bandura, 1977). The suggestion made by
Bandura is that the perception that one has the capabilities to
perform a task will increase the likelihood that the task will be
completed successfully. For the purpose of this study, computer
self-efficacy will specifically relate to a person’s perceptions and
attitudes toward computers and computer technology and how
those perceptions and attitudes might affect their learning
outcomes (Cassidy & Eachus, 2002).
Contiguity principle:
People learn better when corresponding printed words and
graphics are placed close to one another on the screen or when
spoken words and graphics are presented at the same time.
Dual channel assumption:
Mayer’s assumption based upon cognitive learning theory that
humans possess two distinct channels for representing and
manipulating knowledge: a visual-pictorial channel and an
auditory-verbal channel.
Dual coding theory:
This theory assumes that there are two cognitive subsystems, one
specialized for the representation and processing of nonverbal
objects/events (i.e., imagery), and the other specialized for
dealing with language.
Flash technology:
A bandwidth-friendly, browser-independent vector-graphic
animation technology. As long as different browsers are equipped
with the necessary plug-ins, Flash animations will look the same.
With Flash, users can draw their own animations or import other
vector-based images.
Hybrid courses:
Hybrid courses are courses in which a significant portion of the
learning activities have been moved online, and time traditionally
spent in the classroom is reduced but not eliminated. The goal of
hybrid courses is to join the best features of face-to-face (F2F)
teaching with the best features of online learning to promote
active independent learning and reduce class seat time.
Imagens:
Word coined by Paivio to define a type of representational unit for
mental images.
Information delivery theory:
An explanation of how people learn based on the idea that
learners directly absorb new information presented in the
instructional environment.
Limited capacity assumption:
The assumption that each channel can process only a limited
amount of information at one time; overload occurs when, for
example, too many visual materials are presented simultaneously.
Logogens:
Word coined by Paivio to define a type of representational unit for
verbal entities.
Modality principle:
People learn more deeply from multimedia lessons when graphics
are explained by audio narration rather than onscreen text.
Multimedia:
For the purpose of this study, multimedia will be defined in
Mayer’s (2003) terminology: “the presentation of material using
both words and pictures” (p. 2), such as printed or spoken text,
and static or dynamic graphics.
Multimedia principle:
People learn more deeply from words and graphics than from
words alone.
Online learning environments:
Online learning environments take many forms. What
distinguishes them from traditional learning environments is that
they place more emphasis on learning than on teaching and are
more learner-centered than traditional teacher-centered
classrooms.
Personalization principle:
People learn more deeply from multimedia lessons when the
speaker uses conversational style rather than formal style.
Podcast:
Podcasting, a combination of Apple's "iPod" and "broadcasting,” is a
method of publishing files to the Internet, allowing users to
subscribe to a feed and receive new files automatically by
subscription, usually at no cost. It first became popular in late
2004, used largely for audio files
(en.wikipedia.org/wiki/Podcast).
Plug-in:
A hardware or software module that adds a specific feature or
service to a larger system. The idea is that the new component
simply plugs in to the existing system. For example, there are a
number of plug-ins for the Netscape Navigator browser that
enable it to display different types of audio or video files.
RSS (Rich Site Summary or Really Simple Syndication):
RDF Site Summary, or Rich Site Summary, or Really Simple
Syndication: A lightweight XML format for distributing news
headlines and other content on the Web
(www.jisc.ac.uk/index.cfm).
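To make the format concrete, the sketch below shows a
minimal RSS 2.0 feed read with Python's standard-library
XML parser; the feed contents (channel title, item, and
URLs) are invented for illustration.

```python
import xml.etree.ElementTree as ET

# A minimal, hypothetical RSS 2.0 feed: one channel with one item.
feed = """<?xml version="1.0"?>
<rss version="2.0">
  <channel>
    <title>Course News</title>
    <link>http://example.edu/news</link>
    <description>Announcements for an online course</description>
    <item>
      <title>Week 3 podcast posted</title>
      <link>http://example.edu/news/week3</link>
    </item>
  </channel>
</rss>"""

# Parse the feed and collect the headline of each item,
# which is what a feed aggregator would display.
root = ET.fromstring(feed)
headlines = [item.findtext("title") for item in root.iter("item")]
print(headlines)  # ['Week 3 podcast posted']
```

Because the headlines travel as structured XML rather than
formatted pages, any subscribing aggregator can present them
however it chooses.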
Redundancy principle:
People learn more deeply from a multimedia lesson when
graphics are explained by audio narration alone rather than audio
narration and onscreen text.
Synchronous communications:
Ways of communicating online at the same time; instructor-controlled.
Transactional distance theory:
Theory explained by Moore (1972, p. 76) as “the family of
instructional methods in which the teaching behaviors are
executed apart from the learning behaviors, including those that
in contiguous teaching would be performed in the learners’
presence, so that communication between the learner and the teacher must be
facilitated by print, electronic, mechanical, or other devices.”
Three key elements define every online learning environment:
dialogue, structure, and learner autonomy. Dialogue refers to the
extent to which teachers and learners interact with each other,
structure refers to the responsiveness of instruction to a learner’s
needs, and learner autonomy corresponds to the extent to which
learners make decisions regarding their own learning and
construct their own knowledge.
Visual-pictorial channel:
Part of the human memory system that processes information
received through the eyes and mentally represents this in pictorial
form.
Web-based instruction:
Web-based instruction (sometimes called e-learning) is anywhere,
any-time instruction delivered over the Internet or a corporate
intranet to browser-equipped learners. There are two primary
models of Web-based instruction: synchronous (instructor-facilitated, same time) and asynchronous (self-directed, self-paced,
anytime). Instruction can be delivered by a combination of static
methods (learning portals, web pages, screen tutorials, streaming
audio/video) and interactive methods (threaded discussions,
chats, and online presentations).
Wiki:
A website or similar online resource which allows users to add
and edit content collectively (www.parliament.vic.gov.au/sarc/EDemocracy/Final_Report/Glossary.htm).
A collection of hypertext pages, each of which can be visited
and edited by anyone. “Wiki wiki” means "rapidly" in the
Hawaiian language
(www.cpsr-peru.org/english_version/privacy_ngo/part4).
XML (Extensible Markup Language):
A flexible way to create common information formats and share
both the format and the data on the World Wide Web, intranets,
and elsewhere. XML is a formal recommendation from the World
Wide Web Consortium (W3C) similar to the language of today's
Web pages, the Hypertext Markup Language (HTML)
(www.netproject.com/docs/migoss/v1.0/glossary.html).
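As an illustration of how XML lets authors define their own tags
for a shared format, the short Python sketch below parses a
hypothetical XML document; the tag names and content are
invented for this example.

```python
import xml.etree.ElementTree as ET

# A hypothetical XML document using author-defined tags. XML fixes
# only the syntax (properly nested, matched tags); the vocabulary
# is chosen by whoever defines the format.
doc = """<glossary>
  <entry term="Wiki">
    <definition>A website whose pages anyone may edit.</definition>
  </entry>
</glossary>"""

# Any party that knows the agreed tag names can read the data back.
root = ET.fromstring(doc)
entry = root.find("entry")
print(entry.get("term"))         # Wiki
print(entry.findtext("definition"))
```

This separation of agreed-upon structure from presentation is what
allows XML formats such as RSS to be shared across the Web,
intranets, and other systems.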