
University of Iowa
Iowa Research Online
Theses and Dissertations
Summer 2011
Investigating effects of computer-based grammar
tutorials
Anna Kolesnikova
University of Iowa
Copyright 2011 Anna Kolesnikova
This dissertation is available at Iowa Research Online: http://ir.uiowa.edu/etd/1156
Recommended Citation
Kolesnikova, Anna. "Investigating effects of computer-based grammar tutorials." PhD (Doctor of Philosophy) thesis, University of Iowa, 2011. http://ir.uiowa.edu/etd/1156.
INVESTIGATING EFFECTS OF COMPUTER-BASED GRAMMAR TUTORIALS
by
Anna Kolesnikova
An Abstract
Of a thesis submitted in partial fulfillment
of the requirements for the Doctor of
Philosophy degree in Second Language Acquisition
in the Graduate College of
The University of Iowa
July 2011
Thesis Supervisor: Associate Professor Judith Liskin-Gasparro
ABSTRACT
This dissertation study examined a broad question of whether computer-based
grammar tutorials are effective and welcome tools to review grammar for language
learners by investigating effects of three different modes of such tutorials on learners’
knowledge and satisfaction. For this study, I developed experimental tutorials in three
different modes (a static text with a voice-over narration, an animated text with a voice-over narration, and a recording of a real teacher) for two target structures of German grammar (regular verb conjugation and separable-prefix verbs).
More than 100 Elementary German students at two public Midwestern universities
participated in different stages of the study. The participants represented a mostly
homogeneous group with characteristics that are common for college-level learners.
This study comprised two parallel experiments that employed identical
methods but focused on two different target structures. Both experiments examined
the effect of the three study tutorials on learners’ knowledge and satisfaction, but
Experiment 1 focused on the regular verb conjugation, whereas Experiment 2 focused on
the separable-prefix verbs. For each experiment, the participants completed a pretest,
worked with the assigned tutorial mode, completed a posttest, and filled out a number of
questionnaires.
The results of the analysis demonstrated that the study tutorials helped learners to
significantly improve their knowledge of grammar; however, the mode of the tutorial did
not make a difference. Likewise, all modes of tutorial received similar satisfaction
ratings; however, additional qualitative analysis suggested that a considerable number of
the participants preferred the animated mode.
The findings of the study demonstrate that computer-based grammar tutorials can
be effective and welcome tools to review grammar for language learners. Moreover,
tutorials of this type can be a viable method of achieving the desired balance between
form- and meaning-focused activities in language classrooms. Also, such tutorials appeal
to learners because they support more individualized learning.
Abstract Approved: ____________________________________
Thesis Supervisor
____________________________________
Title and Department
____________________________________
Date
INVESTIGATING EFFECTS OF COMPUTER-BASED GRAMMAR TUTORIALS
by
Anna Kolesnikova
A thesis submitted in partial fulfillment
of the requirements for the Doctor of
Philosophy degree in Second Language Acquisition
in the Graduate College of
The University of Iowa
July 2011
Thesis Supervisor: Associate Professor Judith Liskin-Gasparro
Copyright by
ANNA KOLESNIKOVA
2011
All Rights Reserved
Graduate College
The University of Iowa
Iowa City, Iowa
CERTIFICATE OF APPROVAL
_______________________
PH.D. THESIS
_______________
This is to certify that the Ph.D. thesis of
Anna Kolesnikova
has been approved by the Examining Committee
for the thesis requirement for the Doctor of Philosophy
degree in Second Language Acquisition at the July 2011 graduation.
Thesis Committee: ___________________________________
Judith Liskin-Gasparro, Thesis Supervisor
___________________________________
Stephen Alessi
___________________________________
Kathy Schuh
___________________________________
Bruce Spencer
___________________________________
James Maxey
___________________________________
George Woodworth
To my parents, Nadezda and Ilya.
ACKNOWLEDGMENTS
Working on my dissertation has been an exciting journey and I would like to
thank everyone who contributed to this research project. I owe my deepest gratitude to
my dissertation advisor, Dr. Judy Liskin-Gasparro, for her constant support and
encouragement of my project from its initial to its final stages. I am also heartily
thankful to all my committee members—Drs. Steve Alessi, Jim Maxey, Kathy Schuh,
Bruce Spencer, and George Woodworth—for their timely cooperation and insightful
feedback that guided my work on this project.
This project would not have been possible without the help of my long-time friend,
Fatima Baig, who bravely agreed to be the instructor in my experimental tutorials and
patiently assisted me in creating them all the way through.
I am also very grateful to Professors Bruce Spencer and John Balong, who lent
their time as content experts, and to Regina Range for her help as a second rater for this
study. Their timely assistance and expert opinions were invaluable for assuring the
validity of materials and analysis.
I am very grateful to all instructors, teaching assistants, and to over 200 German
students who participated in various parts of my research throughout the years. Also, I
wish to thank Rebecca Bohde for assisting me with the technical issues in the preparation
for the data collection.
I would like to thank the Foreign Language Acquisition Research and Education
(FLARE) program, and especially Dr. Roumyana Slabakova, for providing financial
support of my project. I would like to thank all professors in FLARE for sparking my
interest in research and teaching me by example what good research practices are.
Lastly, my deepest appreciation goes to my husband Jim for his patience and
support of me on my dissertation journey.
ABSTRACT
This dissertation study examined a broad question of whether computer-based
grammar tutorials are effective and welcome tools to review grammar for language
learners by investigating effects of three different modes of such tutorials on learners’
knowledge and satisfaction. For this study, I developed experimental tutorials in three
different modes (a static text with a voice-over narration, an animated text with a voice-over narration, and a recording of a real teacher) for two target structures of German grammar (regular verb conjugation and separable-prefix verbs).
More than 100 Elementary German students at two public Midwestern universities
participated in different stages of the study. The participants represented a mostly
homogeneous group with characteristics that are common for college-level learners.
This study comprised two parallel experiments that employed identical
methods but focused on two different target structures. Both experiments examined
the effect of the three study tutorials on learners’ knowledge and satisfaction, but
Experiment 1 focused on the regular verb conjugation, whereas Experiment 2 focused on
the separable-prefix verbs. For each experiment, the participants completed a pretest,
worked with the assigned tutorial mode, completed a posttest, and filled out a number of
questionnaires.
The results of the analysis demonstrated that the study tutorials helped learners to
significantly improve their knowledge of grammar; however, the mode of the tutorial did
not make a difference. Likewise, all modes of tutorial received similar satisfaction
ratings; however, additional qualitative analysis suggested that a considerable number of
the participants preferred the animated mode.
The findings of the study demonstrate that computer-based grammar tutorials can
be effective and welcome tools to review grammar for language learners. Moreover,
tutorials of this type can be a viable method of achieving the desired balance between
form- and meaning-focused activities in language classrooms. Also, such tutorials appeal
to learners because they support more individualized learning.
TABLE OF CONTENTS
LIST OF TABLES ............................................................................................................. ix
LIST OF FIGURES .......................................................................................................... xii
CHAPTER 1 INTRODUCTION .........................................................................................1
    Purpose of the study and research questions .......... 3
    Research design .......... 5
        Overview of the study procedures .......... 7
    Need for the study .......... 13
        Theoretical need .......... 13
        Practical need .......... 16
    Overview of the chapters .......... 18
    Glossary of terms .......... 19
        General terminology .......... 20
        Study-specific terminology .......... 20
CHAPTER 2 LITERATURE REVIEW .......... 21
    Views on formal grammar instruction .......... 22
        Theoretical perspective .......... 22
        Perspectives from teaching and research .......... 25
        Learners’ perspective .......... 27
        Summary .......... 29
    Methods in explicit grammar instruction .......... 30
        Form-focused instruction .......... 30
        Isolated grammar instruction .......... 31
        Deductive grammar instruction .......... 34
        Summary .......... 37
    Grammar and CALL .......... 37
        Computers and grammar instruction .......... 37
        Computer-based grammar tutorials and teachers .......... 45
        Computer-based grammar tutorials and students .......... 50
        Summary .......... 52
    Tutorials and instructional design .......... 53
        Cognitive theory of multimedia learning .......... 53
        Applicable design considerations .......... 59
        Ways of enhancing an instructional presentation .......... 67
        Summary .......... 75
    Previous research .......... 75
        Moreno et al. (2001) .......... 76
        Caplan (2002) .......... 76
        Roche and Scheller (2008) .......... 77
        Kolesnikova (2011) .......... 78
        Summary .......... 81
    Summary .......... 81
CHAPTER 3 METHODOLOGY .......... 84
    Research questions .......... 84
    Pilot studies prior to the main study .......... 85
        Pilot I .......... 85
        Pilot II .......... 86
    Participants’ characteristics and sampling procedures .......... 87
        Subject population 1 .......... 87
        Subject population 2 .......... 87
        Sampling procedures .......... 88
        Demographic characteristics .......... 90
    Research design .......... 90
        Target structures .......... 90
        Instruments .......... 94
    Study procedures .......... 113
        Randomization .......... 114
        The settings for the study procedures .......... 116
        Description of the study procedures .......... 117
        Participant’s time commitment .......... 121
    Assuring control over the variables .......... 122
    Working with the collected data .......... 123
        Test results .......... 123
        Questionnaire data .......... 125
        Interview data .......... 126
    Summary .......... 127
CHAPTER 4 ANALYSIS .......... 128
    Research question 1: Effects on learners’ knowledge .......... 128
        Exclusion criteria .......... 129
        Data .......... 130
        Analysis .......... 131
        Experiment 1 (VC) .......... 132
        Experiment 2 (SPV) .......... 136
        Summary for research question 1 .......... 143
    Research question 2: Effects on learners’ satisfaction .......... 143
        Exclusion criteria .......... 144
        Data .......... 145
        Analysis .......... 145
        Experiment 1 (VC) .......... 146
        Experiment 2 (SPV) .......... 152
        Analysis across the experiments .......... 158
        Qualitative analysis .......... 167
        Summary for research question 2 .......... 190
    Summary .......... 191
CHAPTER 5 DISCUSSION .......... 192
    Findings for research question 1 .......... 192
    Findings for research question 2 .......... 193
    Limitations of the study .......... 197
    Theoretical value of this study .......... 198
    Practical value of this study .......... 201
    Suggestions for future research .......... 202
    Conclusion .......... 204
REFERENCES .......... 206
APPENDIX A STUDY INFORMATION SHEET .......... 220
APPENDIX B SCRIPT FOR THE VC TUTORIALS .......... 221
APPENDIX C SCRIPT FOR THE SPV TUTORIALS .......... 224
APPENDIX D ONLINE INSTRUCTIONS FOR THE INSTRUCTORS .......... 228
APPENDIX E SURVEY FOR THE INSTRUCTORS .......... 229
APPENDIX F VC TEST .......... 231
APPENDIX G SPV TEST .......... 233
APPENDIX H DESIGN QUESTIONNAIRE .......... 235
APPENDIX I SATISFACTION QUESTIONNAIRE IN EXPERIMENT 1 (VC) .......... 236
APPENDIX J SATISFACTION QUESTIONNAIRE IN EXPERIMENT 2 (SPV) .......... 239
APPENDIX K DEMOGRAPHIC QUESTIONNAIRE .......... 242
APPENDIX L INTERVIEW QUESTIONS .......... 243
APPENDIX M INSTRUCTIONS FOR THE PARTICIPANTS .......... 244
APPENDIX N SCORING CRITERIA FOR THE TARGET STRUCTURES .......... 245
APPENDIX O OUTPUT FOR THE STATISTICAL ANALYSES FOR RQ 1 .......... 250
    Results for pretest-posttest comparison (VC) .......... 250
    Results for ratings of test difficulty (VC) .......... 251
    Results for pretest-posttest comparison (SPV) .......... 252
    Results for pretest-posttest comparison (SPV, partial) .......... 253
    Results for ratings of test difficulty (SPV) .......... 254
APPENDIX P RESULTS FOR RQ 1 WITH MISSING VALUES ADDED .......... 255
    Results for pretest-posttest comparison (VC) .......... 255
    Results for ratings of test difficulty (VC) .......... 256
    Results for pretest-posttest comparison (SPV) .......... 257
    Results for ratings of test difficulty (SPV) .......... 258
APPENDIX Q RESULTS FOR RQ 2 WITH EXCLUSION CRITERIA APPLIED .......... 260
LIST OF TABLES
Table 1. Results of testing with intermediate learners of German .......... 12
Table 2. History of changes in grammar teaching .......... 35
Table 3. Number of participants in different stages of the study .......... 89
Table 4. Demographic characteristics of the participants .......... 91
Table 5. Participants’ evaluations of the quality of rule explanations .......... 97
Table 6. Duration of the study tutorials .......... 102
Table 7. Development and production time for the study tutorials .......... 103
Table 8. Ratings of the quality of tutorials based on Likert scale items .......... 105
Table 9. Ratings of the quality of tutorial design based on multiple-choice items .......... 106
Table 10. Test completion times during Pilot II .......... 108
Table 11. Results of reliability testing of the study tests .......... 109
Table 12. Number of respondents to the design questionnaire .......... 111
Table 13. Time commitment for the study procedures .......... 121
Table 14. Information about interview participants .......... 127
Table 15. Number of participants included in the analysis for RQ 1 .......... 130
Table 16. Descriptive statistics for the RQ 1 data in Experiment 1 (VC) .......... 132
Table 17. Descriptive statistics for the RQ 1 data in Experiment 2 (SPV) .......... 138
Table 18. Data for the satisfaction ratings (VC) .......... 147
Table 19. Data for the satisfaction ratings (SPV) .......... 153
Table 20. Responses about tutorials’ availability and use (all groups) .......... 159
Table 21. Satisfaction ratings for static tutorials across the experiments .......... 162
Table 22. Satisfaction ratings for animated tutorials across the experiments .......... 162
Table 23. Satisfaction ratings for the tutorials with a teacher recording across experiments .......... 163
Table 24. Preferences for tutorial modes by experiment .......... 165
Table 25. Preferences for tutorial modes across experiments .......... 166
Table O1. Multivariate tests for test scores (VC) .......... 250
Table O2. Tests of between-subjects effects for test scores (VC) .......... 250
Table O3. Multivariate tests for test difficulty ratings (VC) .......... 251
Table O4. Tests of between-subjects effects for test difficulty ratings (VC) .......... 251
Table O5. Multivariate tests for test scores (SPV) .......... 252
Table O6. Tests of between-subjects effects for test scores (SPV) .......... 252
Table O7. Multivariate tests for test scores (SPV, partial scoring) .......... 253
Table O8. Tests of between-subjects effects for test scores (SPV) .......... 253
Table O9. Multivariate tests for test difficulty ratings (SPV) .......... 254
Table O10. Tests of between-subjects effects for test difficulty ratings (SPV) .......... 254
Table P1. Descriptive statistics for test scores (VC, missing values added) .......... 255
Table P2. Multivariate tests for test scores (VC, missing values added) .......... 255
Table P3. Tests of between-subjects effects for test scores (VC, missing values added) .......... 256
Table P4. Descriptive statistics for test difficulty ratings (VC, missing values added) .......... 256
Table P5. Tests of between-subjects effects for test difficulty ratings (VC, missing values added) .......... 256
Table P6. Multivariate tests for test difficulty ratings (VC, missing values added) .......... 257
Table P7. Descriptive statistics for test scores (SPV, missing values added) .......... 257
Table P8. Multivariate tests for test scores (SPV, missing values added) .......... 258
Table P9. Tests of between-subjects effects for test scores (SPV, missing values added) .......... 258
Table P10. Descriptive statistics for test difficulty ratings (SPV, missing values added) .......... 258
Table P11. Tests of between-subjects effects for test difficulty ratings (SPV, missing values added) .......... 259
Table P12. Multivariate tests for ratings of test difficulty (SPV, missing values added) .......... 259
Table Q1. Descriptive statistics for satisfaction ratings in Experiment 1 (VC) .......... 260
Table Q2. Descriptive statistics for satisfaction ratings in Experiment 2 (SPV) .......... 261
Table Q3. Frequency counts for general attitude ratings (across experiments) .......... 262
Table Q4. ANOVA results for general attitude ratings (across experiments) .......... 262
Table Q5. Ratings for the static tutorials across experiments .......... 263
Table Q6. Ratings for the animated tutorials across experiments .......... 263
Table Q7. Ratings for the tutorials with a recording of a teacher across experiments .......... 264
LIST OF FIGURES
Figure 1. Examples of tutorials found online .......... 17
Figure 2. Mayer’s cognitive theory of multimedia learning .......... 56
Figure 3. Screenshots of the study tutorials .......... 101
Figure 4. Example of test items on the VC test .......... 107
Figure 5. Example of test items on the SPV test .......... 108
Figure 6. Randomization scheme .......... 115
Figure 7. Time gaps between study procedures .......... 119
Figure 8. Results for pretest–posttest comparison (VC) .......... 134
Figure 9. Results for ratings of test difficulty (VC) .......... 135
Figure 10. Results for ratings of self-perceived knowledge improvement (VC) .......... 136
Figure 11. Results for pretest–posttest comparison (SPV, correct/incorrect scoring) .......... 139
Figure 12. Results for pretest–posttest comparison with partial scoring (SPV) .......... 140
Figure 13. Results for ratings of test difficulty (SPV) .......... 142
Figure 14. Results for ratings of self-perceived knowledge improvement (SPV) .......... 142
Figure 15. Ratings of tutorial: overall satisfaction (VC) .......... 149
Figure 16. Ratings of tutorial: helpful (VC) .......... 150
Figure 17. Ratings of tutorial: entertaining (VC) .......... 150
Figure 18. Ratings of tutorial: engaging (VC) .......... 151
Figure 19. Ratings of tutorial: value as a learning tool (VC) .......... 151
Figure 20. Ratings of tutorial: overall satisfaction (SPV) .......... 155
Figure 21. Ratings of tutorial: helpful (SPV) .......... 155
Figure 22. Ratings of tutorial: entertaining (SPV) .......... 156
Figure 23. Ratings of tutorial: engaging (SPV) .......... 156
Figure 24. Ratings of tutorial: value as a learning tool (SPV) .......... 157
Figure 25. General opinions about the need for tutorials .......... 160
Figure 26. General opinions about the use of tutorials for learning .......... 160
Figure 27. Satisfaction ratings across the experiments .......... 164
CHAPTER 1
INTRODUCTION
My interest in computer-based grammar tutorials started from a feeling that is
familiar to every teacher. After class is over and students start packing their bags, I
always wish there were a way for them to take all of my explanations about grammar
with them as well. A computer-based grammar tutorial seemed like the perfect tool for
just that task, namely something that could preserve my explanations in an appealing
visual form and be available outside of class. It was this idea that motivated me to create
the tutorials and make them the focus of my research.
In the regular course of events, one forms a research question, conducts an
experiment, and then receives an answer. In my case, however, I got a glimpse of a
possible outcome of my study early when, one day before the main data collection, I was
in the computer lab making sure that my tutorials worked properly. Next to me, a student
was watching a Spanish grammar tutorial on YouTube.com. I impatiently waited for his
tutorial to be over to ask him whether it was an assignment for his Spanish class. It turned
out that he liked looking up these tutorials because explanations in English helped him
understand grammar better. Encouraged, I told this young man that I was working on an
experiment to explore whether such tutorials help students learn grammar in German.
The student seemed interested. I started saying “I used PowerPoint to develop several
tutorials…” when his look changed. “PowerPoints have to go!” he interrupted, and he
turned away, marking an unambiguous end to our conversation.
Hours away from my data collection, this short encounter both reassured and
disheartened me. I was happy to learn that students might find computer-based grammar
tutorials useful, yet it was worrisome to meet such stern opposition to my choice of
tutorial format. On the whole, though, this conversation made me more enthusiastic about
my research topic than before. I could hardly wait to start my data collection the next day
to find answers to the questions that were whirling in my mind: Do learners like
computer-based grammar tutorials? Do these tutorials work? If they do, does their effect
depend on the mode of presentation? Would students prefer one mode over another? If
they have some preferences, what are their reasons? Would they use the tutorials if they
had access to them?
These questions cover the main research problem of the present investigation: Are
computer-based grammar tutorials effective and welcome tools to review grammar for
language learners? My interest in this topic comes from my long-held belief in the value
of explicit grammar instruction for language learning. Certainly, the idea of having
tutorials with explicit grammar explanations may seem outdated in our era of
communicative language teaching. But opinions about grammar instruction have always
had a prominent place in any language teaching philosophy, no matter whether the
bearers of these philosophies support or oppose grammar teaching. The search for the
most effective way to learn a foreign language has been carried on for centuries and
various ideas have been at the forefront of our profession at different points in time. The
twentieth century saw some of the most vibrant methodological struggles when rigid
classical methods that considered grammar a cornerstone of learning a language were
overturned by new approaches and methods, including various versions of the most
influential approach, communicative language teaching. Still, despite a long and diverse
search for the best method to learn a language, today we face many of the same
theoretical and methodological dichotomies as decades before: naturalistic versus
instructed language learning, fluency versus accuracy, explicit versus implicit teaching
methods, learner- versus teacher-centered classrooms, computer-assisted versus
traditional face-to-face classrooms, and so forth.
This investigation does not attempt to resolve any of these dichotomies directly or
add weight to one or another theoretical position. Instead, my goal is to look at computer-
based grammar tutorials as a middle ground for the controversies mentioned above. Like
many others (e.g., Beaudoin, 2004; Dodigovic, 2005; N. Ellis, 2001; Frederiksen, Donin,
& Decary, 1995; Garrett, 1987, 1995; Kaplan & Holland, 1995; Nutta, 1998, 2004;
Pusack & Otto, 1984; Skinner, 1954), I believe that if computer-based grammar tutorials
were available to learners outside of class, this would free up precious time to focus on
communication in class without compromising the systematic learning of grammar.
Purpose of the study and research questions
The purpose of this study was to examine the effects of three different modes of
computer-based tutorials on learning and on learner satisfaction. Three different types of
computer-based experimental German grammar tutorials were the focus of this
investigation, which aimed at examining their effects on learning and at explaining what
makes these modes both beneficial and favored by the learners:
• Tutorial with static text and a voice-over narration (ST);
• Tutorial with animated text and a voice-over narration (AN); and
• Tutorial with a recording of a real teacher (RT).
Specifically, the study had two objectives: to compare the effects of the three
different modes on learning based on the results of learners’ performance on the grammar
tests and to inquire into learners’ satisfaction with these modes of presentation by
detailing the factors that influence the levels of satisfaction. To accomplish the first
objective, I examined the comparisons between the scores on the pre- and post-tests. For
the second objective, I analyzed participants’ satisfaction with the tutorials based on their
responses to questionnaires and during interviews.
The following research questions addressed the two-fold purpose of the present
investigation: the effects in terms of learning outcomes and the levels of satisfaction from
different modes of the tutorials.
Research question 1: Do the three modes of computer-based grammar tutorials
(ST, AN, and RT) produce different effects on learners’ knowledge of the target
structures (regular verb conjugation and separable-prefix verbs in German)?
This research question focuses on the effect of the study tutorials on learning
outcomes. I look at learning outcomes from multiple perspectives. The answer to this
question is based, first of all, on a comparison of the participants’ scores on the pretest to
those on the posttest for each target structure. Furthermore, I examine how the
participants rate the difficulty of the pretests and the posttests. Next, I look at how the
participants rate their knowledge improvement after working with the study tutorials. I
believe that looking at the improvement of knowledge from these perspectives provides a
more complete answer to the question about the effects of tutorials on learners’
knowledge. All data for this question are numeric and all analyses are quantitative in
nature.
Research question 2: Do learners report different satisfaction ratings following the
three modes of computer-based grammar tutorials under consideration in this study?
What factors or considerations influence the learners’ satisfaction ratings for the
tutorials?
This research question focuses on the participants’ satisfaction from working with
the tutorials and the factors that can affect these ratings. The quantitative analysis for this
research question examines the participants’ ratings of the study tutorials: the
participants’ overall satisfaction; the value of the tutorials as learning tools; how helpful,
entertaining, and engaging the tutorials were; and the participants’ preferences for one
mode of tutorial over the others. The additional qualitative analysis is based on open-ended questionnaire responses and interview data. This analysis lends further insight
into the factors or considerations that guided the participants’ satisfaction ratings.
Because the main purpose of this study was the examination of the effects of three
different modes of computer-based tutorials on learning and learner satisfaction, the
various comparisons determined the comparative design of the study. Comparative
studies are prone to criticism for convoluted methodologies and inconsistent results
(Chapelle, 2001; Levy & Stockwell, 2006; Pederson, 1987); therefore, I designed my
study to be experimental and highly controlled. All tutorials were delivered by the same
medium (a computer); all tutorials used identical content, design, and production
settings; the duration of the tutorials was almost identical; and the same teacher
performed all recordings both for voice-over narrations and video recordings. By having
an emphasis on the isolation of variables for the comparison, the methodology of this
study responded to the call for the improved design of comparative studies in language
learning (Chapelle, 2001; Levy & Stockwell, 2006) and minimized the limitations of this
study.
In short, this study was a comparative study that gathered and analyzed
experimental data to answer two research questions: what effects three different modes
of computer-based grammar tutorials have (1) on learners' knowledge and (2) on
learners' satisfaction.
Research design
This study employed a mixed methods design that combined quantitative data
(pre- to post-test comparisons and closed-ended survey questions) and qualitative data
(open-ended survey questions and interviews) to investigate the effects of three different
modes of computer-based tutorials on learning and on learners’ satisfaction.
The mixed methods design seemed to suit my research agenda the most because it
is generally believed that “there is more insight to be gained from the combination of
both qualitative and quantitative research than either form by itself” (Creswell, 2009, p.
203). Moreover, the pragmatic worldview, which is a philosophical underpinning for
mixed methods studies (Creswell, 2009), fit this particular research well because it
suggested drawing on theoretical assumptions from various areas of research to find
answers to the research problems. Due to the shortage of research on computer-based
grammar tutorials that could help predict the outcomes in my study, I found it necessary
to draw on different areas of research, which made the pragmatic orientation of research
my best approach.
In terms of the manner of qualitative and quantitative data collection, this
investigation adopted a concurrent explanatory strategy. According to Creswell (2009), the
primary method of this strategy guides the project and a secondary database provides a
supporting role in the procedures. In my investigation, the quantitative data were
prioritized, whereas the qualitative data were used in a supportive role to gain insight into
the opinions and experiences of the participants in this study.
The timing of the data collection was primarily concurrent because the
quantitative data (test performance and closed-ended survey responses) and qualitative data
(open-ended qualitative responses) were collected simultaneously. However, there was a
subsequent step in the data collection process when additional qualitative data were
collected from a subgroup of participants during optional interviews. This sequence of
data collection does not, however, change the overall concurrent strategy because the
non-concurrent timing of the interviews was determined by reasons external to the
research methodology (time available for the interviews in my schedule as well as the
participants’ schedules).
To summarize, the study employed a mixed methods design with concurrent
collection of quantitative and qualitative data. The quantitative data were prioritized to
provide answers to the research questions, whereas the qualitative data were used to
explain the results beyond their mere numerical manifestation.
Overview of the study procedures
To examine the effects of three different modes of computer-based tutorials on
learning and learners’ satisfaction, the study procedures consisted of treatments, in
the form of working with the three experimental tutorials, and measurements, in the form of
pre- and post-testing and questionnaires. As mentioned above, the tutorials for each target
structure were identical in design, content, and production settings to enable valid
comparisons and isolate the effect of enhancements, such as animations and a real
teacher.
There were two parallel experiments in this study:
• Experiment 1 with the three tutorials for the regular verb conjugation (VC) as treatments;
• Experiment 2 with the three tutorials for the separable-prefix verbs (SPV) as treatments.
Experiments 1 and 2 were separate experiments; however, they followed the same
design, included the same procedures, and involved subjects from the same student
populations. Aside from a different target structure, the mode of the tutorial was the only
difference between the two experiments, allowing every participant to work with two tutorial
modes out of three possible ones during the two experiments. All design considerations
and procedures described below are explained in greater detail in Chapter 3.
Subject populations
The participants for this study were recruited in fall 2010 from two subject
populations: 1) students enrolled in the first semester of German at a large public
university in the Midwest that served as the main site for the data collection and that is
referred to as Midwest University (MU) in this thesis; and 2) students enrolled in the first
semester of German at a smaller public university in the Midwest that was an additional
site for data collection and is referred to as Another Midwest University (AMU) in this
thesis. Both subject populations represented a highly homogeneous group of learners,
which made it possible to treat them as a unified subject population for the analysis of the
results. There were no eligibility requirements to participate in the study, and a total of
more than 100 students participated at both institutions.
Study procedures
The study procedures for both experiments spanned a period of three weeks with
a total of five meetings for the main study procedures and one additional meeting for the
subjects who volunteered to be interviewed. All main study procedures took place during
the regular class time in agreement with the course supervisors at each institution.
The first meeting served as an introduction in which I explained the study to the
participants using the information sheet approved by the Institutional Review Board at
the university that was the main site for this study (see Appendix A). During the second
and third meetings, the subjects filled out pretests that checked their knowledge of each
target structure (regular verb conjugation in Experiment 1 and separable-prefix verbs in
Experiment 2). These two meetings were in the participants’ regular classrooms. During
the fourth and fifth meetings, the subjects worked with the tutorials and completed
posttests for each target structure. After that, they filled out three questionnaires (a
demographic questionnaire, a design questionnaire, and a satisfaction questionnaire).
These two meetings were in a computer lab. During the optional additional meeting, I
conducted one-on-one interviews with the subjects who volunteered to share their
experiences in the study in a recorded conversation. These subjects were compensated for
their time ($10 for the interview) and 17 students volunteered to be interviewed.
Instruments
To collect data for my research, I created a number of instruments tailored to the
needs of this experiment. First, I developed six computer-based grammar tutorials on two
structures in German grammar: regular verb conjugation (VC) and separable-prefix verbs
(SPV). For each target structure, three versions of the tutorials represented three different
modes of presentation: static text with a voice-over narration (ST), animated text with a
voice-over narration (AN), and recording of a real teacher (RT). Furthermore, I created
the pretests and posttests that checked knowledge of the target structures based on the
grammar rules explained in the tutorials. Also, I composed a number of questionnaires
for this experiment. The demographic questionnaire collected background information.
Because I developed the tutorials myself, I used the design questionnaire to determine
whether subjects were satisfied with the design and production of the tutorials. Finally,
the satisfaction questionnaire inquired into learners’ experiences from working with the
tutorials.
Rationale for target structures
The regular verb conjugation (VC) and the separable-prefix verbs (SPV) were the
target structures for this investigation for several reasons. First, both topics belong to the
core of German grammar. Learners at any institution receive instruction in these topics at
the beginning of their German language study.
Second, both regular verb conjugation and separable-prefix verbs can be
considered purely grammatical topics because the rules that apply to them do not depend
on the lexical meaning of the words or the context of the sentence. Furthermore, these
two topics belong to two different aspects of grammar: verb conjugation is an issue of
morphology, whereas the rules for the position of the separable-prefix verbs in the
sentence belong to syntax.
It was also important for my investigation that these topics were presented in the
same chapter of the textbooks used at the participating institutions. All learners were
introduced to both target structures at the same time at the beginning of the semester.
Therefore, by the time of the review in this study, participants had been familiar with
them for a similar period of time.
In brief, regular verb conjugation and separable-prefix verbs in German were
appropriate for this study because they allowed for fewer intervening factors in the
experiment. For example, no influence of lexical meanings or context on the application
of the rules made the assessment of the knowledge of the rules more straightforward.
Further, the time gap between the initial introduction of the topics in class and the
review occasioned by this study was the same for both structures. These parallels were
important for drawing conclusions about the progress that was achieved during the study.
Rationale for review
The study tutorials presented a review of previously familiar grammar topics to the
learners, rather than introducing new information. My decision to provide a review of
familiar information instead of presenting a new grammatical topic was based on several
considerations.
First, review seemed to be an optimal format for instruction based on the
challenges encountered during the pilot for this study conducted in fall 2009 (the pilot is
discussed in more detail in Chapter 2 as Pilot I). In the pilot study, the tutorials replaced
the classroom instruction on the days when the target structures (regular verb conjugation
and separable-prefix verbs) were scheduled to be discussed according to the syllabus.
Although most participants did not experience any trouble working with the tutorials or
completing the posttests, a number of participants displayed high levels of stress because
they felt that the content and the test items were significantly above their level. To
respond to this criticism and to avoid a similar disruption of the data collection
procedures during the main study, I changed the orientation of the instruction from
learning something new to reviewing familiar grammar topics.
Second, presenting a grammar review seemed warranted from a teacher’s
perspective as well. As mentioned above, the pilot study procedures took place at the
beginning of the semester when the target structures were scheduled to be introduced. To
account for the limited language experience of the learners, I presented only partial
explanations of the rules, leaving out information that was too challenging for complete
beginners. One of my content experts (an experienced German language teacher) pointed
out this incompleteness of the rule explanations and suggested that the rules for each
topic should be comprehensive to be valid. Because the complete rules included
references to material introduced to the learners later in the semester, comprehensive
rule explanations were possible only if working with the tutorials was intended as a
review.
Finally, it seemed that a review of both target structures would be prudent at the
elementary level, especially because these topics remain problematic even for advanced
students of German. I examined this assumption empirically by testing students enrolled
in Intermediate German I and Intermediate German II at the main site (MU) in spring 2010.
The students randomly received either a test for regular verb conjugation or a test for
separable-prefix verbs. For this testing, I used the previous versions of the tests that were
very similar but not identical to the ones used in this study. In addition to completing the
test tasks, the participants rated the test difficulty on a scale from “Very easy” (1) to
“Very difficult” (5). Table 1 presents the results of the testing, suggesting that even
intermediate-level students had not completely mastered these topics and found the tests
at least somewhat challenging. Neither group of intermediate-level students reached a
mean score above 78%, which suggests that a review of these target structures can be
useful even at the intermediate level of German.
Table 1. Results of testing with intermediate learners of German

                          Verb Conjugation                      Separable-Prefix Verbs
Group                     n   Avg. score  Avg.  Test difficulty  n   Avg. score  Avg.  Test difficulty
                              (out of 33) %     (out of 5)           (out of 36) %     (out of 5)
Intermediate German I     7   23.9        72%   3.33             10  22.7        64%   4.17
(n=17)
Intermediate German II    10  25.6        78%   3.20             10  25.2        70%   3.45
(n=20)
As a result of the considerations described above, the review of the familiar
grammar topics seemed a reasonable choice for the present investigation both from the
perspective of the learners, who could benefit from reviewing difficult grammar topics
without being overwhelmed with the complexity of the material, and from the perspective
of the teachers, who supported a complete explanation of a grammar topic rather than a
partial presentation of the rules.
In summary, the study procedures and instruments for data collection were the
same for the parallel experiments with regular verb conjugation (Experiment 1) and the
separable-prefix verbs (Experiment 2) as target structures. In the course of the study, the
learners completed pretests to assess their initial knowledge of the target structures,
reviewed both structures with the help of the computer-based grammar tutorials,
completed the posttests, and responded to a number of questionnaires.
Need for the study
In this section of the introduction, I outline the areas in theory and practice that
signal that more research on explicit grammar instruction, in particular with computer-based grammar tutorials, is needed. The issues mentioned in this section are further
elaborated in the subsequent chapters of this dissertation.
Theoretical need
As explained above, this study adopts a pragmatic research orientation to find an
answer to the research problem of whether computer-based grammar tutorials could be
effective and welcome tools to review grammar. The tutorials in this study represent
grammar instruction that is explicit, deductive, decontextualized, computer-based, and
enhanced with such elements as animations and a recording of a real teacher. Because
these characteristics, as well as the interplay between them, could influence the outcomes
of the study, it was problematic to find a single theoretical framework that could predict
the results. Therefore, I examine several areas of research to establish the theoretical
basis for my study.
The grammar instruction presented in the study tutorials falls within the realm of
explicit grammar instruction. At first glance, the overall orientation of current research
supports explicit grammar instruction. Thus, research on educational gains from explicit
grammar instruction consistently demonstrates that various kinds of form-focused
activities can improve accuracy and speed up the language acquisition processes (e.g.,
Chaudron, 1988; R. Ellis, 2001, 2002a, 2002b; Hinkel & Fotos, 2002; Larsen-Freeman,
2003; Leow, 2007; Nassaji & Fotos, 2004; Norris & Ortega, 2001; Richards, 2002;
Robinson, 1996; Rosa & Leow, 2004). However, most research of this kind deals with
the type of explicit grammar instruction that is integrated within a communicative
language teaching approach with a primary focus on meaning. Thus, the theoretical
support for explicit grammar instruction cannot automatically extend to methods of
explicit grammar teaching that are taken out of the context of communication, as was the
case in this study.
Another important feature of the tutorials that is rarely discussed in the literature
is the deductive character of rule explanations. Comparative studies that contrasted
deductive approaches (when the rules are presented to the learners) with inductive ones
(when learners generate rules from examples) were popular in the 1970s and 1980s. At that time, studies
favored the deductive manner of grammar presentation (Krashen & Seliger, 1975; Levin,
1972; Seliger, 1975). After the advent of communicative language teaching approaches,
deductive grammar teaching was regarded as outdated. Recently, Haight, Herron, and
Cole (2007) noted that although the issue of whether grammar should be taught
deductively or inductively is still one of the unanswered questions on the subject of
effective language learning, there are few studies conducted on this topic and those that
exist demonstrate mixed results.
As noted previously, the supporters of explicit grammar instruction agree that
grammar activities should be integrated without interfering with the overall
communicative orientation of the language teaching. However, some researchers note
that there is still no consensus on how exactly grammar activities should be integrated
within communicative classrooms (Katz & Blyth, 2009; Williams, 1995). Typically, the
integration of grammar instruction happens through incidental focus-on-form activities
that are provided to learners when a communication breakdown occurs. These activities
range from a short recast of the erroneous utterance to a lengthier metalinguistic
explanation. Recently, though, the whole concept of integrating grammar within the
overarching focus on meaning has been questioned for various reasons. For instance,
VanPatten (1989, 2002) argued that a dual focus both on meaning and form is difficult
for most learners. Other researchers pointed out some disadvantages of the incidental
focus on form: It does not provide systematic explanations (R. Ellis, 2002; Lyster, 2007),
and it often creates misunderstandings in the classroom discourse (Toth, 2004). But
although voices are being raised for exploring ways to overcome these disadvantages by
the means of more isolated decontextualized grammar explanations (DeKeyser, 1998; R.
Ellis, 2002; Spada & Lightbown, 2008; Toth, 2004), there are no studies that examined
these contrasts directly.
The next important feature of the tutorials in this study is that they are computer-based applications for teaching grammar. As in mainstream language teaching, over the
last decades the early CALL focus on grammar has shifted toward a more communicative
orientation, with more exploration of the role of the computer as a tool rather than a
tutor. However, similar to face-to-face language learning, communicative
CALL has been criticized for failing to provide opportunities for more cognitive grammar
learning (Frederiksen, Donin, & Decary, 1995; Garrett, 2009; Nutta, 2004). Many see
that CALL needs to be reconceptualized to explore the potential of computer-based
tutorials for teaching grammar (Garrett, 2009; Gilby, 1996; Hubbard & Siskin, 2004;
Nutta, 2004). With regard to the information available in the research literature on
grammar tutorials, Hubbard and Siskin (2004) point to the lack of such publications and
call for more research arguing that tutorial CALL is “a viable part of CALL and deserves
serious attention rather than summary dismissal” (p. 459).
Finally, the tutorials in this study include two types of enhancements, animations
and a recording of a real teacher. The idea to enhance grammar presentations by
animations is not new. When Garrett (2009) argued that research should revisit the
advantages of computer-based grammar tutorials and extend their appearance beyond
conventional methods, she recommended adding animations to provide dynamic
illustrations of grammatical relations. Further, Caplan (2002) noted that computer-based
tutorials should not resemble a workbook, but rather utilize such instruments as
animations to enhance the effectiveness of grammar explanations. However, the research
on enhancing grammar explanations with animations shows mixed results (Caplan, 2002;
Roche & Scheller, 2008). Similarly, although enhancements in the form of human, or
human-like, characters are believed to awaken learners’ social responses and, as a result,
increase motivation and subsequent learning, the existing studies did not find consistent
support for such an assumption (Mayer, 2005; Moreno, Mayer, Spires, & Lester, 2001;
Park & Catrambone, 2007).
In short, there is a great theoretical need in the current research to empirically
investigate computer-based grammar tutorials that represent isolated modules with
systematic rule explanations. At a time when communicative teaching faces its
shortcomings and grammar is being “retrieved from the naughty corner” (Cajkler &
Addelman, 2000, p. 128), research needs to be carried out to empirically examine
whether such tutorials can be effective.
Practical need
Despite the lack of experiments with computer-based grammar tutorials in
research, an abundance of grammar tutorials of various kinds is evident online. A simple
search for ‘regular verb conjugation in German’ and ‘separable-prefix verbs in German’
on YouTube.com returns many hits. Although the quality of tutorials on YouTube varies,
both in terms of production and provided explanations, it is obvious that users continue to
create and watch them. The many positive comments that the videos receive from users and
the number of hits, which often counts tens of thousands of instances of use, clearly indicate
a need for such learning tools.
The available tutorials fall into two broad categories: those that simulate
classroom learning with a real teacher who explains a grammatical topic in the recording
(see Fig. 1a) and those that utilize software like Microsoft Office to create recorded
narrations (see Fig. 1b). Some tutorials even include simple text animations to enhance
the salience of the important aspects (see Fig. 1c). In view of all that, there seems to be a
trend in which the production of grammar tutorials increases in response to the learners’
needs, but the research lacks guidelines as to whether such tutorials are in fact
effective and as to what comprises an effective computer-based grammar tutorial. The
results of my study offer some insights into how these questions might be answered.

Figure 1. Examples of tutorials found online.
(a) Example of a tutorial with a recording of a real teacher
(b) Example of a tutorial with a recorded Microsoft Office document
(c) Example of a tutorial with animations
Source: www.youtube.com

From an even more practical perspective, my study offers a discussion of whether
computer-based tutorials that are more time-consuming to create offer bigger rewards in
terms of learning or satisfaction. Creating even the simplest form of a tutorial, such as a
voice-over narration for a set of slides, requires hours of work by the teacher. As the
YouTube search showed, many tutorials go beyond the simplest form, offering text
animations or recordings of a real person. Such enhancements can translate into hours of
additional work. Thus, the practical question that I attempt to answer with this research is
whether it is worth the additional effort. For this study, I created all of the tutorials
myself, tracking the time committed to each of them to have some concrete grounds for
answering this question.
On a larger scale, my research also offers a discussion of whether computer-based
grammar tutorials are valid instruments for language learning. This discussion could be
of value particularly nowadays, when the rapid development of new technologies coupled
with the growing burden of financial constraints on institutions has led to an increase in
online and hybrid course offerings (Allen & Seaman, 2006). The field of language
learning is not an exception to the overall trend of integrating more computer-based
instruction into the curriculum. For language courses, switching to the online mode in
part or completely can help increase enrollments and reduce regular class size (Sanders,
2005). Given that computer-based grammar tutorials could be an option in learning tools
for hybrid or online courses, both educators and administrators should be interested in
whether such tutorials could indeed be a valuable asset for language learning.
Overview of the chapters
This chapter has introduced the study by defining its context in the current
educational setting. After an overview of the study purpose and the methodological
solutions to achieve this purpose, a short discussion of the pertinent theoretical
background was provided to outline the need for this investigation. I concluded by
offering a reflection on the potential practical value of this research.
The literature review (Chapter 2) presents an overview of research relevant for the
present investigation. The discussion starts by describing the main theoretical disputes
about the necessity of grammar teaching, and more specifically explicit grammar
teaching. Then, I discuss the merits of teaching grammar by means of isolated deductive
rule explanations. In the next section of the literature review, I focus on the research on
grammar in CALL to discuss why computers could be ideally suited to teach grammar to
language learners. Further, I refer to the area of instructional design to find practical
guidelines for creating good-quality instructional presentations. The review continues by
discussing the options of enhancing instructional presentations, particularly with
animations or human characters. Finally, I review several existing studies, including the
pilot investigation for this study that addressed conditions for grammar instruction similar
to the ones in the current investigation.
The methodology chapter (Chapter 3) describes in detail the study procedures for
the present investigation. The chapter presents an overview of the study design, including
participants, settings, target structures, instruments, and data collection procedures.
The results chapter (Chapter 4) starts by presenting the results of the quantitative
analysis of the data for each research question. After that, additional insights on the
results are offered based on the qualitative analysis of the participants’ responses to
questionnaires and interviews.
The discussion chapter (Chapter 5) concludes the study by drawing parallels
between the findings of this investigation and previous research. This chapter also
discusses the theoretical and practical value of the study. Finally, this chapter describes
the limitations of the study and recommends avenues for future research.
Glossary of terms
This section provides definitions of abbreviations for both general and study-specific terms that appear throughout this thesis.
General terminology

CALL: Computer-assisted language learning
CLT: Communicative language teaching
CMC: Computer-mediated communication
FFI: Form-focused instruction
SLA: Second language acquisition

Study-specific terminology

ST: tutorial with a static text and a voice-over narration
AN: tutorial with an animated text and a voice-over narration
RT: tutorial with a static text and a recording of a real teacher
VC: regular verb conjugation
SPV: separable-prefix verbs
MU: the large public university in the Midwest where the study procedures took place (main site for data collection)
AMU: another public university in the Midwest where the study procedures took place (additional site for data collection)
CHAPTER 2
LITERATURE REVIEW
This chapter presents a review of the literature to describe the theoretical basis of
the present investigation. As mentioned in the previous chapter, my study adopts a
multi-theoretical approach to draw on different areas of research to answer a broad question of
whether computer-based tutorials are effective and welcome tools to review grammar for
language learners. I decided to bring several areas of research into the discussion because
no single theoretical framework could provide a complete understanding of the potential
usefulness and appeal of such tutorials. Thus, I decided to explore several areas of
research to demonstrate that computer-based grammar tutorials are both valid and needed
instruments for language learning and that educators should reopen the neglected
research agenda on the benefits of such tutorials.
Because grammar instruction is at the heart of this study, I start my exploration
with a general question on whether grammar should be taught formally. The discussion
proceeds by examining what we know about the potential benefits of deductive grammar
explanations that are separated from meaning-focused activities. Next, the focus of
discussion turns to the field of computer-assisted language learning (CALL) to review
arguments in favor of using computers in grammar teaching in general and also to
describe the current position of tutorials among CALL applications. After that, I refer to
the area of instructional design to delineate practical guidelines for assuring the high
quality of the tutorials. The next issue under consideration is whether the quality and the
efficiency of the tutorials could be increased by enhancing them with animations or a
recording of a real teacher. Finally, I review previous research, including the results of
the pilot experiment for this study.
Views on formal grammar instruction
This section reviews the literature from the areas of language learning, second
language acquisition, and linguistics to provide theoretical grounding in favor of formal
grammar instruction in general. Specifically, I draw on the research to explore the role
that formal, or explicit, conscious grammar instruction, as opposed to implicit grammar
acquisition, plays in language learning. It should be underscored that in this section the
term explicit is used as a synonym for formal, conscious, instructed and an antonym for
implicit grammar learning, rather than as a reference to traditional rule- or drill-based
grammar teaching.
Theoretical perspective
The question of whether or not grammar instruction is essential to language
learning has been an issue of decades-long debate in academia and education. From the
theoretical perspective, the issue of the viability of grammar instruction belongs to the
debate about the nature of second language acquisition and what role conscious learning
plays in the acquisition process. Although this debate was present in theoretical
discussions for centuries before, it escalated after Krashen (1981, 1985) proposed a clear
division between the processes of language acquisition and language learning. Explaining
the nature of language ability, Krashen claimed language acquisition was a subconscious
process of picking up a language, whereas language learning was a conscious process
with learned knowledge represented consciously in the brain. By drawing analogies
between first language acquisition by children and second language acquisition by adults,
Krashen (1981, 1985, 1992, 2002) argued that once meaning was inferred from
comprehensible input, the grammar embedded in the comprehended utterance would be
acquired automatically. In Krashen’s view, grammar instruction, being part of conscious
language learning, has a limited function because we acquire another language by
understanding it in an effortless and involuntary way.
Krashen’s rigid denial of the connections between acquisition and learning for
developing communicative language ability resulted in theoretical debates about whether
or not an interface exists that connects these two processes. The viewpoint proposed by
Krashen became known as a non-interface position. As explained above, this position
rejected any interface between subconscious (or implicit) language acquisition and
conscious (or explicit) language learning and regarded them as independent mechanisms.
Later, proponents of the non-interface position amended its main assumptions by stating
that explicit knowledge could be used as a resource when implicit knowledge was not yet
available. At the time, however, the non-interface position met widespread resistance,
resulting in various approaches to explain how implicit and explicit processes of language
acquisition were interconnected. These explanations fell within two main positions: the
weak-interface position that assumed an indirect relationship between implicit and
explicit learning, and the strong-interface position that believed in the direct connection
between the two.
The support of the weak-interface position mostly comes from connectionist
theories of second language acquisition, which argue for an indirect connection between
implicit and explicit knowledge. Connectionists (Bybee, 2008; N. Ellis, 2003, 2008)
emphasize that both first and second language acquisition proceed implicitly when rules
emerge from the input. However, second language acquisition differs from first language
acquisition due to its limited capabilities. According to Ellis (N. Ellis, 2008), the
limitations of second language acquisition can stem from factors specific to the second
language (e.g., a possibility of language transfer from the mother tongue or the
availability of naturalistic settings), and non-specific to the second language (e.g.,
frequency, saliency and contingency of the forms). But the limitations of second language
acquisition that prevent a second language learner from attaining a native-like level in the
second language could be overcome by explicit grammar instruction that could speed up
second language acquisition (N. Ellis, 2005, 2008).
For the supporters of the strong-interface position, the connection between
implicit and explicit learning in second language acquisition is direct, meaning that
explicit knowledge could become implicit through practice. This view is supported by
theoretical and empirical work of DeKeyser (1993, 1997, 2007), who considers language
to be a skill like any other. According to the strong-interface position, language learning
proceeds from explicit to implicit knowledge by the proceduralization (or automatization)
of the skill through practice: first, “an explicit jumpstart” (DeKeyser & Juffs, 2005, p.
442) in the form of declarative information helps develop an initial mental representation,
and then systematic and deliberate practice transforms this conscious declarative
knowledge into the implicit grammar representations necessary for effortless language
production.
The interface debate inspired many theoretical arguments in an attempt to
understand the relationship between implicit and explicit learning. However, as
Dörnyei (2009) said, “with hindsight we can admit that contrasting the non-, weak, and
strong interface positions had only limited use in furthering the field because a great deal
of the debate remained at a very general—philosophical and terminological level” (p.
167). Similarly, DeKeyser and Juffs (2005) implied that the focus of the debate should be
redirected from comparing the usefulness of implicit versus explicit knowledge to
finding ways to maximize the benefits of explicit learning. That call prompted educators
to turn their attention to a more practical avenue of research: the empirical investigation
into benefits from implicit and explicit methods of language learning.
Perspectives from teaching and research
In practice, the field of language learning has long been primarily oriented to
approaches that support implicit language learning. In the 1980s, for instance, Krashen’s
view on second language acquisition became the theoretical foundation for the Natural
Approach to second language teaching (Krashen & Terrell, 1983). The main assumption
of the Natural Approach was that second language acquisition was similar to the
effortless first language acquisition: Language mastery occurs implicitly through the
understanding of meaningful input without any explicit grammar instruction. With regard
to the denial of explicit grammar teaching, the Natural Approach is similar to other
communicative approaches that had started to emerge in the late 1960s. These approaches
emphasized the acquisition of the language through communication and by and large
replaced grammar learning by fluency and interactive collaborative activities. Appealing
to both educators and learners, communicative language teaching (CLT) approaches have
positioned themselves at the forefront of language teaching for several decades, starting
what Richards (2002) called “the demise of grammar-teaching instruction” (p. 35).
But despite the wide dissemination of the CLT teaching philosophy and the
application of its principles to various language teaching approaches, naturalistic methods
of teaching met strong resistance in both theory and practice. First, proposals that the
principles of acquiring the first language could equally apply to the second language were
contradicted by the obvious fact that the overwhelming majority of second language
learners had trouble achieving native-like proficiency. To many, that seemed to have a
simple broad explanation: Second language acquisition does not resemble first language
acquisition (Dörnyei, 2009; N. Ellis, 2003, 2008; R. Ellis, 2002; Garrett, 1995;
Larsen-Freeman, 2003; Schmidt, 1990). As to why it happens, though, scholars had different
explanations, such as the maturational constraints (Long, 1990), the switch from an open
awareness to controlled attention (Schmidt, 1990), the quality of input or the first
language transfer (N. Ellis, 2008), among others.
Opposition to the pure communicative approaches that denied the value of
conscious learning resulted in proposals that language teaching should include explicit
components (DeKeyser, 2005; DeKeyser & Juffs, 2005; Dörnyei, 2009; N. Ellis, 2005;
Larsen-Freeman, 2003; Lightbown & Spada, 2006). Larsen-Freeman (2003) called a
direct transfer of the principles of the naturalistic language acquisition to classroom
instruction “the reflex fallacy” (p. 20), arguing that the latter should not emulate the
former. Similar to other scholars (Chaudron, 1988; Dörnyei, 2009; N. Ellis, 2008; R.
Ellis, 2002), Larsen-Freeman (2003) argued that the purpose of formal instruction was to
accelerate natural learning and help learners achieve higher levels of accuracy. Indeed,
many have noted that the success of the naturalistic approaches in promoting fluency was
coupled with the failure to achieve adequate levels of accuracy in L2 (Higgs & Clifford,
1982; Hinkel & Fotos, 2002; Long, 1996; Richards, 2002; Skehan, 1996).
Empirical evidence for the failure of naturalistic approaches to promote higher
levels of accuracy comes from studies that compared implicit and explicit methods of
teaching, research on immersion learners, and investigations into study abroad
experiences. Thus, research that compared these methods argued in favor of explicit
language teaching by demonstrating its significant advantages for increasing learners’
accuracy in the second language in college language classrooms (e.g., Chaudron, 1991;
R. Ellis, 1998, 2002; Norris & Ortega, 2000, 2001). For instance, in a renowned
meta-analysis based on 49 different studies, Norris and Ortega (2000) investigated a number of
research questions, including the relative effectiveness of different types of second
language instruction. The researchers concluded that “treatments involving an explicit
focus on the rule-governed nature of L2 structures are more effective than treatments that
do not include such a focus” (Norris & Ortega, 2000, p. 483), thus suggesting that
instructed learning was superior to meaning-focused naturalistic approaches.
It is natural to suggest that the benefits of naturalistic learning for both fluency
and accuracy will be more evident when there are plentiful opportunities for exposure.
However, the research on the most naturalistic learning settings—language immersion
and study-abroad settings—discovered a similar trend of insufficient accuracy in the
learners’ second language production (Doughty, 2003; Lightbown & Spada, 2006; Ranta
& Lyster, 2007; Swain & Lapkin, 1989). In a similar vein, a recent study by DeKeyser
(2010) on study abroad experiences demonstrated that learners failed to fully acquire the
language even in the naturalistic settings of the second language environment. During a
study abroad term in Argentina with his students, DeKeyser (2010) found that learners
who had the most explicit knowledge achieved the most progress in contrast to other
learners, whose inadequate knowledge of grammar made them constantly “[reinvent] the
elementary grammar wheel and [avoid] practice opportunities with the natives because
they were too painful” (DeKeyser, 2010, p. 89).
In short, although formal grammar instruction has fallen out of favor after being
pushed out by communicative language teaching approaches, it never completely
disappeared. In contrast, the rigidity of the theoretical perspectives that denied the role of
conscious learning and the lack of expected accuracy gains in the classrooms prompted
theory and research to seek support in favor of more formal learning.
Learners’ perspective
The discussion above suggests that support for explicit grammar instruction came
both from theory, when scholars argued that first and second language acquisition
followed different routes, and practice, when studies of the classroom instruction and
immersion settings found the lack of explicit teaching to be related to lower levels of
accuracy. This section introduces another powerful factor in support of explicit grammar
teaching: the learners.
Most studies in the area of second language acquisition involve language learners
at the college level. For these learners, age is a factor in favor of explicit instruction. As
discussed above, various proposals have been advanced in academic discussions to
explain the reasons for the child–adult differences in language acquisition. The
propositions that argued for maturational constraints as a possible reason became known
as the Critical Period Hypothesis (CPH). There are different versions of the CPH in the
scholarly research (for an overview see Long, 1990), but they mainly claimed that
children’s remarkable capabilities for implicit language learning weakened after puberty.
The CPH claims were widely supported in the field of second language acquisition and
language learning by researchers and educators who argued that due to the limitations in
implicit language capacities adults needed to rely more on explicit learning and
declarative knowledge (DeKeyser & Juffs, 2005; Dörnyei, 2009; Doughty, 2003; R. Ellis,
2002; Ullman, 2005).
But the biological factor of age is not the only reason that explicit grammar
teaching could be beneficial for post-adolescent language learners. Students seem to
believe in the value of explicit grammar learning (Bade, 2008; Hinkel & Fotos, 2002;
Jarvis & Szymczyk, 2010; Katz & Blyth, 2009; Larsen-Freeman, 2003; Loewen, Li, Fei,
Thompson, Nakatsukasa, Ahn & Chen, 2009; Luke, 2006; Schulz, 2001). In two
well-known studies on student and teacher perceptions of the role of explicit grammar
instruction, Schulz (1996, 2001) revealed that, to their teachers’ surprise, students
generally favored grammar instruction. It should be noted, though, that Schulz (2001)
compared the beliefs of U.S. students to those of Colombian students and found that the
latter clearly had stronger beliefs about the role of explicit grammar instruction.
Similarly, some research that reported learners’ strong preferences toward grammar
instruction was based on studies with non-U.S. students (Bade, 2008; Borg, 1999; Hinkel
& Fotos, 2002; R. Ellis, 2002; Jarvis & Szymczyk, 2010; Tsui, 2003). Nevertheless, the
fact that U.S. college students also favor explicit grammar instruction is hard to ignore
with evidence coming from teachers’ perceptions in the classrooms (Larsen-Freeman,
2003; Katz & Blyth, 2009) and from experimental studies (Brown, 2009; Loewen et al.,
2009; Schulz, 1996, 2001).
Evidence that demonstrates why grammar instruction is supported by language
learners comes from the research on anxiety in language classrooms. Several studies that
investigated anxiety clearly linked poor grammar knowledge as well as lacking or
inadequate grammar instruction to higher levels of stress and anxiety reported by
language learners (Casado & Dereshiwsky, 2001; Ewald, 2007; Koch & Terrell, 1991).
Definitely, insufficient grammar knowledge or instruction can be considered as only one
among other numerous sources that can influence anxiety in language classrooms
(Horwitz, 2001), but these studies demonstrated that learners expected grammar to be
taught and that they associated better achievement with robust grammar knowledge.
In summary, teaching grammar explicitly makes sense not only from the
perspective of research and practice, but also from the point of view of the major players
in language learning—the learners. Certainly, the individual differences in learning styles
will assure variety in learners’ beliefs and expectations, but the research argues that adult
language learners should profit from more conscious learning. Indeed, many learners
support this view and consider grammar to be one of the cornerstones of language
learning and a prerequisite for their success in mastering a foreign language.
Summary
This section of the literature review has dealt with a broad question of whether
formal, or explicit, grammar learning should be part of language teaching. As can be seen
from the discussion, there are perspectives in research that support and those that deny
the merits of grammar instruction. In theory, more voices are raised in favor of at least
some connection between the implicit and explicit knowledge suggesting that conscious
language learning could be contributing to the development of the implicit language
representations in learners’ minds. In practice, however, the domination of
meaning-oriented teaching methods left grammar instruction mostly on the sidelines. Nevertheless,
the observed shortcomings of communicative approaches in promoting greater accuracy
prompted educators to revisit the merits of explicit grammar teaching. Support for formal
grammar instruction also comes from the perspective of learners, who often associate the
knowledge of grammar with success in language learning.
Methods in explicit grammar instruction
The previous sections reviewed support in favor of formal, or explicit, grammar
instruction in general. But this support of formal grammar instruction does not
automatically extend to such methods of explicit grammar teaching as rule-based
deductive grammar explanations, which are the focus of this study. Thus, this section
continues the main discussion by delving into the issue of the feasibility of grammar
teaching by means of rule-based explanations.
Form-focused instruction
In the previous section, I described the weak- and strong-interface positions as
theoretical views that opposed the non-interface position, which argued against the
importance of conscious learning for language acquisition. As follows from the
discussion, the opposition of these views was mostly a theoretical debate. But at the same
time, new instructional methods emerged in educational practice as an alternative to
meaning-focused approaches. The new methods evolved under the umbrella of
form-focused instruction (FFI), which R. Ellis (2001) defined as “any planned or incidental
instructional activity that is intended to induce language learners to pay attention to
linguistic form” (pp. 1–2).
The beginnings of the form-focused instruction are associated with Long, one of
the leading SLA researchers, whose research focused on interaction in the second
language. Long (1988) pointed out that in communicative language classrooms an
occasional shift of attention from the meaning to form occurs when communication
breaks down. Based on this observation, Long (1988; 1991) argued for an approach to
language teaching when information about grammatical forms is provided to learners in
the exact moment when they perceive problems in their communication and production.
Long (1991) distinguished between his approach, which he named “focus on form” (p.
46), from the traditional grammar teaching that involved decontextualized rule
explanations, which he named “focus on forms.” Long (1991) strongly argued for the
benefits of the focus-on-form approach which maintains the overriding focus on meaning
by using a variety of techniques, which range from repeating the learner’s utterance with
a questioning intonation to providing explicit information about a problematic grammar
topic.
Long’s idea that focus on grammar should occur either to prevent or ameliorate
communication breakdowns became widely accepted in the field of second language
acquisition and language education. The overwhelming majority of theories and
approaches in the form-focused instruction support the contextualized grammar
instruction through incidental focus-on-form techniques.
Isolated grammar instruction

The term isolated is used in this thesis as a synonym for “separated, decontextualized” and means that grammar instruction is separated from the meaning-focused activities in language classrooms rather than integrated.

Although incidental focus on grammar, which is characteristic of the focus-on-form
approaches, became a successful solution to the problem of low accuracy
achievement (Ellis, Basturkmen, & Loewen, 2001; Mackey, 1999; Lightbown & Spada,
1990; Norris & Ortega, 2000), the task of integrating grammar within the context of
meaningful interaction still remained challenging. Some argued that the dual focus on
accuracy and meaning could not be achieved in a balanced way (Carroll, 2001; Toth,
2004; VanPatten, 1989, 1996, 2002) because the prevalent emphasis on communication
would prevent learners from identifying correctly when the teacher switches the focus
from meaning to form. The research demonstrated that the learners’ failure to follow the
teachers’ intention and notice the information about the grammatical forms not only
reduced the expected gains from such interventions but also created misunderstandings
(Carroll, 2001; Hall, 1995; Mackey, Gass & McDonough, 2000). For example, in a study
that compared entirely meaning-focused instruction to one with incidental focus on form,
Toth (2004) found that the discourse in the second type of instruction proved to be less
cohesive and that learners experienced more difficulties in understanding the teacher.
Based on the results of this study, Toth (2004) concluded that “recognizing clearly when
accuracy is the discourse purpose would greatly facilitate the noticing of targeted forms”
(p. 27). Thus, it seems that the problems created by the shifting of focus from meaning to
grammar in a communicative classroom could be helped by decontextualizing grammar
instruction.
Another problem with incidental focus on form is that it cannot provide a
systematic coverage of grammar due to its spontaneous nature (Lyster, 2007). But
grammar is a system; therefore, it is important that learners have a thorough
understanding of how this system works (DeKeyser, 1998, 2007; R. Ellis, 2002a, 2002b;
Garrett, 1995; Hudson & Walmsley, 2005; Lyster, 2007; Vygotsky, 2001). Although
learners can learn to use correct grammatical constructions in their speech without
formally learning the underlying rules, language use of this type is fragmented and
unstructured and, therefore, cannot provide grounds for a systematic understanding,
which is the basis for creative language production (Vygotsky, 2001). In her call for more
systematicity in grammar instruction, Garrett (1995) also argued for more balance
between meaning- and form-focused activities to give learners “a principled
understanding of the relationship between communication and grammar” (p. 351). In this
case, isolating grammar instruction from meaning-focused activities could help learners
understand the system within the language.
As for suggestions on how to integrate form-focused instruction into
communicative classrooms most efficiently, R. Ellis (2002a) proposed two basic options
to achieve the desired balance. One option was the integration of grammar in the form of
“focused communicative tasks” or feedback (R. Ellis, 2002a, p. 24), which is consistent
with incidental focus on form discussed earlier. The other option was the parallel
approach when meaning and form represented entirely separate components (R. Ellis,
2002a). According to this idea, the grammar component consists of a list of grammatical
structures to be taught systematically. R. Ellis (2002a) described his own idea as
something that would “[fly] in the face” (p. 26) of generally accepted language teaching
practice that promotes the integration of skills, rather than their separation. He explained,
however, that grammar could be taught explicitly in a similar manner as historical dates
in history or formulas in math classes, and the goal of a grammar syllabus would be to
help learners understand how the system of grammar works. Thus, it seems that taking
the grammar out of context and presenting it in separate modules could help overcome
the fragmented nature of grammar learning within communicative discourse and
contribute to systematicity in building grammatical knowledge. Recently, Spada and
Lightbown (2008) discussed the advantages and disadvantages of integrated versus
isolated techniques of form-focused instruction. Specifically, they argued that although
incidental instruction allows students to acquire grammar while focusing on meaning,
isolated grammar instruction can be essential for the acquisition of those language
features that are hard to perceive in the stream of speech, that are misleadingly similar to
the native language of the learners, or that are likely to cause a communication
breakdown.
To sum up, there is enough support in the current research to state that isolated, or
decontextualized, grammar instruction can be a justifiable method in language teaching.
It seems plausible to argue that keeping grammar instruction separate from the
meaning-focused activities can minimize misunderstandings between the teacher and the learner.
Furthermore, isolating grammar instruction allows for a more systematic coverage of
rules and that is critical for enhancing the learner’s understanding of the system of
grammar.
Deductive grammar instruction
In the previous section, the discussion revealed support in favor of isolated, or
decontextualized, grammar instruction that can enhance language teaching in
communicative classrooms thanks to the systematic representation that does not interfere
with a focus on meaning. Once established that keeping grammar instruction separate can
be beneficial for learning, the next issue to decide is whether rule explanations should be
inductive, in which learners understand the grammar rules from examples, or deductive,
in which the teacher provides an explicit statement of the rule and learners apply it to
examples (Gollin, 1985).
In 25 Centuries of Language Teaching, Kelly (1969) traced the history of the
inductive/deductive dichotomy that existed from the very beginning of foreign language
teaching “but never on an equal footing” (p. 34). To illustrate changes in the dynamic of
this dichotomy in the history of language teaching, Kelly (1969) provided a diagram that
is reprinted in Table 2.
This diagram demonstrates that “where grammar was approached through logic,
the range of methods was reduced to teaching rules; but where inductive approaches were
used, the deductive did not necessarily disappear” (Kelly, 1969, p. 59). Thus, deductive
methods of language teaching fell out of favor throughout the history of language
teaching yet they continued to be used by practicing teachers. Examining the appeal of
deductive approaches in various time periods, Kelly (1969) observed that explanation of
rules was a logical choice when the time of instruction and accuracy were concerned.
Table 2. History of changes in grammar teaching
Era           Inductive   Deductive   Language Analysis
Classical         X           X       Grammar
Medieval                      X       Grammar, Grammatica Speculativa
Renaissance       X           X       Grammar
18th/19th                     X       General Grammar
19th/20th         X           X       Linguistics, Grammar

Source: Kelly, L. G. (1969). 25 centuries of language teaching. Rowley, MA: Newbury House Publishers.
Although deductive grammar teaching persevered through all theoretical and
practical challenges in its history, there is hardly any evidence in the recent academic
second language acquisition discussions of the merits of this method. This topic,
however, was popular in academic discussions in the 1970s, when studies demonstrated
the general efficacy of deductive rather than inductive methods for adults (Krashen &
Seliger, 1975; Levin, 1972; Seliger, 1975). At that time, some evidence in support of rule
explanations came from an unexpected source: Stephen Krashen, who vehemently
denied any role for explicit teaching in language acquisition yet agreed that it could have
an effect on language learning, a process he considered separate from acquisition
(Krashen, 1981, 1985). When Krashen argued for the separate processes of language
acquisition and language learning, he placed the major emphasis on the acquisition,
leaving conscious learning on the side and limiting its use to monitoring, or editing,
purposes (Krashen, 1985, 2003). Describing the optimal monitoring activities in The
Natural Approach, Krashen and Terrell (1983) argued for short and simple rule
explanations to be conducted outside of the class to increase students’ explicit knowledge
of grammar.
Although the trend of discovering statistically significant advantages of deductive
approaches continued in later studies (Erlam, 2003; Rosa & Leow, 2004; Rosa & O’Neill,
1999), the results were mixed. Some studies found that the advantages of the deductive
approaches depended on specific conditions. Thus, in Seliger (1975), significant
differences in favor of the deductive approach were found on the retention tests, but not
on the recall tests. In Sheen’s (1992) study, significant differences in favor of the
deductive approach were evident in oral tests only, although the overall trend was similar
in written tests. Abraham (1985) discovered that the advantages of the deductive method
were statistically significant only for field-independent, or analytical, learners. Although
these studies provided at least some support in favor of the deductive approach, other studies
either found no differences between deductive and inductive teaching (Schaffer, 1989) or
demonstrated the significant advantages of the inductive approach (Haight, Herron, &
Cole, 2007; Herron & Tomasello, 1992).
In summary, the research does not offer an unambiguous answer to the question of
whether the approach to teaching rules should be deductive or inductive, but it certainly
provides evidence for the methodological validity of both approaches. Inductive
approaches are arguably more advantageous because they involve learners
actively in the construction of the rules (Haight et al., 2007; Herron & Tomasello, 1992;
Schaffer, 1989). But in the studies discussed above, several researchers noted one clear
advantage of the deductive activities: They save time (Gollin, 1985; Kelly, 1969; Sheen,
1992).
Summary
This section dealt with the perspectives in research and practice on the value of
explicit rule-based deductive grammar instruction that is separate from meaning-focused
activities in the classroom. Although it is too early to state that this approach to explicit
grammar teaching is beneficial for language learning, given the lack of studies that have
focused specifically on that issue, some educators believe that reintroducing isolated
grammar instruction into language teaching could help provide the systematic coverage
necessary for conscious language learning.
Grammar and CALL
In the previous section, I presented evidence to support the validity of grammar
instruction in the form of deductive isolated rule explanations. Now, it is time to decide
the best way to implement such an approach. Since computer-based tutorials appeal to me
as an option to teach grammar explicitly, in this section I explore the theoretical
background by focusing on three issues: (a) the role of computers for grammar
instruction, (b) teachers’ attitudes to computer-based grammar instruction, and (c)
learners’ perspective on learning with computers.
Computers and grammar instruction
Traditionally, computers have been considered a good fit for grammar instruction
(Dodigovic, 2005; Frederiksen et al., 1995; Garrett, 1995, 2009; Gilby, 1996; Hubbard
& Siskin, 2004; Levy, 1997; Levy & Stockwell, 2006; Nutta, 2004; Pusack & Otto, 1984;
Skinner, 1954). Computers by their nature seem ideally suited for presenting and
practicing knowledge that could be structured in a series of interconnected steps. This
idea became the basis for programmed instruction, proposed and advocated by
Skinner (1954), who argued that by programming instruction to teach knowledge in well-defined sequences with hints, prompts, and immediate positive reinforcement, one could
create a successful new system that supported creativity and productive thinking through
manipulating the learning experience.
The idea that sequenced manipulation can promote creativity in language learning
certainly sounds like a contradiction in terms unless we look at the role of technology in
Skinner’s theory. Skinner (1954) suggested that specially programmed teaching machines
could improve teaching by serving as private tutors. Foreseeing possible criticism against
the mechanical approach to teaching with machines, Skinner (1954) asserted that the
machines were not to replace the teacher, but rather would serve to increase efficiency by
establishing necessary behaviors and setting the stage for the teacher’s work in class.
Machines could save teachers labor and allow them to use class time to present the
intellectual, cultural, and emotional aspects of the subject matter, ensuring that students
with different types of thinking were introduced to a variety of ways to learn.
Skinner’s idea of allocating the tutoring duties to a computer to free up time for
other activities in language classrooms has remained popular among CALL scholars and
practitioners. Many have emphasized such benefits of computer tutors as tireless practice
(Dodigovic, 2005; N. Ellis, 2001), infinite flexibility (Garrett, 1995), and modular
presentation (Frederiksen et al., 1995). Pusack and Otto (1984) summarized this idea in
their blueprint for comprehensive computer-assisted instruction in foreign language
curricula:
Outside of class, students use computer-based systems for
grammar rule-learning, vocabulary acquisition, listening and
cultural comprehension (via video and audio), development of
reading skills, and writing support. Away from the computer, they
engage in less structured reading, listening, and writing. In class,
the spoken language predominates as teachers work to foster oral
proficiency, literary understanding, and intercultural awareness. (p.
204)
In a broad-ranging exploration of CALL, Levy (1997) proposed a tutor–tool
framework as a schema to organize existing projects in computer-based instruction,
which at the time was a new but already diverse and complex field. Levy (1997) argued
that the computer acts like a tutor when it assumes the teacher’s responsibility in
presenting information, guiding the learner, providing feedback, and assessing the
learner’s performance. As a tool, the computer facilitates learning without imposing a
structure on it. In contrast to previous definitions of the CALL tutor role that linked it to
drills and practice (Warschauer, 1996; Wolff, 1993), Levy (1997) argued that evaluation
of the student by the computer was the main element that “sets the computer tutor apart
from the tool” (p. 180). Further, Levy (1997) highlighted the strengths and weaknesses of
the tutorial role of CALL. Among the benefits, Levy (1997) mentioned the valuable
supplementary work and additional practice, as well as the computer’s flexibility to provide
language instruction when the teacher was not available. As for disadvantages, the
structured and isolated nature of tutorial applications produced mechanistic language
tasks. Moreover, the self-contained tutorial had to possess a sophisticated processing
capability to handle learners’ input for proper error correction and evaluation.
Historically, tutorial computer applications marked the beginning of CALL as a
field of practice and research. The first large-scale projects, such as PLATO
(Programmed Logic for Automatic Teaching Operations) and TICCIT (Time-Shared,
Interactive, Computer Controlled Information Television) (for more information see Heift
& Schulze, 2007; Levy, 1997), included features of tutorial CALL, such as grammar and
vocabulary drills,
which led Levy and Stockwell (2006) to suggest that “the design and development of
CALL tutors has been continuing since the field began” (p. 22). However, the
weaknesses of early CALL tutors, such as mechanistic tasks and lack of natural language
processing, coupled with the increased availability of microcomputers in the 1980s and
the advent of the internet era in the 1990s, compelled CALL practitioners to turn away
from tutorials to other alternatives. As a result, the development of CALL proceeded in
two different directions: communicative CALL applications and parser-based CALL
applications. Although these two lines of practice and research took different routes, the
following discussion shows that eventually they led educators back to considering
traditional learning systems for grammar instruction.
Parser-based CALL
Parser-based CALL applications became the foundation for Intelligent CALL, or
ICALL, prompted by the development of computers on the one hand and of theories of
natural language processing (NLP) and computational linguistics on the other. The
benefits of parsers were alluring to language educators because they could allow learners
to create language and have their written and oral production evaluated by the system
rather than merely practice their receptive skills (Heift & Schulze, 2007). As Dodigovic
(2005) describes it, ICALL applications would do “what good human tutors frequently
do: point out obvious linguistic errors and provide individualized feedback, correction
and remediation” (p. 96). Although the application of NLP parsing techniques to CALL
has taken two directions—a focus on grammar versus a focus on communication
(Holland & Kaplan, 1995)—most projects and research focused on grammar (Nerbonne,
2003).
Because parser-based CALL by its nature is connected to the analysis of
erroneous input based on a systematic approach (for the description of various
approaches see Dini & Malnati, 1993, and Heift & Schulze, 2007), many advocated for
the wide application of parsers to form-focused instruction in general and to providing
feedback on learners’ errors in particular (Cook & Fass, 1986; Dodigovic, 2005; Heift &
Schulze, 2007; Holland, Maisano, & Alderks, 1993). The main strands of research on
applications of parsers to form-focused instruction and feedback come from educators
Nagata and Heift. In her research, Nagata (1993, 1995, 1996, 1997) examined the
application of parsing techniques to feedback in experimental language learning
programs on Japanese particles. The results of her experiments favored the intelligent
form of feedback. Similar to Nagata’s research, Heift’s studies (2001, 2004, 2006; Heift
& Rimrott, 2008) demonstrated that parser-based feedback produced better learning
outcomes compared to other types of feedback.
Practice and research established the advantages of parser-based ICALL
applications, yet at the same time they revealed their shortcomings. For instance, early
parser applications focused mostly on syntax and morphology, thus ignoring other areas
of language learning such as semantics and pragmatics (Cook & Fass, 1986; Holland &
Kaplan, 1995; Holland et al., 1993). Next, successful parser applications generally
demand significant computational expertise in addition to subject-domain knowledge
from their creators (Heift & Toole, 2002), which makes them less practical than
traditional computer-based applications. Further, no computational system to date can
cover all the peculiarities of a language system, which leaves any error-detection
mechanism far from reliable in identifying and explaining learner errors (Heift & Schulze,
2007). In a similar vein, many researchers pointed out that parser-based approaches to
feedback sometimes neglected the pedagogical considerations of the information that the
system provided to the learners (Garrett, 1995; Heift & Schulze, 2007; Holland &
Kaplan, 1995; Holland et al., 1993). Finally, Holland et al. (1993) suggested that ICALL
applications seemed to be beneficial only for a select group of students: those with
intermediate proficiency, an analytical orientation, tolerance of ambiguity, and
confidence as learners.
Overall, despite all of the advantages of parser-based ICALL, its shortcomings
prevented its wide application. Certainly, most problems with parsing complexity will be
minimized in the future with the development of more reliable computational techniques.
At this point, however, traditional tutoring systems seem more reliable because they
provide more certainty and consistency due to the limited range of responses they handle.
Communicative CALL
Whereas parser-based CALL has been criticized for being more technological
than pedagogical, the parallel development of communicative CALL proceeded in line
with changes in language teaching methods. Thus, the turn to communicative language
teaching in mainstream language learning triggered a similar development in CALL
applications. Although the term communicative CALL may be interpreted differently (see
Bax, 2003; Garrett, 2009), I use this term in a broad sense to describe CALL applications
that allow the use of the computer as a tool through such computer-mediated
communication (CMC) instruments as e-mail, discussion boards, chats, and
videoconferencing. Just like its mainstream predecessor, communicative CALL focuses on
the acquisition of language through communication rather than conscious learning and,
hence, is less structured and controlled compared to applications in form-focused
instruction.
In recent years, we have seen an explosion of new Web 2.0 technologies (e.g.,
blogs, podcasts, and wikis) and new media for social communication and self-publishing
(e.g., Skype, Facebook, Twitter, MySpace, YouTube). Language educators who support
the communicative language teaching approach generally see great potential in applying
these new media to language learning with computers (Felix, 2005; Sykes,
Oskoz & Thorne, 2008; Warschauer, 2005). Yet despite the attractiveness of the new
communication possibilities, some educators have raised concerns about the viability of
CMC applications for language learning. Thus, upon reviewing research on language
acquisition in CMC environments, Chapelle (2001) suggested that in evaluating the effect
of CMC applications, the quantity of learner participation was often inappropriately
equated with language acquisition. Similarly, in their discussion of language learning via
CMC, Levy and Stockwell (2006) quoted Healey (1999) who said that “technology alone
does not create language learning any more than dropping a learner in the middle of a
large library does” (p. 191) to support their argument that just providing learners with a
range of communication opportunities does not guarantee that language learning will
occur. Furthermore, Garrett (2009) criticized the hegemony of the communicative
approach both in mainstream SLA and CALL research and claimed that “nowadays a
program heavily privileging the communicative approach disadvantages those who prefer
a cognitive foundation-building approach” (p. 722). In short, just as mainstream
language teaching became disillusioned with acquiring formal language through
communication alone, the field of CALL recognized the need for more structured
activities that provide opportunities for conscious learning.
A new need for tutorial CALL
Turning away from early mechanistic tutorials led practice and research to
explore intelligent CALL through parser-based applications and communicative
CALL through using the computer as a tool. This created a situation in which tutorials
were de-emphasized in current pedagogical practice (Nutta, 2004). This situation seemed
unfair to some researchers (Garrett, 2009; Hubbard & Siskin, 2004; Nutta, 2004), who
argued that the shortcomings of both intelligent CALL and communicative CALL should
place tutorial CALL “back into the mainstream of the field” (Hubbard & Siskin, 2004, p.
448). Thus, Garrett (2009) blamed SLA theory and language pedagogy for strongly
privileging communicative teaching methods, which created a lack of “innovative drill-and-practice CALL” (p. 722). Further, Nutta (2004) talked specifically about grammar
explanations in CALL, echoing Pusack and Otto’s (1984) call for using computers to
build a solid language base outside of class:
Shifting the systematic study of grammar points from the realm of
the classroom to the domain of the computer laboratory would
enable instructors to take advantage of classroom interpersonal
dynamics and allow them to take into account differences in
background knowledge and learning styles (p. 51).
Whereas Nutta (2004) and Garrett (2009) call for reconceptualizing CALL
practice and rethinking grammar teaching in CALL, Hubbard and Siskin (2004) go a step
further and offer a new vision for the concept of the tutorial. They surveyed four major
CALL conferences to demonstrate that practitioners’ interest in tutorial CALL is still
strong worldwide. Based on this and other considerations, they suggest that tutorial
CALL should be moved “out of the margins of the field of language learning” (p. 449).
Furthermore, Hubbard and Siskin (2004) argue that the definition of the tutorial should be
changed because the tutor/tool distinction proposed by Levy (1997) is not viable
anymore. In contrast to Levy’s (1997) idea of evaluation as the key element in
defining the computer as a tutor, Hubbard and Siskin (2004) claim that its “teaching
presence” should be that key element:
While we have argued … that the live teacher is not necessarily
absent when learners are using tutorial software, the software can
still be “teaching”. In fact, one way to view tutorial software is to
see it as an extension through time and space of the teaching
presence of its designer. (p. 457)
Hubbard and Siskin (2004) claim that shifting the focus of the definition from
evaluation to teaching presence would allow us to include more activities, such as
grammatical explanations, in the concept of a tutorial. Thus, the term
tutorial CALL would refer “to the implementation of computer programs (disk, CD-ROM, web-based, etc.) that include an identifiable teaching presence specifically for
improving some aspect of language proficiency” (p. 457). This way, one of the most
promising uses for tutorial CALL would be helping learners to develop conscious
knowledge of the language forms, patterns, and rules.
In summary, the development of parsing systems and new communication media
has pushed the previously leading tutorial applications to the margins of the field. Years
later, however, the lack of expected results, coupled with shortcomings in both the
pedagogical and technological bases, has pointed back to the advantages of grammar
tutorials for CALL research and practice. From the perspective of parser-based CALL,
grammar has always been instrumental for language learning. In the series of studies by
both Nagata and Heift, parser-based feedback consisted of highly explicit
metalinguistic explanations. Thus, by revealing the advantages of parsed feedback, this
line of research provided additional support for explicit deductive grammar explanations
in computer tutorials. Communicative CALL, in contrast, never put emphasis on explicit
grammar learning. Here, the mixed results of language acquisition through CMC brought
the need for conscious grammar instruction back to the attention of educators.
Computer-based grammar tutorials and teachers
Earlier I presented some evidence that students want grammar to be taught
because they believe in conscious learning of the language. In the previous section, I
presented evidence that computers by their nature are well suited to present grammar to
students. Here, I offer a discussion about the benefits of computer-based grammar
tutorials for the teachers.
In his book on CALL, Levy (1997) mentioned the rivalry in the relationship
between the teacher and the computer when he wrote that “in the case of the computer as
tutor, the teacher’s fear of being displaced by the computer arises” (p. 207). Levy (1997)
went on to explain that such displacement is highly unlikely given present computer
capabilities in handling language. He further suggested that computers could be
beneficial for the teachers because they can free them from some of their responsibilities.
As mentioned before, some educators have suggested that grammar tutorials can be of
value to teachers by allowing more time for communicative activities in the classroom
(Gilby, 1996; Nutta, 2004; Pusack & Otto, 1984; Skinner, 1957). Here, however, I focus
on two other aspects of why tutorials might be welcomed by language teachers: first, the
emotional attitudes the teachers exhibit towards grammar; and second, their expertise and
readiness to teach grammar.
Teachers’ attitudes to grammar instruction
Larsen-Freeman (2002) wrote that many teachers consider grammar to be a
“linguistic straitjacket” (p. 103). Similarly, Pennington (2002) asserted that “required
modules on grammar are often the most hated and feared among language teachers and
prospective teachers” (p. 78). These statements describe the emotional reactions some
teachers have toward grammar instruction. While these reactions could be based on
personal experience, it is also reasonable to assume that such negative attitudes stem
from the decades-long domination of communicative approaches that left grammar
instruction on the periphery of language teaching practice and research.
Although some teachers refuse to give grammar a place in their teaching practice
based on emotional reactions and their pedagogical beliefs, other teachers do not like
teaching grammar because they feel it is not accepted by the mainstream of the language
teaching field. In her study on expertise in teaching, Tsui (2003) worked closely with
four teachers to examine their teaching beliefs and practice. One of the teachers,
Genie, described her teaching as “different” when she admitted teaching grammar rules.
Tsui (2003) explained that Genie felt that teaching grammar rules was not an accepted
practice because teachers should “let students practice using the language and not talk
about the rules” (p. 218). Similar to Tsui’s observation, Cajkler and Addelman (2000)
mentioned that teachers “sometimes found it necessary to teach grammar almost secretly”
(p. 109) because they were not sure how it fit within communicative approaches. Thus,
even teachers who believe in the necessity of grammar instruction do not always like
teaching it because they fear being accused of using outdated teaching methods.
Just as some teachers dislike grammar instruction because they think it is not an
accepted practice, other teachers avoid it because they do not believe that
their students want to learn grammar. For example, in his study on anxiety in language
classrooms, Levine (2003) found a discrepancy: responses to the study questionnaire
showed that instructors perceived higher levels of anxiety among students than the
students themselves reported. This discrepancy was particularly evident in the area
of grammar. Furthermore, in well-known studies on student and teacher beliefs about
grammar, Schulz (1996, 2001) found striking differences when considerably more
students than teachers agreed that grammar is essential to language learning, admitted
liking it, and indicated the need for more correction in the classroom. In the discussion of
her study results, Schulz (2001) suggested that “any sizeable discrepancy in teacher and
student perceptions regarding the efficacy of instructional practices can be detrimental to
learning” (p. 256).
In general, it seems reasonable to state that teachers who do not like teaching
grammar, either because of their personal experiences or their doubts about its necessity,
might eagerly accept the relocation of grammar instruction to computer-based
tutorials, if required by the changes in mainstream pedagogy or prompted by calls from
their students for more grammar in a course.
Teachers’ expertise in grammar instruction
Whereas dislike or hatred towards grammar can be explained by personal
experiences or beliefs ingrained by mainstream teaching philosophy, fear of grammar,
mentioned by Pennington (2002), could have a different source. Previously in this
chapter, I discussed a recent trend in which voices from research and practice have
emerged in favor of stepping away from communicative teaching approaches toward more
grammar-focused instruction. Despite the current lack of an organizing force behind this
development, a change in mainstream teaching philosophy toward more form-focused
instruction seems inevitable. A new emphasis on grammar in language
instruction will certainly alter what is expected of language teachers. The ability to
explain grammar to their students might soon be demanded by the profession, and some
teachers may fear that they are unprepared for such a change.
Many contemporary teachers in English-speaking countries were educated during
the period described as “the death of grammar teaching” (Hudson & Walmsley, 2005, p.
593) or “the demise of grammar-based instruction” (Richards, 2002, p. 35). In light of
such strong metaphors, it would be unreasonable to assume that the teacher education of
that period emphasized teacher preparation in grammar instruction. In fact, several
educators noted the lack of adequate grammar knowledge and gaps in understanding of
grammar among language teachers (Andrews, 2007; Borg, 2006; Hudson & Walmsley,
2005; Katz & Blyth, 2009). To illustrate, I refer to Larsen-Freeman’s (2003) book on
grammar teaching, where she presented the opinions of real teachers on
grammar. Here, an English teacher from a U.S. high school offered the following
anecdote on her experience with the rules:
Once a student asked me, “Why can’t I write ‘more easy’?” My
response was that with certain adjectives you add “-er” instead of
“more.” She understood this, but she wanted to know why. I had to
tell her that I didn’t know why. I searched my grammar books and
discovered that any adjectives with one syllable and any adjectives
with two syllables, one being a _____y, take the “-er” or “-ier”
ending. All other adjectives with two or more syllables take
“more.” I explained it to the rest of the class. They were amazed. I
could see the lightbulbs going on in their heads. (p. 50)
Larsen-Freeman (2003) offered this anecdote to describe how presentation of the
rules could lead to “the dawning of awareness” (p. 50) in students. The same anecdote,
however, could serve as evidence that this English teacher had a considerable lack of
subject-matter knowledge if she had to look up a common rule of English grammar.
Certainly, one teacher’s story cannot serve as the foundation for a general statement
about teachers’ lack of grammar knowledge. But taken together with observations from
studies (for reviews see Andrews, 2007, and Borg, 2006), we can
conclude that some teachers lack the necessary expertise in grammar teaching.
In the discussion above, expertise in grammar teaching is mostly equated with
metalinguistic knowledge of the rules (Borg, 2006). But the return of grammar instruction
requires multifaceted knowledge rather than a straightforward recital of grammar rules.
Many authors who plead for the revival of grammar instruction have proposed
reconsidering the definitions of grammar expertise. While some educators mentioned that modern
teachers should be familiar with a variety of grammar teaching approaches and possess
more explicit knowledge (Hinkel & Fotos, 2002; Hudson & Walmsley, 2005), others
provided an extensive list of elements that should comprise teachers’ knowledge of
grammar. For example, Leech (1994, p. 18) gives a five-point list of what a teacher of
languages should know to effectively teach grammar, and Andrews (1994, p. 75)
provides a 12-point list. These lists require not only a thorough understanding of the
grammar system, but also pedagogical readiness to anticipate learners’ difficulties and
errors and vary the depth and amount of explanation according to the situation. More
recent publications extend these and other requirements to teachers’ grammar expertise to
include an understanding of the differences between written and spoken grammar
(McCarthy & Carter, 2002), the discourse dimension of grammar (Celce-Murcia, 2002),
the concepts of register and sociolinguistic variation (Katz & Blyth, 2009), and the
teaching of grammatical features based on semantic categories and communicative
agency (Negueruela, 2009), among others. Thus, renewed interest in grammar instruction comes
with an overwhelming number of new demands for teachers’ expertise in grammar.
Teachers who feel they lack the professional training to meet these requirements might
welcome the possibility of relocating these new duties to computer-based grammar
tutorials, if only until they attain the necessary expertise.
Overall, I presented this discussion to support the idea that computer-based
grammar tutorials would be a welcome tool for some teachers. I demonstrated why
teachers who do not like teaching grammar or who feel a lack of expertise to teach it may
find it beneficial that computer-based tutorials take over the presentation of grammar to
students. Certainly, simply transferring the responsibilities of a teacher onto a computer
is no simple solution, and no computer-based tutorial will be a panacea. There are many
challenges in transforming all that is known about such a complex phenomenon as the
grammar of a language into a pedagogically sound presentation. However, creating
a computer-based tutorial by a group of experts for wide dissemination could be a more
viable and faster solution than reeducating thousands of language teachers.
Computer-based grammar tutorials and students
Teachers might like the option of computer-based grammar tutorials because it
could save them time on grammar explanations in class or because it could help them
avoid teaching it for various reasons. For students, however, a computer-based
grammar tutorial could be beneficial for one main reason—a computer can cater to their
individual differences better than a teacher.
Individual differences theory implies that “learners learn in different ways and
that no single methodology is useful for all” (Oxford, 1995, p. 365). The ability to cater
to different individual learning styles has often been mentioned as one of the advantages
of CALL (Ahmad, Corbett, Rogers, & Sussex, 1985; Dodigovic, 2005; Gilby, 1996; Heift
& Schulze, 2007; Oxford, 1995). The computer allows for any information to be
presented in various forms and be available for a lengthy period of time. For language
learning, which in academic settings usually happens in a classroom with a group of
students during the time assigned for the course, such affordances provided by a
computer can offer a way to enrich the learning experience of individual learners.
Although learning with technology does not come close to overtaking classroom
learning in students’ preferences, students usually find it engaging and effective
(Ayres, 2002; Gilby, 1996; Nutta, 2004; Taylor, 1979). Computer-based learning offers
an advantage that is hard to replicate in traditional classrooms: the opportunity to review
or rewatch the material. In explaining why e-learning might be a
promising alternative to classroom learning in general, Zhang, Zhao, Zhou, and
Nunamaker (2004) and Gilby (1996) suggested that in class students tend to avoid asking
questions or asking for repetitions for various reasons. The authors argued that computer-based materials could let students review problematic topics several times until they were
fully understood, thus relieving teachers of such repetitive work.
A different perspective on the benefits of grammar instruction through CALL
comes from Tse’s (2000) study on learners’ perceptions of foreign language study, in
which she found that students often feel frustrated when the teacher explains grammar
too quickly or when explanations are oriented towards the top students. In this case,
computer-based grammar tutorials could be advantageous for students who felt left
behind in class. Individual work with grammar tutorials at one’s own pace could restore
students’ self–esteem and keep them on par with the rest of the class.
Finally, grammar seems especially well suited to independent study. For example, in his
comparison of tasks suitable for independent study, Lee (1996) found that learners
preferred using textbooks to study grammar independently. In another study, Jarvis and
Szymczyk (2010) investigated the learner perspective on using web- and book-based
materials for grammar learning. They found that most students reported spending time
studying grammar outside of class. Similar to Lee (1996), this study also found that
students preferred books to study grammar. Nevertheless, these two studies demonstrated
that learners were willing to work on grammar outside of class. As to why students
preferred textbooks, both studies offered the same reason, namely that the textbook
provided more systematic grammar explanations.
In summary, a computer-based grammar tutorial can be an effective and
appreciated learning tool for the learners because it can accommodate their individual
learning differences. These differences can stem from varied situations: one learner may
do better independently rather than in a group, while another may need a slower pace
or multiple repetitions to master the content. In any situation, it is reasonable to assume
that students who found themselves in need of more grammar explanations would
consider computer-based grammar tutorials at least as one of the alternatives.
Summary
This section demonstrated that computer-based tutorials are a sound choice for
teaching grammar for several reasons. First, the history of CALL projects and the
structured nature of computer-based applications fit well with presenting and evaluating
grammar, which is traditionally regarded as a system of language rules. Second, the
relocation of grammar instruction to computers could be welcomed by the teachers, who
would be relieved of dealing with an area of expertise that might be neither their strength
nor their preference. Finally, computer-based applications are beneficial for learners
because they acknowledge individual learning styles and afford opportunities to review
subject matter at one’s own convenience.
Tutorials and instructional design
A grammar tutorial, like any other instructional presentation, can be an efficient
and reliable learning tool if its content is presented in a comprehensible and systematic
way. But the solid content of a tutorial is only one step to its success. The design of an
instructional presentation is another powerful factor in assuring its quality because
“preattentive organization is a powerful determinant of what is actually understood in the
perceived message” (Winn, 1992, p. 56). Thus, the discussion in this section is an attempt
to answer the question: What are the guidelines for assuring high quality of a computer-based instructional tutorial? To answer this question, I start by reviewing the theoretical
underpinnings for the design considerations of a good instructional tutorial. Then I
present a number of design principles that became the guiding factors in creating the
tutorials for this study. Finally, I discuss whether enhancing a basic tutorial with
animations or a human teacher could have a beneficial effect on its overall quality and
learner satisfaction.
Cognitive theory of multimedia learning
To assure the quality of the computer-based instruments in this study, I based the
design considerations for the study tutorials on an established theory of instructional
design for learning with multimedia. Mayer’s cognitive theory of multimedia learning
seemed to suit this purpose the best by offering a comprehensive theoretical foundation
with applicable design guidelines. The following passage is a comprehensive summary of
Mayer’s theory and the theories it draws on as published in Mayer and Moreno (2002a):
From dual coding theory [emphasis added] we take the idea that
visual and verbal materials are processed in different processing
systems (Clark & Paivio, 1991; Paivio, 1986). The visual channel
takes input initially from the eyes and ultimately produces pictorial
representations; the verbal channel takes input initially from the
ears and ultimately produces verbal representations. From
cognitive load theory [emphasis added] we take the idea that the
processing capacities of visual and verbal working memories (or
information-processing channels) are severely limited (Baddeley,
1992; Chandler & Sweller, 1991; Sweller, 1999). In short,
presenting too many elements to be processed in visual or verbal
working [memory] (i.e., too many words or too complex a picture)
can lead to overload in which some of the elements are not
processed. From constructivist learning theory [emphasis added]
we take the idea that meaningful learning occurs when learners
actively select relevant information, organize it into coherent
representations, and integrate it with other knowledge (Mayer,
1996; Mayer, 1999; Wittrock, 1990). In short, cognitive
construction depends on the cognitive processing of the learner
during learning. (Mayer & Moreno, 2002a, pp. 110–111)
As follows from this summary, Mayer’s theory is based on three main
assumptions: (a) the verbal and visual information is processed in two different channels;
(b) instructional design should avoid creating a cognitive overload; and (c) learning is a
process of active construction of mental representations of knowledge.
Verbal and visual processing channels
The first assumption of separate channels for processing visual and verbal
information comes from Paivio’s theory of dual coding. In Mental Representations: A
Dual Coding Approach, Paivio (1986) described his view on how humans represent
information mentally. He proposed a theory of mental representations that became known
as the dual–coding approach. This approach was a response to other theories of mental
representations that emphasized their verbal origins. Paivio (1986) argued for the
importance of nonverbal processes in building mental representations. According to
Paivio’s ideas, two separate subsystems operate in cognition, “one specialized for the
representation and processing of information concerning nonverbal objects and events,
the other specialized for dealing with language” (p. 53). In Paivio’s theory (1986), these
two subsystems are functionally independent but interconnected during the processing of
information. In short, Paivio’s theory is an ontological attempt to describe how mental
representations occur in the mind based on both verbal and non-verbal stimuli.
Although Mayer’s cognitive theory of multimedia learning builds on Paivio’s
dual–coding theory, it should be seen as its extension to a more practical level, rather
than a direct application (Mayer & Sims, 1994). The main principle of the cognitive
theory of multimedia learning that follows from Paivio’s dual–coding theory is that
learning is better supported by words and pictures than by words alone or by pictures
alone, and that multimedia presentations can aid learners in building visual and verbal
representations as well as in making connections between them (Mayer, 1999, 2005a).
Figure 2 illustrates Mayer’s theory, according to which incoming stimuli in the
form of words and images are first selected from the input to be processed in separate
channels in the learner’s mind. The initially selected verbal and visual stimuli are held in
the two types of corresponding working memories to be later organized in separate
mental models. At the final stage of processing, the verbal and visual models are
connected to each other and integrated with prior knowledge to create a mental
representation of the knowledge item. Mayer (2005a) hypothesized that designing
multimedia instruction this way takes full advantage of the learner's
information-processing capacity.
Figure 2. Mayer's cognitive theory of multimedia learning
Source: Mayer, R. E. (1999). Multimedia aids to problem solving transfer. International Journal of Educational Research, 31, 611–623.
Cognitive load theory
Cognitive load theory is "an instructional theory based on our knowledge of human cognitive architecture that specifically addresses the limitations of working memory" (Sweller, 2005, p. 28). This theory provides guidelines for aligning the structure of presented information with the purpose of relieving the cognitive load (Wouters, Paas, & van Merrienboer, 2008).
The main assumption of cognitive load theory is that our working memory is limited both in terms of the elements it can process at one time and in terms of the duration of processing of novel information. Sweller (2005) described the operation of the memory system, in which long–term memory helps eliminate some of the working memory limitations by storing familiar organized information and serving as a central executive. The information in long–term memory is stored in schemas, which are
“cognitive constructs that allow multiple elements of information to be categorized as a
single element” (Sweller, 2005, p. 21). The organization of information in schemas is
what reduces the burden on working memory (Kalyuga et al., 1999).
The key role of long–term memory for schema storage is also important for the
process of learning, which Sweller (2005) defines as “an alteration in the long–term
memory” (p. 20). Learning, or understanding the novel information, creates new schemas
or causes changes in the construction of existing schemas. Once schemas are
acquired and reinforced by substantial practice, activity connected to the schemas can be
processed automatically without conscious control (Sweller, 2005). Although changes in
the schemas are likely to occur with any type of instruction, Sweller (2005) argued for
direct instructional guidance, rather than inquiry-based learning. He suggested that it was
not necessary for schemas to be generated entirely by an individual; instead schemas
from others could be used to organize information efficiently. Sweller (2005) suggested
that inquiry-based learning resulted in a slow and ineffective knowledge acquisition due
to the “difficult, time-consuming process of almost limitless random generation followed
by testing” (p. 26). Nevertheless, for direct instruction to be optimal it has to aim for
accumulation of knowledge through efficient schema construction.
In short, cognitive load theory is part of the theoretical foundation of the cognitive
theory of multimedia learning because it is concerned with optimization of instructional
design based on knowledge of how memory operates. This theory favors direct
instructional intervention into the learning process with the goal of efficiently organizing
the knowledge in our mind. Thus, this premise of cognitive load theory seems congruent
with the idea of the explicit deductive instruction that my study supports.
Constructivist learning theory
Constructivist learning theory is another idea that contributed to Mayer’s
cognitive theory of multimedia learning. Mayer (1999) argued that the goal of his
cognitive theory of multimedia learning is to promote a constructivist type of learning,
which he defined as learning that “occurs when learners seek to make sense of the
presented material by constructing a coherent mental representation” (p. 612).
Constructivist learning is often associated with collaborative learning; Mayer’s position,
however, leans away from social constructivism, which often presupposes active
collaboration to promote learning, toward a purely cognitive constructivism, which
suggests that constructivist learning occurs whenever learners engage in cognitive
processing of information (Mayer & Moreno, 2002a). Thus, one issue of interest in
Mayer’s research is whether constructivist learning can be primed by passive media,
which includes any type of instructional presentation and does not involve collaborative
activity. As another theoretical foundation, then, the idea of constructivist learning adds
to the cognitive orientation of Mayer’s theory of multimedia learning. In contrast to more
common views on constructivism as a theory of social collaboration, Mayer underscores
the role of instructional presentations in promoting the construction of knowledge and,
hence, learning.
In short, I chose Mayer’s cognitive theory of multimedia learning as the main
framework for decisions on instructional design because it draws on a variety of theories to
elaborate a set of applicable guidelines. The next section reviews the guidelines that
became the guiding principles for creating the tutorials in this study.
Applicable design considerations
This section refers to Mayer's theory and other related theories to present
what design considerations the tutorials in this study are built upon. One of the main
considerations for achieving a balanced design of instructional materials is to prevent
cognitive overload (Mayer, 1999; Mayer & Moreno, 2002a; Sweller, 2005; Wouters et
al., 2008). Mayer and his colleagues carried out various studies within the cognitive
theory of multimedia learning to elaborate a set of principles and guidelines to minimize
the risks of potentially overloading learners’ cognitive systems.
Although good instructional design should aim at reducing cognitive load, the
cognitive load itself is not always negative. In fact, there are three categories of cognitive
load: extraneous, intrinsic, and germane (Sweller, 2005). The extraneous cognitive load is
caused by the manner in which information is presented. This type of cognitive load is
the one that good instructional design should set out to minimize. Increased
extraneous load results from ignoring working memory capacity and failing to focus
the memory resources on schema construction. The intrinsic cognitive load is caused by
the natural complexity of processed information, such as a number of elements and their
interconnectedness. This type of load can be regarded “as a necessary base load”
(Wouters et al., 2008, p. 648) because it is inherent in the information structure itself. The
intrinsic cognitive load can also be a target of instructional design but to a lesser extent
than the extraneous cognitive load. Finally, the germane cognitive load is caused by
effortful learning, resulting in schema construction and automation. This type of
cognitive load is “effective” (Wouters et al., 2008) because only active processing of
information on the part of the learner yields germane cognitive load. This cognitive load
cannot be manipulated directly by the instructional design. But because the three
cognitive loads are additive (Sweller, 2005), the germane cognitive load can be increased
by reducing the extraneous and intrinsic loads.
Thus, the purpose of good instructional design is to promote active processing by
increasing the germane cognitive load that leads to schema construction. This purpose
can be achieved by reducing the negative effects of the extraneous and intrinsic cognitive
loads. Below, I present an overview of design considerations that serve this purpose. The
design considerations are broadly divided into five major categories: modality,
relevance, cueing, layout, and use of social cues.
Modality
The assumptions for design considerations of modality in the instructional
presentation are two–fold: first, a multimodal presentation is better than a unimodal
presentation; and, second, the same information should be presented in different modes.
The first assumption about the superiority of the multimodal presentation over a
unimodal one is based on two principles of cognitive theory of multimedia learning, the
multimedia principle and the modality principle. The multimedia principle, which is
supported by studies within the cognitive theory of multimedia learning, suggests that
learners understand explanations better when the instructional message is presented with
verbal and corresponding visual information rather than via each mode alone (Fletcher &
Tobias, 2005; Mayer, 1999; Mayer, 2005a; Mayer & Moreno, 2002a, 2002b). Further, the
modality principle states that presenting some information in different modes (visual vs.
auditory) can expand working memory capacity by spreading the processing load of
working memory between the visual and the auditory elements (Ginns, 2005; Low &
Sweller, 2005; Wouters et al., 2008).
The second design assumption with regard to modality follows the premises of the
multimedia and modality principles in suggesting that an instructional presentation
should present information in different modes to lower the cognitive load, but adds the
idea that the same information should be presented in different modes. This assumption is
based on the principles of split attention and contiguity. The split–attention principle
states that instructional design should avoid formats that require that learners split their
attention between multiple sources of information (Ayres & Sweller, 2005; Kalyuga,
Chandler & Sweller, 1999; Mayer, 1999). For instance, when the same idea is presented
by a printed text and a graphic, learners have to split their attention between two visual
elements to understand the information. In this case, the integration process overburdens
the limited working memory capacity and inhibits learning (Kalyuga et al., 1999).
According to the split–attention principle, presenting information in different modes (e.g.,
by replacing text with an auditory narration) imposes less cognitive load on memory. To
enable more comfortable perception, however, the contiguity principle also needs to
be considered. This principle states that two different modes should present the same
information simultaneously, rather than separately (Mayer, 1999; Mayer & Anderson,
1992; Mayer & Moreno, 1998, 2002b; Mayer & Sims, 1994; Tabbers, Martens, &
Merrienboer, 2001).
In practical terms, this means that less extraneous cognitive load is imposed on
memory when the information is presented simultaneously in different modes
(e.g., when narration in the auditory mode accompanies the corresponding visual
representation in the form of a diagram or a picture).
Relevance
The next design consideration that aims at lowering both intrinsic and extraneous
cognitive load is that all elements in content and manner of presentation should be
relevant. This design consideration follows from two principles in Mayer’s theory, the
coherence principle and the redundancy principle. The coherence principle states that
extraneous information should be excluded rather than included (Mayer, 1999, 2005b;
Mayer & Moreno, 1998, 2002a, 2002b). Extraneous information (e.g., interesting yet
irrelevant sentences, sounds, or pictures) can distract learners from deep processing of the
subject matter and decrease resources necessary for creating new schemas (Kalyuga et
al., 1999). If applied to the content of the presentation, this principle suggests that a
concise summary that highlights relevant elements is superior to a longer version of the
same content (Mayer & Moreno, 1998).
The redundancy principle is similar to the coherence principle in that it suggests
that including additional information in the presentation may be redundant (Mayer &
Moreno, 2002b). Here, however, the additional information may be redundant only in
terms of the manner of presentation, rather than being extraneous to the content. An
example would be a presentation in which a graphic is described by both an oral narration
and a written text. Although it may seem like a good idea because learners can use the
information source of their choice, research demonstrates that such redundancy inhibits
learning because “a duplication of information using different modes of presentation
increases the risk of overloading working memory capacity” (Kalyuga et al., 1999, p.
356).
The arguments presented above seem to warrant the conclusion that the content of
an instructional presentation should focus on the relevant information pertinent to the
subject matter. Furthermore, if multiple sources of media are used, they should
supplement rather than duplicate each other.
Cueing
Cueing is a process of attracting learners’ attention to some information in the
content by highlighting, changing color, pointers (arrows or exclamation marks), sounds,
or animation (Wouters et al., 2008). In Mayer’s theory of cognitive multimedia learning,
the idea of cueing is consistent with the principle of signaling. This principle states that
“people learn more deeply from a multimedia presentation when cues are added that
highlight the organization of the essential material" (Mayer, 2005b, p. 183). Thus, the
theory suggests that signaling important information with cues (color, bolding,
highlighting, numbering, etc.) helps lower cognitive load, in particular extraneous
cognitive load (Mayer, 2005b; Wouters et al., 2008). Mayer (2005b) reviewed research
on signaling carried out within the framework of the cognitive theory of multimedia
learning and concluded that the experimental support for this principle is promising
because the studies he reviewed (e.g., Harp & Mayer, 1998; Mautone & Mayer, 2001)
found better learning in the cued conditions. Within the
cognitive load theory, Kalyuga et al. (1999) investigated the technique of color-coding in
instructional materials and found that color-enhanced presentations produced better
learning outcomes than materials without color coding. Thus, the available research on
cueing suggests that it can decrease negative split–attention effects by reducing the search
time for relevant information.
Cueing increases the effectiveness of multimedia by drawing learners’ attention to
the most important information in the content and, hence, reducing the mental effort
spent on processing (Roberts, 2009). However, Wouters et al. (2008) suggested that for
cueing to be beneficial it should not be used in a random fashion or in excessive
quantities because that could potentially have a negative effect on the learning process.
More guidelines come from the review of the research on color use done by Schwier and
Misanchuk (1995), who offer specific advice on successful color combinations. They also
suggest that materials presented in color are generally processed faster than the same
material in black and white, and that efficient color coding depends on its consistent use
throughout the instructional presentation. Further, Winn (1992) provided a
thorough discussion of techniques to improve perceptual characteristics of an
instructional presentation. In this discussion, Winn (1992) mentioned that while both
style and color could be good cueing tools, just one variation of one factor was sufficient
to guide learners’ attention.
The main practical conclusion that can be drawn from the cueing research is that
highlighting important information in some way helps to guide learners’ attention
because it decreases both extraneous and intrinsic cognitive load and, therefore, helps
maximize learning. The application of cueing techniques, however, should be consistent
and moderate to be effective.
Layout
The suggestions for good layout of information in instructional presentation come
from perception principles and from the segmentation of the presentation. Thus, several
perception principles were adopted for
the production of the tutorials from Winn (1992), Schwier and Misanchuk (1995), and
Keller and Burkman (1992): Figure–ground distinctions should be as clear as possible;
the sequence of attention flow is determined by the sequence in which information is
presented; less text should be displayed on a computer screen than on a typical page;
headings should be used to mark the segments of the content; having dark font on white
background is the optimal choice for text; and, finally, colors used in a program should
be compatible.
Another layout consideration is that of segmentation, which refers to both the
content and the manner of presentation. Mayer (1999) suggested the chunking principle,
which states that visual and verbal information should be presented in short rather than in
long segments of a presentation. Similarly, with regard to content, Mayer (2005c)
proposed a segmentation principle—that dividing an instructional presentation into
segments, each addressing part of the instructional message, proved more beneficial than
a continuous presentation. The segmentation of instructional information into chunks and
sequencing the chunks based on the difficulty of material and the relationships between
the concepts has often been suggested as a powerful technique for making the text appear
easier to understand and, hence, making learning easier (Keller & Burkman, 1992;
Morrison, Ross, & Kemp, 2007; Winn, 1992).
In short, design considerations for the overall layout suggest that an instructional
presentation should be segmented to make both perception and
understanding easier. The layout should also use spacious areas with clear contrastive
elements to present the content.
Use of social cues
Another practical suggestion for enhancing the instructional presentation came
from Mayer (2005d), who argued that the use of social cues could prime social responses
in learners and, hence, lead to deeper cognitive processing. One of the principles of
design based on social cues is the personalization principle (Mayer, 2005d). According to
this principle, “people learn more deeply when the words in a multimedia presentation
are in conversational style rather than formal style” (Mayer, 2005d, p. 201). The main
assumption of this principle is that once learners’ interest in the presentation is increased
by using the self as a reference point, it would encourage learners to process the
information more deeply. Moreno and Mayer (2000) and Mayer, Fennell, Farmer and
Campbell (2004) investigated experimentally whether using a conversational style
instead of a formal style in multimedia presentations could be considered an efficient
technique to increase learner interest and learning. In both experiments, the researchers
compared personalized narration with a more formal narration, and they concluded that
personalized narration encouraged deeper processing and engagement with the material,
which resulted in better outcomes on some types of tasks.
Another principle of priming learning by using social cues is the voice principle,
which states that “people learn more deeply when the words in a multimedia message are
spoken in a standard-accented human voice rather than in a machine voice or foreign-
accented human voice” (Mayer, 2005d, p. 201). By suggesting this principle, Mayer
assumed that a human voice would convey to the learners that someone was speaking
directly to them. Mayer (2005d) reviewed several studies that consistently favored the
application of the voice principle to instructional messages. The success of these studies
in proving that human voices were superior in perceptual quality to machine voices
is in line with other research on the effect of the human voice. Although we are surrounded
by various sounds all the time, our senses attend to them preferentially, and humans
attend to vocal sounds because they directly reflect communicative intent (Schirmer,
Simpson, & Escoffier, 2007). Speech is also an important carrier of identity because
specific features are "ingrained" in the acoustic parameters of our voices
(Sokhi, Hunter, Wilkinson, & Woodruff, 2005, p. 577).
In short, an instructional presentation should attempt to influence social responses
in the learner by using a conversational style, rather than a scientific one, and human-voiced narrations, rather than a machine voice. These techniques of enhancing social cues
create a feeling of a more personable conversation and encourage deeper processing in
learners.
To summarize, the success of any computer-based tutorial depends not only on
the quality of the content, but also on the efficient design of the instructional presentation.
When designing a presentation, a set of factors should be considered that aim at reducing
the cognitive load caused by the manner of presentation. Thus, the layout of the
presentation should be clear-cut in terms of both content and appearance. A presentation
should include a variety of media to help mode-specific processing of visual and auditory
information. The selection of appropriate modes should be based on the considerations of
supplementing or enhancing information, rather than duplicating it. For the main content
of the presentation to be easier to follow, all extraneous information should be avoided
and important information should be highlighted in some way. Furthermore, the narration
should be presented in a conversational manner and narrated by a human voice to create a
feeling of social conversation in the learners. Considering at least these factors
can help create an efficiently designed presentation that allows learners to
concentrate on the content.
Ways of enhancing an instructional presentation
In the previous section, I presented guidelines for creating an instructional
presentation of good quality. I assume that if both the content of a presentation and its
design follow some reliable guidelines that assure their quality, the final product will be
an efficient computer-based grammar tutorial. But sometimes good is not enough, so
the question becomes whether a computer-based grammar tutorial can be enhanced in some
way. This study considers two possible answers:
(a) enhancing a tutorial by adding animations; and (b) adding a human teacher persona.
This section examines the current research on these types of enhancements and their
effect on learning.
Adding animations
Illustrations in the form of pictures, diagrams, graphics, or photos have been used
to accompany written text for centuries. The capabilities of modern technologies and the
development of software that simplified the production of animations added them to the
types of illustration available in CALL. There are different definitions of animations. For
instance, Betrancourt and Tversky (2000) offered the following definition: “computer
animation refers to any application which generates a series of frames, so that each frame
appears as an alteration of the previous one, and where the sequence of frames is
determined either by the designer or the user” (p. 313). This definition seems too broad
because it suggests that any controlled series of frames could be called an animation.
Therefore, for my study, I adopted a definition that describes the animation as “a
simulated motion picture depicting movement of drawn (or simulated) objects" (Mayer
& Moreno, 2002b, p. 88). This definition is more appealing because it highlights the
dynamic nature of animations, and representation of dynamic movement has often
been mentioned as one of the main characteristics of animations (Betrancourt, 2005;
Lowe, 2003, 2004; Mayer & Moreno, 2002b; Schnotz, 2002; Wouters et al., 2008).
According to Lowe (2003, 2004), this dynamic characteristic can be of three types: (a)
form change or transformation that involves alterations in size, shape, color, or texture;
(b) position change or translation that involves the movement of elements within the
display; and (c) inclusion change or transition that involves the appearance or
disappearance of elements.
In general, animations are considered beneficial for learning because they support
the visualization of dynamic processes and provide external support for mental
simulations, thus making cognitive processing easier (Betrancourt, 2005; Lewalter, 2003;
Park & Hopkins, 1993; Rieber, 1990; Schnotz, 2002; Wouters et al., 2008). Hegarty
(2004) defined an external visualization as “an artifact printed on paper or shown on a
computer monitor that can be viewed by an individual” and an internal visualization as “a
representation in the mind of an individual” (p. 1). Based on his research on the relation
between external and internal visualization, Hegarty (2004) concluded that exposing
learners to external visualizations enhanced their internal visualization abilities and
decreased the load on their working memory because learners were relieved from
maintaining complex internal visualizations during problem solving. In other words,
providing an external visualization has a beneficial effect on learning. Although in
Hegarty’s definitions an external visualization included any type of visual display,
whether static or dynamic, it is possible to extend his finding to suggest that more
powerful external visualizations could have more beneficial effects on learning.
Animations have traditionally been considered more powerful external visualizations
compared to static displays because they illustrate phenomena not easily observable, such
as change or transformation of physical or imaginary characteristics (Betrancourt, 2005;
Schnotz, 2002; Wouters et al., 2008).
Although there seems to be strong support for the beneficial effects of animations
on learning, experimental studies vary in their results when comparing the effects on
learning from dynamic and static visuals. Thus, in a series of experiments with
meteorological animations, Lowe (1999, 2003, 2004) found that animations did not
promote deep processing and that learners tended to be guided by perceptually salient
rather than thematically relevant information. Schnotz (2002) examined the difference in effects of animated
and static pictures on the subject of time zones and circumnavigation, and he found that
the beneficial effect of animations depended on the level of the learners’ prior
knowledge. Rieber, Tseng, Tribble, and Chu (1996) found that feedback with animations
produced significantly higher learning outcomes than traditional verbal feedback.
Studies on animations in grammar instruction either found no effect on learning
outcomes (Caplan, 2002) or observed a positive effect only under specific conditions
(Roche & Scheller, 2008). In other words, at this point no definite conclusions with
regard to the role of animations can be drawn, and more research is needed.
The disconnect between theory, which suggests that animations should be more
effective than static visuals, and experimental studies, which fail to consistently
demonstrate such an effect, prompted educators to look for problems with animations.
They identified several problematic issues with regard to the effect of animations on
learning. First, it is not clear whether animations decrease or increase cognitive load. On
the one hand, animations are considered to ease cognitive load because learners are
presented with the ready-made concepts, thus making important information “directly
perceivable by learners” (Betrancourt, 2005, p. 289) instead of requiring them to infer it
from static graphics. In a similar vein, Lewalter (2003) found that learners needed to
work with static displays more intensively than with dynamic displays to support
comprehension. In contrast, animations seem to impose a higher cognitive load compared
to static graphics, specifically due to the aspect of change that is inherent in animations.
Thus, Lowe (1999, 2004) argued that animations increase the cognitive load compared to
static alternatives. Lowe explained that static and dynamic visuals involve the same
processing demands on the visuospatial level. Yet, compared to static visuals, animations
have another level—temporal—because information in animations changes over time.
This temporal level is what may impose additional, and potentially disruptive, processing
requirements. Similarly, Schnotz (2002) argued that if learners were capable of
performing mental simulations by themselves, then providing them with additional
external support in the form of animations might lead to shallow processing. He called
this the inhibiting function of animation, whereby the readily available mental model provided by
animations prevented learners from creating their own mental model of the phenomenon.
The second problematic issue with animations is that information in animation
changes over time, which in the literature is referred to as their “transitory” (Lewalter,
2003, p. 178), “transient” (Roberts, 2009, p. 15), and “fleeting” (Betrancourt, 2005, p.
294; Tversky et al., 2002, p. 256) nature. On the one hand, the transitory nature of
animations can present a great advantage for information processing because it allows
learners to view the process of development of some educational concept instead of
reconstructing the flow of this development from static displays. On the other hand, this
transient nature can have a negative effect on learning. For instance, if learners missed or
misunderstood a part of an animation, it could affect the processing of the information
that followed this part. Also, viewing an animation could fail to provide enough time to
detect problems in comprehension. Thus, Lewalter (2003) found that learners who
created a mental image of a dynamic concept by reconstructing it from static images
could more easily recognize problems in comprehension, because they controlled the
speed of the learning process. Similarly, Tversky et al. (2002) argued that the dynamic
nature of animations made it hard to perceive the motion accurately and that presenting
discrete steps of a process as a continuous animation might prevent the learners from
distinguishing between these discrete steps.
The third issue with animations concerns the methodological settings of the
studies. Tversky, Morrison, and Betrancourt (2002) reviewed a number of studies on
animations from various disciplines, such as physics, mechanics, and math, and they
concluded that the lack of comparability between static and animated modes precluded
most experiments from demonstrating conclusive results on the benefits of animations.
Thus, they established three main problems with the comparison of static and animated
graphics. First, the content of the animated and static graphics often is not equivalent.
Second, the information in the animations is often conceptually richer and finer-grained than in
the static graphics because animations present both macrosteps and microsteps of some
processes. Third, in some studies the effect of animations is confounded by the effect
from interactivity, a feature that is usually not available in static displays. Thus, Tversky
et al. (2002) concluded that “when examined carefully, then, many of the so-called
successful applications of animations turn out to be a consequence of a superior
visualization for the animated than the static case, or of superior study procedures” (p.
254).
At this point, it seems that the beneficial effect of animations has not been
convincingly proven, owing to problematic issues such as increased cognitive load
and misunderstandings caused by the changing nature of the concepts presented in
animations. Although dynamic visualizations in general may not be more effective than
their static counterparts for cognitive processing, some researchers expect that this will
change when new experiments optimally employ the learners’ cognitive capacity to
process animations and enable learners to control the pace of the animation.
Adding a recording of a real teacher
The discussion in the previous section revolved around issues of enhancing an
instructional presentation with animations. This section deals with another type of
enhancement, namely adding a recording of a human teacher to the instructional
presentation.
In the current research, there is a lack of studies that directly examine the benefits
of adding a recording of a human teacher to an instructional presentation. Yet despite this
lack of empirical grounding, many grammar tutorials that are available online include a real
teacher. One possible explanation could be the learners’ belief in the role that the teacher
plays in language instruction. Without a doubt, students expect teachers to teach (Baines
& Stanley, 2000). This preference for teacher-centered classrooms is also evident from
the studies on learner autonomy in language learning. For instance, Cotterall (1999)
conducted a study on how learners’ beliefs affect their willingness to be autonomous
language learners. She found that learners are willing to accept shared responsibility
for learning. Yet, at the same time, the majority of the study participants reported that they
expected the teacher to help them learn effectively. And in a study on autonomous
language learning with CD–ROMs, Felix (1997) found that although most students
enjoyed working with the program, they preferred the teacher-directed environment and
viewed the computer-based program only as a valuable supplement. Certainly, these
studies do not provide a solid foundation for stating that because some students are
“teacher-dependent” (Jones, 2001, p. 363), it would be prudent to enhance a computer-based instructional presentation with a real teacher persona. However, these studies
demonstrate that the teacher is closely connected to the learners’ perceptions of how
language learning should proceed.
The beneficial effect of adding a human character to an instructional presentation
is supported by social agency theory. According to Mayer (2005a), the availability of
social cues in an instructional presentation can prime a social response in learners that
will lead to deeper cognitive processing and better learning outcomes. Similarly, Wouters
et al. (2008) noted that social agency theory is based on the “media equation hypothesis
(Reeves & Nass, 1998), which claims that people view interaction with media, such as
computers and software, as interaction with humans and that social rules that apply for
human–to–human interaction therefore apply for human–to–media interaction” (p. 653).
Although few studies have investigated the effects of enhancing computer-based
programs with a human teacher persona, there is research that has examined
enhancements by means of human-like characters, such as animated pedagogical agents
and virtual humans. These studies refer to social agency theory as the theoretical basis for
predicting the positive effects of human-like characters. For instance, Moreno, Mayer,
Spires, and Lester (2001) conducted a series of experiments on the effects of animated
agents and a recording of a human. Their prediction of the positive effects of learning
with humans and human-like characters was based on the belief that these agents
personalize the task and, therefore, create a personal connection between the learner and
the task, thus, priming its social interpretation. In a similar study, Lester, Barlow,
Converse, Stone, Kahler, and Bhogal (1997) expected strong effects of animated agents
on learners’ perception of the learning experience because of their belief in the robustness
of human innate responses to psychosocial stimuli. In their research on the effects of
virtual humans, Park and Catrambone (2007) based their positive predictions for
“anthropomorphizing an interface” (p. 1054) with human-like characters on drive theory
by Zajonc (1965), arguing that the presence of a virtual human would increase the
participants’ drive level, which, in turn, would enhance performance on the study’s tasks.
Despite their predictions and theoretical argumentation, the studies mentioned above did
not find clear-cut support for the positive effects of human-like characters. Thus, Moreno
et al. (2001) found that the visual presence of an animated agent either did not produce
significant effects on learning outcome or produced them only partially. Similarly, they
found no significant effect for adding a recording of a real human to the narration.
Further, Lester et al. (1997) found significant effects only for animated agents with
higher levels of interactivity. Finally, Park and Catrambone (2007) found a positive effect
of virtual humans on easy tasks only.
The fact that the studies mentioned above could only demonstrate partial effects
of human-like teacher characters on learning has several explanations. Thus, Moreno et
al. (2001) referred to previous research to attribute the possible lack of positive effects
of human-like agents to the interference hypothesis, which argues that adding
entertaining illustrations can hurt learners’ retention of the core material. Similarly,
Moreno (2005) reviewed the research on animated agents and explained the lack of
results by suggesting that the images of the human-like characters distract learners’
attention from learning because they serve as a seductive detail. Thus, adding an
animated agent violates the coherence principle, which states that extraneous information
should be excluded for an instructional presentation to be effective.
To conclude, the available research on the merits of adding a recording of a real
teacher to enhance an instructional presentation does not provide consistent grounds for
predicting a success for such an enhancement. The positive predictions come from social
agency theory, which argues that priming social responses by using human-like
characters enhances motivation to study; and, therefore, improves learning outcomes. At
the same time, however, adding a human-like character serves as a seductive detail that
distracts learners’ attention, and, therefore, decreases learning outcomes. Thus, while
some learners still prefer teacher-directed language classrooms, the question of whether
adding a human or human-like character in the teacher role to a computer program could
have a positive effect on learning is open to investigation.
Summary
The discussion in this section demonstrated that the field of instructional design
provides many guidelines on how to create an efficient multimedia presentation. The
main premise of creating a good–quality instructional tutorial is to minimize unnecessary
cognitive load to maximize the processing of important information. As to the question of
whether the effect of an instructional presentation could be enhanced by using animations
or a human teacher persona, the jury is still out. Both types of enhancements considered
here have advantages and disadvantages, and their interplay could either aid or hurt the
process of learning.
Previous research
The literature review for this study covered several areas of research in the search
for a theoretical basis necessary to demonstrate that computer-based grammar tutorials
are valid learning tools and to find practical guidelines to assure quality both in terms of
grammar-teaching methodology and instructional design. This section deals with the
previous research on computer-based tutorials, which sets the stage for the current study.
The first study (Moreno et al., 2001) has been mentioned previously with regard to the
research on animated agents. One of their experiments dealt with the recording of a real
human as part of a multimedia presentation on a biology topic. The next two studies
(Caplan, 2002; Roche & Scheller, 2008) compared the effect of an animated presentation
and a static presentation on the acquisition of German grammar. Finally, the last study
(Kolesnikova, 2011) presents a description of the pilot study for the present investigation.
All of these studies, specifically their methodologies, results, and limitations, contributed
to the theoretical and practical considerations for the current study.
Moreno et al. (2001)
Although the study by Moreno et al. (2001) focused primarily on the effects of
using animated pedagogical agents in a multimedia presentation, the researchers replaced
the cartoon-like character with a video of a real human in one of the experiments to
investigate whether a human agent could provide additional visual cues to promote
deeper learning. Moreno and colleagues referred to psychological research
(Ellsworth, 1975; Kleinke & Pohlen, 1971) to suggest that direct eye contact has strong
attention-getting attributes. Additionally, they referred to Velthuijsen, Hooijkaas, and
Koomen (1987), who argued that in videoconferencing adults prefer to communicate via
a video image that has a higher amount of eye contact. As a result of the experiment,
however, Moreno et al. (2001) found no significant differences between the group that
had a video of a human and the one with only a narration. This outcome disconfirmed the
hypothesis that an expressive human face would prompt learning mechanisms similar to
face–to–face communication. Nevertheless, Moreno et al. (2001) concluded from this
result that although the visual presence of a human did not promote deeper learning, the
positive finding was that the image did not distract learners from processing the relevant
material and, therefore, did not interfere with learning.
Caplan (2002)
In her dissertation research, Caplan (2002) investigated the effect of animations
on the acquisition of modal verbs in German. The main theoretical considerations for
Caplan’s study were based on Paivio’s dual–coding theory, Mayer’s cognitive theory of
multimedia learning, and the available animation research. Caplan (2002) chose German
modal verbs (their meaning, conjugation type, and role in sentence-level syntax) as the
topic of instruction, arguing that their visuospatial characteristics could be efficiently
represented by dynamic visuals, such as animations. To test her hypothesis about the
superiority of animations for learning grammatical concepts, Caplan (2002) developed
two instructional presentations, one with a static text and the other with an animated text.
The data collection procedures spanned two days. On the first day, the subjects received
information about the study and took a pretest; on the second day, they watched the
presentations on a big screen and filled out a posttest immediately after the treatment.
The results did not demonstrate a significant difference between the two groups due to
a ceiling effect: both groups achieved high scores on the posttest. Significant
differences were found only on the satisfaction questionnaire, where the animated
presentation elicited more positive comments than the static one. Based on the study
findings, Caplan (2002) concluded that there was not enough evidence to suggest that
animation was beneficial for learning; however, it was evident that it was not detrimental.
Still, the lack of conclusive results in Caplan’s (2002) study might be attributed
to a number of limitations. First, the sample of participants was too small (41 subjects
in total) to provide adequate statistical power. Second, the focus on modal verbs
presented a content area that was too limited to enable the researcher to explore the effect
of animations on more challenging tasks.
Roche and Scheller (2008)
The study by Roche and Scheller (2008) focused on various modes of grammar
presentation and their effect on learning. Similar to Caplan (2002), this study included
static and animated displays for teaching two–way prepositions in German. However,
Roche and Scheller (2008) added a comparison of traditional and cognitive/functional
approaches for grammar explanations. The traditional approach states when the
prepositions are used and with what case, whereas the cognitive/functional approach
provides a description of the conceptual marking in the context of the utterance that
signals the need for one preposition or the other. Thus, in this study there were four
treatment conditions that combined two types of explanations (traditional vs.
cognitive/functional) with two types of visual displays (static vs. animated). The data
collection proceeded in two sessions. During the first session, the participants completed
the pretest, worked with the assigned treatment program, and took the immediate posttest.
A week later, the participants completed a delayed posttest. Due to a small number of
subjects, only a non-parametric comparison was possible for the data in this study. The
findings demonstrated that only the functional/animated group significantly outperformed
the other groups on both posttests. Roche and Scheller (2008) concluded that only animations
constructed according to a specific set of didactic explanations that follow the
cognitive/functional approach produce positive learning effects. Although the study
achieved some significant findings, it faced a number of limitations that might have
influenced the results. Similar to Caplan (2002), the study had a small number of subjects
(98 participants across the four experimental groups). In contrast to Caplan
(2002), in which high posttest scores created a ceiling effect, the participants in this study
produced a high number of errors on the tests. The researchers suggested that the
participants tended to make errors when they failed to understand the context of the
test sentences.
Kolesnikova (2011)
The study described below was carried out as a pilot study for the investigation
reported in this dissertation. It should be mentioned that the purpose and the design of the
pilot study differ from those of the main study, and the instruments used in the pilot study
were completely redesigned for the main study.
The pilot study reported in Kolesnikova (2011) aimed at investigating the aspects
of computer-based grammar instruction that could prove at least as effective as
teacher-delivered instruction in the face–to–face classroom. Thus, the main prediction of the
study was that computer-based grammar instruction could be at least as effective as
traditional face–to–face instruction. The additional prediction of the study was that some
modes of computer-based tutorials, such as animated multimedia presentations, could be
more beneficial than other types of grammar instruction.
For the pilot study, I developed identical instructional modules for three modes of
delivery: (a) by a teacher in a regular classroom, (b) by means of an animated computer-based tutorial, and (c) by means of a static computer-based tutorial. The instructional
modules were developed for three target structures in German: regular verb conjugation,
the accusative case, and separable–prefix verbs. The instructional modules were based on
identical wording, examples, and practice exercises. The practice exercises in the teacher
module were carried out as pair work and were checked together as a group, whereas the
practice in both computer-based modules involved type-in tasks for the verb conjugation
and the accusative case, and drag–and–drop tasks for the separable–prefix verbs.
The participants in the study were elementary German students who remained in
intact classes to receive the study treatment in the form of different instructional modules
or to act as a control group. Because the participants had no prior knowledge of the target
structures, pretesting was not included in the data collection procedures. According to the initial
design of the study, the three target structures were supposed to be introduced to the
groups of participants by means of three different modes, namely teacher-delivered,
computer-based static, and computer-based animated. However, the initial design
underwent some changes after the start of the study procedures for the following
reasons: (a) One of the teachers in the teacher-delivered condition did not follow the
instructions, and (b) several participants displayed high levels of frustration with the
difficulty of the materials. As a result, all modules for one of the target structures (the
accusative case) and the static module for another target structure (verb conjugation)
were eliminated. Thus, the final design included the three experimental modes for the
separable-prefix verbs (static, animated, and teacher-delivered) and only two
experimental modes for the verb conjugation (animated and teacher-delivered). The data
from a total of 65 participants, including those in the control group, were included in the
analysis.
The statistical analysis of the data demonstrated no conclusive results with regard
to the effect of different modes of presentation on learning outcomes. Thus, the ANOVA
demonstrated no significant differences between the groups. However, when I used t-tests
to compare the groups, no significant differences were found between the static and
animated condition (p=.100), but a significant difference in learning outcomes was found
between the animated and teacher-delivered condition for both verb conjugation (p=.013)
and separable-prefix verbs (p=.043). Further, the ANOVA on both experimental and
control groups demonstrated that none of the treatment groups outperformed the control
groups. It should be noted, though, that the teachers in the control groups provided very
explicit and prolonged explanations of the target structures to the students.
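The omnibus and pairwise comparisons reported above can be illustrated with a short sketch. This is not the pilot study's analysis code, and the scores below are invented; the sketch only shows the arithmetic behind a one-way ANOVA F statistic and a pooled-variance independent-samples t statistic, using the Python standard library.

```python
# Illustrative sketch only: invented scores, not the pilot-study data.
from statistics import mean

def f_oneway(*groups):
    """One-way ANOVA F statistic: ratio of between-group to within-group variance."""
    grand = mean([x for g in groups for x in g])
    k = len(groups)                      # number of conditions
    n = sum(len(g) for g in groups)      # total number of subjects
    ss_between = sum(len(g) * (mean(g) - grand) ** 2 for g in groups)
    ss_within = sum((x - mean(g)) ** 2 for g in groups for x in g)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

def t_ind(a, b):
    """Pooled-variance independent-samples t statistic for two conditions."""
    na, nb = len(a), len(b)
    var_a = sum((x - mean(a)) ** 2 for x in a) / (na - 1)
    var_b = sum((x - mean(b)) ** 2 for x in b) / (nb - 1)
    pooled = ((na - 1) * var_a + (nb - 1) * var_b) / (na + nb - 2)
    return (mean(a) - mean(b)) / (pooled * (1 / na + 1 / nb)) ** 0.5

# Hypothetical posttest scores for the three conditions
static, animated, teacher = [12, 14, 13, 15], [16, 17, 15, 18], [13, 14, 12, 15]
print(f_oneway(static, animated, teacher), t_ind(animated, teacher))
```

In practice, the resulting F and t statistics are referred to their respective distributions to obtain p-values such as those reported above, which is done with a statistical package rather than by hand.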
The statistical analysis with regard to the effect of the different modes of
presentation on learners’ satisfaction demonstrated results that were also mixed. Thus,
although the satisfaction ratings for the modules on verb conjugation were not
significantly different from each other, the satisfaction ratings for the animated module
on separable-prefix verbs were significantly higher than those for the static and teacher-delivered conditions.
Additional analysis of the intervening variables, such as gender, age, reported
familiarity with the target structures, motivation to study German, and native language,
did not reveal any significant differences, which suggests that the results of the learning
outcomes and satisfaction ratings were influenced only by the treatment conditions
without any additional intervening effects.
In summary, the pilot study (Kolesnikova, 2011) did not find any statistically
significant differences between the effects of the animated and static presentation modes,
which is in line with previous similar research (Caplan, 2002; Roche & Scheller, 2008).
Nevertheless, the value of the pilot study lay in its limitations, which highlighted the
changes necessary to improve the design of the main study (see Chapter 3 for details).
Summary
In summary, the previous research demonstrates that the study described in this
document is by no means the first to investigate computer-based grammar tutorials,
animations, or a human teaching persona. This study, however, acknowledges the
limitations and methodological struggles of the previous research and employs a highly
controlled design to isolate the effect of tutorials on learning.
Summary
This chapter outlined the background for this study that aims at investigating
whether computer-based grammar tutorials could be effective and welcome learning
tools. The review of the literature explored the general issue of whether formal
grammar instruction is beneficial for language learning. It was found that voices from
both theory and practice are being raised in favor of more conscious grammar learning.
This support comes both from educators, who believe that formal grammar learning can
be more helpful in increasing accuracy than meaning-based approaches, and from
learners, who often associate knowledge of grammar with success in language learning.
Support of formal grammar instruction mostly refers to methods that draw
learners’ attention to form in an incidental manner because they do not interfere with the
main communicative orientation of the language instruction. Nevertheless, isolated rule-based grammar does not seem to be a worthless relic, and some educators believe that
this type of grammar instruction can help overcome the shortcomings of incidental
focus–on–form approaches and provide a systematic review of grammar to enhance learners’
understanding of the foreign language they study.
The insights from the field of computer-assisted language learning continued the
discussion on the merits of computer-based grammar tutorials. The review of the
available research and studies demonstrated that, after marking the beginnings of CALL,
tutorials disappeared from the educational discussions and gave way to more
sophisticated and communication-oriented applications. Nevertheless, tutorial CALL
seems to be making a comeback after shortcomings in the new approaches pointed back
to the advantages of tutorial applications for CALL research and practice. In particular,
employing grammar tutorials for language learning could help to integrate form-focused
activities into the communicative classrooms in a flexible manner, relieving teachers of
the tedious and challenging task of teaching grammar and accommodating learners’
individual learning preferences.
Grammar tutorials seem to be efficient from the viewpoint of language learning
and computer-assisted grammar learning. However, the research in these areas is more
concerned with the content and function of the grammar tutorials than with their appearance.
But because the success of a grammar tutorial also depends on its design, this literature
review provided insights from the field of instructional design to delineate guidelines for
ensuring that the appearance of a tutorial does not impede its effect on learning.
Additionally, a survey of the available literature on ways to enhance tutorials with
animations or human characters suggested that it is premature to draw any conclusions
about their effect.
Finally, the review of the previous research points out a lack of comprehensive
studies that investigate effects of computer-based grammar tutorials. Available studies
give mixed results or preclude firm conclusions due to methodological shortcomings.
Thus, the review of the available literature demonstrates that there is enough
ground to expect that computer-based grammar tutorials could have a beneficial effect on
learning and rate high in learner satisfaction. However, the lack of studies that focus
specifically on these issues warrants further research.
CHAPTER 3
METHODOLOGY
My research investigated the effects of different modes of computer-based
grammar tutorials on learners’ performance and satisfaction. To achieve this purpose, I
developed experimental grammar tutorials in three modes of presentation: (a) a static text
with a voice-over narration (ST); (b) an animated text with a voice-over narration (AN);
and (c) a video recording by a real teacher (RT). The effects of these three modes were
tested with elementary German college students in two parallel experiments on two target
structures of German grammar: regular verb conjugation (VC) in Experiment 1 and
separable-prefix verbs (SPV) in Experiment 2.
In this chapter, I describe in detail all of the study procedures of the present
investigation. I start by restating the research questions for this study and continue by
describing the research design, including information about participants, materials, and
testing instruments, detailing the treatment procedures of this study, and describing the
steps taken to assure the consistency of procedures and materials.
Research questions
The purpose of this study was to examine the effects of three different modes of
computer-based tutorials on learning and learners’ satisfaction. The following research
questions addressed the two directions of the present investigation: (a) the effects of the
different modes of the tutorials on learning outcomes; and (b) the students' satisfaction
with the different modes of the tutorials.
Research question 1: Do the three modes of computer-based grammar tutorials
(ST, AN, and RT) produce different effects on learners’ knowledge of the target
structures (regular verb conjugation and separable-prefix verbs in German)?
Research question 2: Do learners report different satisfaction ratings following the
three modes of computer-based grammar tutorials under consideration in this study?
What factors or considerations influence the learners’ satisfaction ratings for the
tutorials?
Pilot studies prior to the main study
There were several piloting procedures prior to this study. The two main pilots
(Pilot I and Pilot II, hereafter) were conducted to test the materials and procedures for the
main study. Prior to each pilot, prepiloting of the materials was conducted with small
numbers of volunteers.
Pilot I
This pilot study tested the initial design for the main study. As explained in
Chapter 2, the design and the instruments used in Pilot I were completely reworked for
the main study, based on the results of the pilot. The design of Pilot I included a
comparison between traditional teacher-delivered grammar instruction in the classroom
and two types of computer-based tutorials (static and animated). The pilot investigated
the effects of these modes of instruction on the knowledge of three target structures of
German grammar among Elementary German I students (n=65). As mentioned
previously, the results of the pilot study were affected by several methodological
difficulties. In this sense, the main importance of this pilot study was that it highlighted
the changes necessary for an improved and more controlled design of the main study.
First, the comparison between the human teacher and the computer-based instruction
involved too many variables beyond my control, thus diminishing the overall validity of
the research. Second, the presentation of previously unfamiliar content to beginners
produced high levels of stress that negatively influenced the learning outcomes and the
satisfaction ratings. Finally, the observations during the treatment sessions demonstrated
that, because participants could control the pace of the presentation, they differed in the
amount of time they spent on the rule explanations and practice, which could have
contributed to the inconsistency in the results. In short, this pilot study demonstrated that
a valid comparison of the different modes of grammar presentation needed to rely on a
more controlled design to arrive at consistent results.
Pilot II
This pilot focused on ensuring the quality of the instruments for the current study
after I reworked them based on the results and observations from Pilot I. Because the
materials took different amounts of time to rework, the Pilot II procedures took place in
spring 2010 and early fall 2010. During the first stage in spring 2010, I piloted the new
versions of the tests for regular verb conjugation and separable-prefix verbs, as well as
the demographic questionnaire. Upon agreement with the instructors, I gave a test
to students enrolled in the Elementary German I course (n=52) during the eighth week of
the semester to investigate their knowledge of the target structures and examine the
quality of the testing materials. Ten days later, I repeated the testing to see whether there
were any changes based on the test–retest completion. There were no apparent
differences in the test scores or in the test difficulty ratings. However, a statistical
analysis and the comparison to this study were problematic because the posttests included
both old items from the pretest and new items. This experience influenced my decision to
use a pretest and a posttest with identical items in this study to enable the analysis.
During the second stage in early fall 2010, I piloted the preliminary versions of
the new study tutorials and the interview script with a group of volunteers from
Elementary German II (n=8). Based on the results of this part of the pilot, I finalized the
content and the recording of the tutorials and fine-tuned the questions for the interview.
Participants’ characteristics and sampling procedures
In this section, I describe the subject populations from which I recruited the
participants, the sampling procedures, the differences between course sections within and
between the participating institutions, and the participants’ characteristics.
Subject population 1
A public Midwestern university (MU, hereafter) was the main site for the study.
Subjects for this study were recruited in fall 2010 from the students who were enrolled in
four sections of Elementary German I. The population enrolled in these sections totaled
96 students. Over 80 of them agreed to participate in this study. All four sections of
Elementary German I followed the same syllabus based on the same textbook
(Vorsprung, 2nd ed.), and the teaching assistants for these sections were supervised by
the same faculty member.
Subject population 2
Another public Midwestern university (AMU, hereafter) was an additional site for
the study. I carried out only the main study procedures at AMU, leaving out the voluntary
interviews with the participants. Subjects for the study were recruited in fall 2010 from
the students enrolled in two sections of German Language and Culture I. The population
enrolled in these sections totaled 44 students. Over 30 of them agreed to participate in
this study. Both sections of German Language and Culture I followed the same syllabus,
used the same textbook (Kontakte), and were taught by the same instructor with the help
of two different teaching assistants. In this study, both AMU sections are referred to as
one section due to the course structure and schedule. In contrast to the MU, where
students were enrolled in the same section for all class meetings, the AMU students were
enrolled in the same section for three days a week (when the main instructor was
teaching) and in two different sections for the other two days of the week (when teaching
assistants were teaching).
Sampling procedures
To invite students to participate in this study, I approached them during their
regular class time one week before the beginning of the study procedures. During this
meeting, I explained the study to the students using the information sheet approved by the
Institutional Review Board at the main site of the study (See Attachment A). I informed
the students about the purpose and the procedures of the study and that participation in
the study was completely voluntary and that it would have no effect on their scores in this
class. To attract as many participants as possible, I especially emphasized the following:
(a) all study procedures and materials were based on their current syllabus; (b) most study
procedures would take place during their regular class time; (c) the topics under
investigation in the study were part of their chapter and final exams; (d) these topics
belonged to the core of German grammar; and, finally, (e) participation in the study
would not affect their course grade in any way.
My study received exempt status from the IRB because it met the requirements
for complete anonymity of the subjects. Thus, subjects were not required to provide their
signatures to indicate their agreement to participate in the study. Instead, I asked the
students who agreed to participate to create a unique nickname and use this nickname in
all study procedures instead of their actual names to protect their privacy. The students
who chose not to participate did not need to create a nickname. However, I informed the
non-participating students that they could still be required to watch the grammar tutorials
during the study because the course supervisors considered it to be additional language
practice and also because the study procedures were to take place during regular class
time. I further explained that non-participating students could choose to fill out the
quizzes for more practice but they would need to leave the nickname field blank for these
materials to be excluded from the data analysis. Table 3 below presents the total number
of the students enrolled in the participating sections and the number of those who
participated during various stages of the study.
Eligibility and exclusion criteria
For this study, all students enrolled in the above-mentioned German courses
were eligible to participate. No exclusion criteria applied. However, the data from two
participants were excluded right after the data collection for two reasons: Both
participants differed from other participants in age (56 and 63 years old), and it took both
of them longer to fill out the posttests compared to other students in the group.
Table 3. Number of participants in different stages of the study

                            Experiment 1 (VC)           Experiment 2 (SPV)
                 Enrolled   Pretest  Treatment and     Pretest  Treatment and
                                     posttest                   posttest
Section 1 (MU)      26        19          19             20          18
Section 2 (MU)      28        24          24             21          25
Section 3 (MU)      23        18          21             21          21
Section 4 (MU)      17        13          11             12          11
Section 5 (AMU)     44        25          30             25          29
Total (n)          138        99         105             99         104
Total (%)          100%       72%         76%            72%         75%

Note: The numbers presented in this table are based on the overall counts of the
participants. Identical counts for the pretest and the posttest do not necessarily mean
that the same participants were present on both days.
Demographic characteristics
The data for the demographic characteristics of the subject population are
presented in Table 4. As can be seen from the table, the participants in the subject
population represented a mostly homogeneous group of students. There was an
approximately equal number of males and females of similar age. Most participants
were monolingual undergraduate students whose native language was English. Most
students were taking German for the first time and did not have any other experience with
German. There was a small number of German majors, and the main reason for taking a
German class was either a language requirement or a personal interest. Finally, most
students had a satisfactory level of academic achievement in terms of their overall GPA.
Research design
This section describes the research design of this study focusing on the target
structures, instruments, and experimental conditions.
Target structures
For this study, I selected two topics of German grammar as the target structures
for this investigation: regular verb conjugation in the present tense (VC) and
separable-prefix verbs and two-verb constructions (SPV). Instruction on these topics was
part of the regular first-semester German syllabus at both institutions. Moreover, these structures
were introduced to the learners at the beginning of the semester. Below, I summarize the
rules for the target structures (the complete rule explanations as they were presented in
the tutorials can be found in Appendix B for VC and Appendix C for SPV).
Table 4. Demographic characteristics of the participants

                                         Total          By section
                                                   MU 1  MU 2  MU 3  MU 4  AMU
                                       n      %      n     n     n     n    n
Number of respondents                105   100%     19    23    21    13   29
Average age                          104    99%     20    20    21    21   23
Gender
  female                              50    48%     13    10     9     6   12
  male                                55    52%      6    13    12     7   17
Student status
  freshman                            30    29%      7     8     9     3    3
  sophomore                           21    20%      4     6     5     3    3
  junior                              27    26%      3     4     4     2   14
  senior                              24    23%      5     5     3     2    9
  graduate                             3     3%      0     0     0     3    0
  other                                0     0%      0     0     0     0    0
Native language
  English                             87    83%     17    18    14    10   28
  other                               18    17%      2     5     7     3    1
Bilingual
  yes                                 10    10%      1     4     3     1    1
  no                                  95    90%     18    19    18    12   28
First semester of German
  yes                                 97    92%     17    23    18    12   27
  no                                   8     8%      2     0     3     1    2
Other experience (study abroad, self-study, etc.)
  yes                                 20    19%      2     7     3     2    6
  no                                  85    81%     17    16    18    11   23
Main reason for taking German
  major/minor                          6     6%      2     1     1     0    2
  general language requirement        40    38%      9     9     3     4   15
  personal interest                   52    50%      8    11    17     8    8
  other                                7     7%      0     2     0     1    4
Expected grade in German
  A                                   52    50%      9    13    13     5   12
  B                                   40    38%      8     8     5     6   13
  C                                    4     4%      1     1     1     0    1
  D                                    1     1%      0     0     0     0    1
  F                                    0     0%      0     0     0     0    0
  not sure                             8     8%      1     1     2     2    2
GPA range
  lower than 2.5                       4     4%      1     0     0     0    3
  2.5–3.49                            51    49%     10    12     9     6   14
  3.50–4                              41    39%      8    10     8     6    9
  over 4                               1     1%      0     0     0     0    1
  not sure                             8     8%      0     1     4     1    2

Note: For the average age row, n and % indicate the number and percentage of
respondents; the section columns give mean ages.
Rules for regular verb conjugation (VC)
The main rule is to drop the infinitive ending -en and add a set of personal agreement
endings to the verb stem. This rule, however, has a set of sub-rules depending on the
final letters of the verb stem. If the stem ends in -t or -d, an additional -e- is
inserted before the endings for the second-person singular and plural and the third-person
singular. The same sub-rule applies to verb stems that end in the consonant clusters
-gn, -chn, -dm, or -ffn. However, a different sub-rule applies if the stem ends in -s, -z,
-ss, or -ß: the personal ending for the second-person singular is -t rather than -st.
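To make the rule logic concrete, the conjugation rules above can be sketched as a small function. This sketch is my own illustration, not part of the study tutorials; the function name and the example verbs are hypothetical.

```python
# Illustrative sketch of the VC rules described above (not part of the
# study materials); function and variable names are my own.

ENDINGS = {"ich": "e", "du": "st", "er/sie/es": "t",
           "wir": "en", "ihr": "t", "sie/Sie": "en"}

def conjugate(infinitive: str, person: str) -> str:
    stem = infinitive[:-2]  # main rule: drop the infinitive ending -en
    ending = ENDINGS[person]
    # Sub-rule 1: stems ending in -t, -d, or the clusters -gn, -chn, -dm,
    # -ffn insert an -e- before the -st and -t endings.
    if stem.endswith(("t", "d", "gn", "chn", "dm", "ffn")) and ending in ("st", "t"):
        ending = "e" + ending
    # Sub-rule 2: stems ending in -s, -z, -ss, -ß take -t (not -st) for "du".
    elif stem.endswith(("s", "z", "ss", "ß")) and ending == "st":
        ending = "t"
    return stem + ending

print(conjugate("machen", "er/sie/es"))  # macht
print(conjugate("arbeiten", "du"))       # arbeitest
print(conjugate("heißen", "du"))         # heißt
```

Note that for regular verbs the wir and sie/Sie forms simply reproduce the infinitive, which the sketch also yields.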
Rules for separable-prefix verbs (SPV)
The main rule is to separate the prefix from the main verb and place it in the last
spot in the sentence, whereas the main verb takes the usual second spot in the
sentence. A sub-rule states that prefixes that resemble prepositions or adverbs should
be separated from the main verb, whereas prefixes that do not resemble other parts of
speech, such as ver-, zer-, er-, and be-, are considered non-separable and should stay
attached to the main verb. Another sub-rule concerns the position of separable-prefix
verbs or two-verb constructions after modal verbs and the verb werden. If a sentence
contains a modal verb or the verb werden, the whole verb (separable prefix together with
the main verb) is placed at the very end of the sentence in its infinitive form. This
explanation of the rules does not cover how the stress on the prefix influences whether
it is separated. Based on consultations with content experts, I did not include this
information because it is too advanced for learners at the beginning level of German.
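The placement rules above can likewise be sketched as a short function. Again, this is my own illustration of the stated rules, not material from the study; the example sentences and names are hypothetical.

```python
# Illustrative sketch of the SPV word-order rules described above (my own
# illustration, not part of the study materials).

NON_SEPARABLE = {"ver", "zer", "er", "be"}  # prefixes named in the rules

def build_sentence(subject, prefix, verb, rest, modal=None):
    if modal:
        # With a modal verb (or werden), the whole verb, prefix attached,
        # goes to the very end of the sentence in its infinitive form.
        return f"{subject} {modal} {rest} {prefix}{verb}"
    if prefix in NON_SEPARABLE:
        # Non-separable prefixes stay attached to the conjugated verb.
        return f"{subject} {prefix}{verb} {rest}"
    # Separable prefix: main verb in second position, prefix in the last spot.
    return f"{subject} {verb} {rest} {prefix}"

print(build_sentence("Ich", "an", "rufe", "meine Mutter"))
# Ich rufe meine Mutter an
print(build_sentence("Ich", "an", "rufen", "meine Mutter", modal="will"))
# Ich will meine Mutter anrufen
```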
Initial instruction
As mentioned above, the tutorials presented a review of the grammar topics to the
study participants rather than explaining new information. Thus, the participants were
already familiar with the target structures by the time this study started. Below, I provide
a short overview of the initial instruction that students received in their regular classes
approximately 5–7 weeks before this study.
At MU, both target structures were presented at the beginning of the semester.
The regular verb conjugation was discussed in Chapter 2 of Vorsprung! in the fifth week
of the semester. The separable-prefix verbs were also discussed in Chapter 2, but in the
sixth week of the semester. For each target structure, the MU instructors presented short
grammar explanations, provided examples in German, and proceeded to practice
exercises (fill–in–the–blanks, dialogues, interviews, etc.).
At AMU, the target structures were also presented at the beginning of the
semester. Both topics were discussed in Chapter 1 of Kontakte in the fifth (VC) and sixth
(SPV) weeks of the semester. Both topics were introduced through pictures, drawings,
and gestures and were practiced using various learning activities (e.g., information-gap
activities, autograph activities, interviews). After the practice activities, the instructor
gave a brief grammar explanation in English at the end of the class, and students were
assigned to read a more detailed explanation as homework.
In short, the initial instruction was similar at both participating institutions.
Although there was some explicit grammar instruction in English on the target structure,
most instruction and practice followed the communicative teaching approach.
Instruments
For this study, I developed three types of instruments: (a) instruments for the
delivery of instruction (six experimental computer-based grammar tutorials); (b)
instruments for measuring learning outcomes (pre- and post-tests); and (c) instruments
for collecting data other than learning outcomes (a demographic questionnaire, design
and satisfaction questionnaires, and an interview outline). In the sections below, I
describe how I developed and tested each instrument before including it in the study.
Instruments for the delivery of instruction
For this study, I developed three experimental computer-based tutorials for each
target structure:
- VC–ST: a tutorial on the regular verb conjugation with a set of static slides and a
  voice-over narration.
- VC–AN: a tutorial on the regular verb conjugation with a set of animated slides and a
  voice-over narration.
- VC–RT: a tutorial on the regular verb conjugation with a set of static slides and a
  recording of a real teacher.
- SPV–ST: a tutorial on the separable-prefix verbs with a set of static slides and a
  voice-over narration.
- SPV–AN: a tutorial on the separable-prefix verbs with a set of animated slides and a
  voice-over narration.
- SPV–RT: a tutorial on the separable-prefix verbs with a set of static slides and a
  recording of a real teacher.
Instructor in the tutorials
For the sake of consistency, I worked with the same teacher on all six tutorials.
This teacher was a female in her early 30s. She was a native German speaker with
native-like English. She had approximately ten years of experience teaching German in various
academic settings. At the time of the study, she was completing her Ph.D. and was
employed by the MU German program as a teaching assistant. This instructor
volunteered to help me and agreed to be trained and be recorded for the tutorials. She was
not the instructor in any of the sections in the study.
Content of the tutorials
First, I developed a scripted lesson for the tutorials for each target structure (VC
and SPV) where I provided grammar explanations based on my own knowledge of
German grammar and the explanations in the course textbooks. Then I sent the lesson
scripts to two content experts (experienced German language teachers) for review. Upon
receiving their approval and making requested changes, I developed a MS PowerPoint
presentation for each target structure based on the lesson script. The three VC tutorials
were based on the same script for the regular verb conjugation and all three SPV tutorials
were based on the same script for the separable-prefix verbs, thus making the content of
all tutorials absolutely identical in terms of wording of the rules, amount of information,
and examples used, as well as the text enhancements (e.g., coloring, bolding).
Practice within the tutorials
The tutorials not only included the rule explanations for the target structures with
examples but also short practice segments that followed the presentation of each rule. The
practice part was included in the tutorials for two reasons. First, for pedagogical reasons,
practice is necessary when learning or reviewing information; second, practice breaks
up the continuous flow of metalinguistic information about the rules. The practice
assignment for the VC tutorials required the learners to conjugate a verb based on the rule
that was explained right before the practice. The practice assignment for the SPV tutorials
was to fill in the blanks in a sentence with a verb provided. Because the tutorials were
continuous video recordings, there were no opportunities for typing and the learners had
to complete the tasks in their head. In all tutorials, after the practice task was announced
to the learners, they had approximately 18 seconds to respond. After that, the correct
responses were presented to the learners.
Quality of the content
As explained before, the content experts assessed the quality of the content,
including the quality of grammar explanations, before I finalized the tutorials for the
experiment. During the study, I also asked the instructors of the participating sections if
they were willing to watch the tutorials to assess the quality of the content. Five out of
six instructors agreed to participate during Experiment 1, and four out of four instructors
agreed to participate during Experiment 2. I gave them a link to a web page
(Appendix D) that explained the differences among the three modes of the tutorials.
Because the content of all tutorials was identical, the mode of the tutorial did not make a
difference for the purpose of evaluating the quality. For that reason, the instructors were
free to select the tutorial in the presentation mode that appealed to them based on the
description. After watching the selected tutorial, the instructors filled out a short survey
(see Appendix E) that, among other questions, asked the respondents to evaluate the
content. The results of the survey for both VC and SPV tutorials demonstrated that out of
five instructors who evaluated the VC tutorials, three strongly agreed and two agreed that
the explanations in the tutorials were appropriate and valid for explaining these
grammatical topics and that these tutorials were appropriate for the elementary German
learners. For SPV tutorials, four out of four instructors strongly agreed that the
explanations were valid and appropriate for the students at this level.
In addition to the evaluation of the content by the content experts and instructors,
I included a question about the quality of the content in the satisfaction questionnaire for
the student participants. The students provided their perspectives on the quality of the
content by indicating their agreement with the statement “The rule explanations in this
presentation were easy to follow and understand” on a scale from “Strongly disagree”
(1) to “Strongly agree” (5). Overall, most learners agreed or strongly agreed with this
statement (see Table 5). These results demonstrated that the content was
satisfactory for the purpose of the review from both teaching and learning perspectives.
Table 5. Participants’ evaluations of the quality of rule explanations

         n     Mean     ST      AN      RT
VC      101    4.55    4.51    4.60    4.55
SPV     110    4.35    4.47    4.40    4.17
Design and production of the tutorials
In this section, I summarize the design decisions (for more information see
Chapter 2) and describe the details of the production of the tutorials in this study.
Design
The design of the programs followed the main principles described in Chapter 2.
Thus, in line with the modality principle that suggests that a balanced multimodal
presentation is more effective than a similar unimodal one, all presentations included the
information in both textual and auditory modes. The tutorials included only relevant
elements both in terms of content and design in accordance with the principles of
relevance, which imply that irrelevant information can add to the cognitive load. Further,
all presentations used identical cueing techniques, such as colors, bolding, and font size,
to highlight important information in a consistent manner. In line with the layout
suggestions, the content was presented in a step-by-step manner in which each step was
laid out on a separate slide to avoid cognitive overload for students while processing the
information. Finally, the narration was written in a conversational, rather than strictly
scientific, style to acknowledge the importance of using social cues in an instructional
presentation.
Production
I used a variety of software to produce the six experimental tutorials used in this
study. Each mode of the tutorial (ST, AN, RT) was produced in the same manner using
the MS PowerPoint presentations described earlier. In this section, I describe the
production details for each mode of presentation. At the end of this section, I provide a
discussion about the time involved in the production of these tutorials. The screenshots of
the tutorials are presented in Figure 3.
Static tutorials (ST)
The static mode of the tutorials was the most basic and thus the least time-consuming to create. I used only one software program—Camtasia Studio 7—to produce
these tutorials. I launched the PowerPoint Presentation on the computer and changed the
slides on click while the instructor read the accompanying script into the microphone.
The contents of each slide appeared all at once, and no animation was used. Both the
activity on the screen and the voice of the instructor were recorded. After several
recording trials, I edited the preliminary version of the tutorial. After this version was
piloted with several volunteers (Pilot II), we recorded the final version. The tutorial was
produced as an mp4 video file and I used Adobe Dreamweaver 8 software to embed it
into my web page located on the university server. The tutorials can be found here:
VC: myweb.uiowa.edu/akolesni/vcst.html (or http://youtu.be/RaMkHqJbBvk)
SPV: myweb.uiowa.edu/akolesni/spvst.html (or http://youtu.be/8Ihj_OWx3pA)
Animated tutorials (AN)
Two types of animations were used in this mode of the tutorials. For the simple
animations, I used the built-in capabilities of the MS PowerPoint software to make the
text on each slide appear line by line on click. For more complex animations where text
was animated in a more cartoon-like manner, I used Adobe Flash 8 software. At the time
of the recording with the instructor, the basic PowerPoint presentation included short
Flash files with animations. To record the tutorial with the Camtasia Studio 7, I launched
this presentation and went through the slides while the instructor read the script. It took
more trials to record the preliminary version for this mode of the tutorial because we
needed to coordinate the narration with the animations. Again, this version was piloted
before the final version was produced. The tutorial was produced in Camtasia Studio 7 as
an mp4 video file and embedded into my web page using Adobe Dreamweaver 8. The
tutorials can be found here:
VC: myweb.uiowa.edu/akolesni/vcan.html (or http://youtu.be/onMuYhhaASs)
SPV: myweb.uiowa.edu/akolesni/spvan.html (or http://youtu.be/nEjWTMtKWVc)
Video recording of a real teacher (RT)
The teacher mode of the tutorial was the most challenging production work
because it included the video recording of the instructor, not just her voice. Before
making the recording for the tutorials, I considered and tried out several options to
project the content in a real classroom, such as SmartBoards, video projectors, and even
separate recordings of the teacher and the content. Finally, I decided to use the Polysnap
software to display the PowerPoint on a big-screen TV in one of the classrooms. The
instructor stood to the right of the screen in front of the video camera, pointing to the
information on the screen while reading the script. Because it was important that the
wording in all modes be identical, I printed the enlarged script on paper and placed the
sheets in front of the teacher. After several trials, I edited the recording using iMovie
software. After producing and piloting the preliminary version, we re-recorded the lesson
and the final version was produced from iMovie as an mp4 file and was embedded into
my web page using Adobe Dreamweaver 8. The tutorials can be found here:
VC: myweb.uiowa.edu/akolesni/vcv.html (or http://youtu.be/qT_C_MxiPFM)
SPV: myweb.uiowa.edu/akolesni/spvv.html (or http://youtu.be/ZJ0iorTqCqQ)
Figure 3. Screenshots of the study tutorials
Duration of the tutorials
For each target structure, each of the six tutorials lasted about 11–12 minutes. See
Table 6 for the exact duration.
Time commitment
For this study, I developed the experimental tutorials myself to make them
consistent with each other. While working on the tutorials, I kept track of the time I spent
at various stages of the development (see Table 7). Although the time tracking was not
necessary for the study or its procedures, I considered it instrumental in exploring the
connection between the time spent on producing a tutorial and its effect on learning
outcomes and satisfaction.
Table 6. Duration of the study tutorials

                             VC                      SPV
Tutorial mode         ST     AN     RT        ST     AN     RT
Duration (min:sec)  10:53  10:56  10:54     12:31  12:31  12:10
The first stage was writing the script of the lesson plans for both tutorials (VC and
SPV). The time indicated in Table 7 reflects researching the topics, the actual writing of
the script, discussing the script with my content experts (in person and via email), and
revising the script based on their feedback.
The second stage was to create a basic PowerPoint presentation that visually
presented the script in a series of slides. I created the basic presentation for each target
structure (VC and SPV). These presentations became the basis for all three tutorial modes
(ST, AN, and RT). The time indicated in Table 7 reflects all work on the PowerPoint
presentation, including the revisions during later stages.
Table 7. Development and production time for the study tutorials

Target structure                           VC                     SPV
Tutorial mode                        ST    AN    RT         ST    AN    RT
Script                                  5 hours                5 hours
Basic PowerPoint                        3 hours                3 hours
Production of the
preliminary versions (hours)          4    10    10          2     6     8
Revisions and production of
the final versions (hours)            3     4     8          2     4     8
Total development time (hours)       15    22    26         12    18    24
The time I spent on the third stage—production—differed depending on the type of
tutorial, although all time estimates include the rehearsal time with the instructor, at
least two recording sessions, editing, and production of the mp4 files. The static
tutorials for both target structures were the least time-consuming because they included
only the basic PowerPoint and the recorded narration. The animations took more time
compared to the static text because each animation had to be created separately. The
video, however, turned out to be the most time-consuming endeavor because it required
several rehearsal and recording sessions.
After I created the preliminary versions of the tutorials, I carried out a short pilot
test with several students who volunteered to work with the tutorials for monetary
compensation (Pilot II). During this pilot test, the participants watched one tutorial and
provided feedback about the features of the tutorial that needed to be improved. In
addition to working with students on evaluating the preliminary versions of the tutorials, I
showed them to two faculty members who were familiar with my research. I used the
feedback and suggestions from students and faculty to make necessary revisions in the
final stage of the production.
The final stage of the production included several procedures. First, I made
necessary revisions in the script and the basic PowerPoint and fixed the problems with
some animations. Then, I re-recorded two versions of each tutorial and edited them into
the final versions.
As can be seen from Table 7, the production of the SPV tutorials was less time-consuming than that of the VC tutorials. Because the VC tutorials came first, by the time I
developed the SPV tutorials I was more experienced using the software programs and
needed less troubleshooting time. Also, the instructor who was featured in the tutorials
felt more comfortable being recorded because she was already familiar with the routine.
Certainly, the time commitment presented here is based in part on my experience with the
software and the basics of video production.
Quality of the design and production
Because I developed the tutorials for this study myself, I needed to ensure that
the quality of the production was satisfactory so that it would not interfere with the
satisfaction ratings from working with the tutorials. In addition to piloting the tutorials
with several volunteers and showing the tutorials to two faculty members, I created a
design questionnaire that the participants filled out after watching the tutorials. The
design survey (described in detail below) included several questions based on 5-point
Likert scale ratings. As can be seen from Table 8, the mean rating of the overall design
was 4.35 out of 5 for the VC tutorials and 4.09 for the SPV tutorials. Similarly, the
ratings of the color scheme and the duration of the tutorials were approximately 4 out of 5.
These ratings let me conclude that most participants found the overall design, color
scheme, and duration of the tutorials satisfactory.
Other items on the design questionnaire targeted specific features of the
production (voice, pace of speaking, clarity of text, etc.). As can be seen from Table 9,
most respondents found that the speaker's voice and the clarity of the text on the
slides were at a comfortable level. A slightly smaller majority of participants responded
that the instructor’s pace of speaking and the transitions between the slides were
comfortable. In both categories, some participants indicated that the pace of speaking and
the slide transitions were slower than they preferred. Similarly, while about 70% of the
participants found that the time available for practice was enough, some participants
found that it was either too long or too short.
Table 8. Ratings of the quality of tutorials based on Likert scale items

                                          VC tutorials                     SPV tutorials
                                    All     ST     AN     RT        All     ST     AN     RT
                                  (N=73) (n=25) (n=25) (n=23)     (N=60) (n=17) (n=22) (n=21)
I liked the design of
this program.                      4.35   4.08   4.72   4.26       4.09   4.13   4.19   3.95
I liked the colors used
in this program.                   4.23   4.32   4.80   3.57       4.03   4.38   4.05   3.65
The duration of the presentation
was enough to review the rules.    4.57   4.56   4.68   4.48       4.38   4.56   4.48   4.10
Overall, the analysis of the responses to the design questionnaire let me conclude
that the participants were satisfied with the design and the production characteristics of
the tutorials. The only aspect that showed disagreement in the ratings was the pace of the
tutorial. This issue had come up during the prepiloting of the tutorials, when some
participants found the pace slow and others found it to be at a comfortable level. I decided
to keep the slow pace of the tutorials because even those respondents who found it too
slow said that they would not recommend changing it.
Table 9. Ratings of the quality of tutorial design based on multiple-choice items

                               VC tutorials                           SPV tutorials
Group                 ST   AN   RT  Total n  Total %        ST   AN   RT  Total n  Total %
Respondents           25   25   23    73      100%          17   22   21    60      100%
Voice
  Comfortable         23   24   23    70       96%          17   22   21    60      100%
  Too loud             0    0    0     0        0%           0    0    0     0        0%
  Too quiet            2    1    0     3        4%           0    0    0     0        0%
Pace
  Comfortable         14   21   16    51       70%          16   20   18    54       90%
  Too slow             9    4    6    19       26%           1    2    2     5        8%
  Too fast             2    0    1     3        4%           0    0    1     1        2%
Transitions
  Comfortable         14   17   13    44       60%          12   14   15    41       68%
  Too slow             9    7    9    25       34%           2    6    5    13       22%
  Too fast             2    1    1     4        6%           3    2    1     6       10%
Text
  Comfortable         23   25   22    70       96%          17   22   20    59       98%
  Hard to read         2    0    1     3        4%           0    0    1     1        2%
Time for practice
  Sufficient          16   17   16    48       67%          12   17   18    47       78%
  Too short            2    4    1     7       10%           4    3    3    10       17%
  Too long             7    4    6    17       23%           1    2    0     3        5%
Instruments to measure learning outcomes

To measure learning outcomes, I created a pre- and a post-test for each target
structure. The posttests included items from the pretests, but these items were presented
in a different order. The tests included items that tested the knowledge of the rules and
sub-rules for each target structure. All tests can be found in Appendices F (pre- and
post-test for VC) and G (pre- and post-test for SPV).

Test format

Due to the nature of the grammar topics, I selected different formats for the test
items. Thus, for the regular verb conjugation the tests followed a fill-in-the-blanks
format with the target verbs included in parentheses right after the blank (Figure 4). This
format seemed sufficient for the testing purposes for this target structure because the
rules are applied to the form of the verbs, not their position in the sentence. For the
separable-prefix verbs, however, I selected a construct-a-sentence format with the words
provided out of order (Figure 5). This format seemed better suited because the main goal
of the testing was to find out whether learners could apply the rules about the position of
this type of verbs in sentences.

Figure 4. Example of test items on the VC test

Figure 5. Example of the test items on the SPV test

Timing of the test

The time allocated for learners to fill out each test was limited to ten minutes. I
considered ten minutes to be sufficient time for completing the tests based on the piloting
of testing materials in spring 2010 (Pilot II). At that time, the focus of the pilot was not
only to test the items but also to investigate the average time necessary to complete each
test. During the piloting of materials, I did not inform the participants about any time
constraints on completing the tests; however, I asked them to indicate the time for
starting and finishing the test. The average time was around 8 minutes for both target
structures. The detailed results are presented in Table 10.

Table 10. Test completion times during Pilot II

                              Verb Conjugation                  Separable-Prefix Verbs
                        n   minimum   maximum   mean       n   minimum   maximum   mean
Pilot II participants  23   4 min.    9 min.   8 min.     29   4 min.    11 min.   8 min.
Reliability of the items
To determine the reliability of each test, I ran statistical analyses of internal
consistency on all completed tests. The analysis was performed separately on the pretests
and the posttests for each target structure. As can be seen from Table 11, all tests used in
this study had a sufficient level of reliability based on the value of Cronbach's alpha.
Table 11. Results of reliability testing of the study tests

                            VC                           SPV
                    Pretest    Posttest         Pretest    Posttest
                    (n=97)     (n=103)          (n=97)     (n=102)
Cronbach's alpha     .879       .903             .767       .771
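The internal-consistency statistic reported above can be computed directly from the item-by-item scoring charts. The following is a minimal pure-Python sketch of Cronbach's alpha, assuming a chart with one row per respondent and one column per test item; the function name and the toy data are illustrative only, not the study's actual tests or scores.

```python
from statistics import variance

def cronbach_alpha(scores):
    """Cronbach's alpha for a list of respondents' item-score rows."""
    k = len(scores[0])                                    # number of test items
    items = list(zip(*scores))                            # one tuple per item
    item_var_sum = sum(variance(item) for item in items)  # sum of item variances
    total_var = variance([sum(row) for row in scores])    # variance of total scores
    return (k / (k - 1)) * (1 - item_var_sum / total_var)

# Toy item-by-item chart (1 = correct, 0 = incorrect), not the study's data
chart = [
    [1, 1, 1, 1],
    [1, 1, 0, 1],
    [0, 1, 0, 0],
    [1, 0, 1, 1],
    [0, 0, 0, 1],
]
print(round(cronbach_alpha(chart), 3))  # prints 0.519
```

The same item-level chart format is what makes this computation possible, which is why the third round of scoring (described later in this chapter) recorded each item separately.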
Instruments for collecting other data
In this study, my first goal was to obtain a quantifiable comparison of learning
outcomes after reviewing two grammar topics with the help of the three different
computer-based grammar tutorials. The second goal was to measure the level of learners’
satisfaction from working with different modes of the tutorials (static, animated, and
recording of a real teacher). For this purpose, I developed two main instruments: a design
questionnaire to collect participants’ perception of the quality of the tutorial in terms of
production (audio, video, font, colors, etc.) and a satisfaction questionnaire to measure
the learners’ overall satisfaction from working with these tutorials to improve their
knowledge. Further, I developed a script for the optional interviews with participants
who volunteered to talk about their experiences in this study. Finally, I created a
demographic questionnaire to collect background information. All instruments for
collecting data other than learning outcomes are described in detail below.
Design questionnaires
There were several reasons for including the design questionnaires (Appendix H)
in the study. First of all, because I developed all instruments myself specifically for this
study, it was important to collect participants’ perceptions of the quality of the design and
production of the tutorial. My other goal was to use these questionnaires as a short buffer
between the moment when the participants finished working with the tutorials and the
moment when they started completing the posttest. Because the last couple of minutes of
the tutorials presented a summary of all rules pertaining to the target structures, I decided
to prevent the immediate recall of these rules by including the design questionnaire as a
buffer.
There were two versions of the design questionnaire: one for the VC tutorials and
another one for the SPV tutorials. The titles on the questionnaires were different, whereas
the format and the questions were identical in both versions. The questions
targeted participants’ perception of the quality of the following design aspects: overall
design, overall duration, colors, speaker’s voice, pace of speaking, transitions between
the slides, text on the slides, and the time allocated for the practice exercises.
Further, it should be mentioned that not all participants filled out both
questionnaires for the VC tutorial and the SPV tutorial. Because the time for the
experiment was limited, I used both the design questionnaires and the demographic
questionnaires as a buffer between the tutorial and the posttest. Thus, during the first
treatment (with VC tutorials) at MU, half of the participants received a design
questionnaire for the VC tutorials and the other half received the demographic
questionnaire. Then, during the second treatment (with SPV tutorials), the procedure was
reversed. Those who had responded to the design questionnaire for the VC tutorials
during the previous session filled out the demographic questionnaire; those who had
completed the demographic questionnaire during the previous session filled out the
design questionnaire for the SPV tutorials. At the other data collection site (AMU),
however, the time available for the study activities allowed all students to complete the
design questionnaires. See Table 12 for more details.
Table 12. Number of respondents to the design questionnaire

          VC design questionnaire                     SPV design questionnaire
          All            Participants who filled      All            Participants who filled
          participants   out the questionnaire        participants   out the questionnaire
MU            75             43 (57%)                     75             31 (41%)
AMU           30             30 (100%)                    29             29 (100%)
Total        105             73 (69%)                    104             60 (58%)
Satisfaction questionnaire
The satisfaction questionnaires in the study included questions to collect data on
participants’ perceptions of the usefulness of the tutorials and satisfaction from working
with them. These data were later used both for the quantitative and qualitative analysis
for research question 2. The questionnaire included items in various formats, such as the
five-point Likert–scale items (for responses that required the participants to rate quality
or indicate their agreement with a statement) and open–ended items (for the participants
to answer a question or provide comments about their responses).
There were two versions of the satisfaction questionnaire: one for Experiment 1
with VC tutorials (see Appendix I) and the other one for Experiment 2 with SPV tutorials
(see Appendix J). There were three sets of questions in each version: introductory
questions, questions about satisfaction with the tutorials, and additional questions. The
first set of questions was identical in both versions. Thus, each satisfaction questionnaire
started with a set of introductory questions that prompted the participants to indicate their
study nickname, what kind of the tutorial they worked with, and whether they paid
careful attention to that tutorial. The responses to these questions mainly served as proof
that learners who were assigned to a particular group through randomization in fact
worked with the appropriate tutorial.
The introductory questions were followed by a set of questions that targeted the
learners’ satisfaction with the tutorials. The items prompted the participants to provide
their opinions on the quality of a tutorial as a learning tool, describe its advantages and
disadvantages, indicate their knowledge of the target structures before the study, and
evaluate their perceived improvement. Because the three different modes of presentation
(ST, AN, and RT) were the focus of this study, those questionnaires that were used for
the evaluation of tutorials with animations or a real teacher included questions that
targeted learners’ satisfaction with these particular features.
The additional questions concerned the main difference between the two versions
of the questionnaires. Thus, in Experiment 1 (VC), the additional questions targeted the
participants’ overall attitude about computer-based grammar tutorials of this type and the
perceived usefulness of such tutorials for their language learning. Then, in Experiment 2
(SPV), the additional questions included items that prompted the participants to compare
their satisfaction from the two different modes of the tutorial because by that time the
participants had experienced two different modes of tutorials (e.g., video for VC and
static for SPV; or static for VC and animated for SPV).
To produce the satisfaction questionnaires and to deliver them to the participants,
I used an online survey engine supported by the university.
Demographic questionnaire
The demographic questionnaire (Appendix K) inquired about the participants’
age, gender, native language, student status, motivation for taking this German course,
other foreign languages previously studied, previous exposure to the German language
(from other classes, study abroad or German heritage), and their overall academic
achievement (range of ACT and GPA scores). These data were used to establish the
quality and homogeneity of the study sample.
Post-study interviews
The interviews in this study were an optional study procedure in addition to the
main study procedures. By including the interviews in the study, I aimed at examining the
learners’ perspective in more detail in addition to their responses on the satisfaction
questionnaire. The participants could volunteer to be interviewed after the study for
monetary compensation ($10/interview). The interviews were based on the interview
outline (see Appendix L) that I followed during the conversations with the participants.
Nevertheless, I digressed from the script if the participants’ responses and perceptions
required clarification or expansion.
Study procedures
The treatment conditions in this study included instruction on the target structures
(VC and SPV) with the three experimental tutorials: a static text with a voice-over
narration (ST), an animated text with a voice-over narration (AN), and a video recording
of a real teacher (RT).
There were two experiments in this study: Experiment 1, when the treatment
included the three VC tutorials, and Experiment 2, when the treatment included the three
SPV tutorials. Each experiment was set up in exactly the same order. First, the
participants completed a pretest about ten days prior to the treatment; then, during the
treatment, the participants worked with the tutorials, completed a posttest, and filled out
several questionnaires. The following sections provide details for the treatments and
experimental settings.
Randomization
For Experiment 1, I randomly assigned the participants from each section to the
three possible groups:
- Group ST worked with the tutorial with the static text,
- Group AN worked with the tutorial with the animated text,
- Group RT worked with the tutorial with the recording of a real teacher.
For Experiment 2, I employed a different strategy for randomization to enable
comparisons with regard to learners’ satisfaction from working with different tutorials.
With the new randomization scheme, I ensured that the participants worked with different
tutorial types in Experiment 1 and Experiment 2. For that reason, I randomized the
groups that were formed as a result of the randomization in Experiment 1. I randomly
assigned the participants from Group ST to either Group AN or Group RT; I randomly
assigned the participants from Group AN to either Group ST or Group RT; and, finally, I
randomly assigned the participants from Group RT to either Group ST or Group AN (see
Figure 6).
Figure 6. Randomization scheme

This randomization scheme resulted in six different combinations:
- ST–AN: the participants who worked with the static mode during Experiment 1 and with the animated mode during Experiment 2.
- ST–RT: the participants who worked with the static mode during Experiment 1 and with the recording of a real teacher during Experiment 2.
- AN–ST: the participants who worked with the animated mode during Experiment 1 and with the static mode during Experiment 2.
- AN–RT: the participants who worked with the animated mode during Experiment 1 and with the recording of a real teacher during Experiment 2.
- RT–AN: the participants who worked with the recording of a real teacher mode during Experiment 1 and with the animated mode during Experiment 2.
- RT–ST: the participants who worked with the recording of a real teacher mode during Experiment 1 and with the static mode during Experiment 2.
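The reassignment for Experiment 2 can be sketched as follows. This is a hypothetical illustration: the function name, the seed, the nicknames, and the assumption that each Experiment 1 group was split evenly between the two unseen modes are mine, not the dissertation's (it does not state the split proportions).

```python
import random

def reassign(group1, nicknames, seed=2011):
    """Split an Experiment 1 group between the two other modes for Experiment 2.

    Hypothetical helper; the even split and the seed are assumptions.
    """
    others = sorted({"ST", "AN", "RT"} - {group1})  # the two modes not yet seen
    shuffled = list(nicknames)
    random.Random(seed).shuffle(shuffled)
    half = len(shuffled) // 2
    # First half goes to one unseen mode, the rest to the other.
    return {name: (others[0] if i < half else others[1])
            for i, name in enumerate(shuffled)}

exp2 = reassign("ST", ["anna", "ben", "cleo", "dana"])
# No participant repeats the mode from Experiment 1
assert all(mode in {"AN", "RT"} for mode in exp2.values())
```

The key property, enforced by subtracting the Experiment 1 group from the set of modes, is the one described above: every participant experiences two different tutorial modes across the two experiments.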
The settings for the study procedures
All study procedures, except the optional interviews, took place during the regular
meeting time for all participating course sections. Below, I describe the settings in which
study procedures took place.
Participants at the main site
For the pretests, the students were in their regular classrooms. However, for the
treatments, these participants worked on individual computers in a computer laboratory
located in the building where the language departments are housed; it is also the same
building in which the participants’ regular classrooms are located. During academic
semesters, this lab is a regular computer laboratory for university students. Instructors can
reserve computers in this lab for a class or an experiment; however, unreserved
computers can be used by other students. This means that during the experiments for this
study, the study participants at the main site shared the lab with students who were not
part of the experiment. For this study, I reserved 32 computers (24 PCs, 8 Macs), which
were arranged in four rows with eight computers in each row (3 rows with PCs, 1 row
with Macs). With the seating arranged this way, the study participants stayed together as
a group without sharing close proximity to other lab users. All study participants worked
on the PCs and those students who decided not to take part in the experiments worked
separately on Macs. The instructors stayed in the lab for the whole period of the
experiment.
The final study procedure, the interviews about the study, was optional. Only the
participants at this university were interviewed because it was the main site for the
experiment. I conducted all 17 interviews in individual study rooms. Each interview was
a one–on–one audio-recorded conversation.
Participants at the additional site
The participants at AMU had all main study procedures, including the pretesting,
in a computer lab because it was the assigned classroom for their German course in fall
2010. There were six rows of PCs with a total of 27 machines. All
participants worked on individual computers.
Description of the study procedures
Below, I provide a detailed description of the study procedures in chronological
order. There was a total of five meeting days for this study for both experiments, and
below they are denoted Day 1 through Day 5. In real time, however, the study procedures
spanned three weeks due to the multi-day gaps between the days on which the study
was carried out. The time gaps are presented in Figure 7.
The procedures were identical for all sections within and across institutions.
Before each study procedure, I reminded the participants that their participation was
voluntary and that it would not have any effect on their grade in their German course. I
also reminded the participants that they should only provide their nicknames and not their
real names for the study.
On Day 1 of the study, I asked the participants to fill out a pretest for the first
target structure—the regular verb conjugation (VC). I informed the participants about the
topic (VC), the format (fill–in–the–blanks) and the time for completion (10 minutes). On
that day, I asked the participants to create unique nicknames to protect their real
identities. I instructed the participants that the nickname should not be their real name or their student ID.
On Day 2, the participants filled out a pretest for the second target structure—the
separable-prefix verbs (SPV). Again, I informed the participants about the topic (SPV),
the format (construct sentences from the words provided) and the time for completion (10
minutes). I had a list of all nicknames available on a separate sheet of paper in case some
participants forgot the nicknames they had created earlier to protect their real identities in
this study.
During Day 3, I met the participants in the computer lab. To assure that the
participants would be in the group according to the randomization scheme (described
earlier in this chapter, Figure 6), I prepared a poster board with three separate sheets. The
sheets were of different color and displayed the nicknames of the participants by group.
Thus, the list of nicknames for Group ST was printed on a green sheet, the list for Group
AN was printed on a blue sheet, and the list for Group RT was printed on a pink sheet.
The instructions prompted the participants to locate their nickname in one of the groups
and then find a computer marked with the corresponding color.
After all participants were seated, I asked them to log in to their respective
computers with their own login information. After that, they had several minutes to
connect headphones and select a comfortable level of volume. I then instructed the
participants to open the Internet Explorer browser on their PCs or Safari on Macs and go
to the URL address with further instructions (See Appendix M).
After all preparations had been completed, the participants watched the VC
tutorials. In the first two minutes of the tutorial, the participants were asked to pay careful
attention to the video and to avoid using the video controls or taking notes. I included this
request to make sure that all participants spent the same amount of time on the tutorials
and could not consult any written notes later during the posttesting.
When the participants were finished with the tutorials, I distributed the
demographic questionnaire to half of the MU participants and the design questionnaires
(VC) to the other half of the MU participants (See Table 12). As explained earlier, the
mixed distribution of the questionnaires was done due to the limitations in the class time
that was available for the study procedures at MU. However, at AMU there were fewer
time constraints and all participants had time to fill out both the demographic and the
design questionnaires.

Figure 7. Time gaps between study procedures
After the participants filled out the questionnaires, they had 10 minutes to
complete the VC posttest. After that, they had 10 minutes to complete the online
satisfaction questionnaire about the VC tutorial.
At the end of Day 3 procedures, I thanked all participants for their time and help.
After that, the regular course instructors gave them a language task to complete on the
computer for the remaining class time.
The study procedures on Day 4 were almost identical to those on Day 3, except
that the participants focused on the second target structure, namely the separable-prefix
verbs. Another difference was the reverse distribution of the demographic and design
questionnaires for the MU participants. Those participants who had filled out the
demographics questionnaires on Day 3 completed the design questionnaires. Those who
had completed the design questionnaires on Day 3 filled out the demographic
questionnaire on Day 4. This way, all MU participants filled out the demographic
questionnaires and half of the participants filled out design questionnaires for the VC
tutorials and the other half for the SPV tutorials.
Finally, after the participants submitted the online questionnaire, they were
redirected to a web page that included sign-up information for the optional interviews
after this phase of the study.
Due to scheduling constraints, the Day 5 interviews took place on four consecutive
days. The participants who signed up for the interviews received two emails with
reminders about the time and the place of the interview. Before the interview, I asked
each participant to fill out the subject reimbursement form ($10/interview). After that, I
asked the participants for permission to use the audio recorder. All 17 participants agreed
to be audio-recorded. The interview proceeded based on the interview outline that was
discussed previously in this document. The interviews with each participant lasted from
15 to 30 minutes.
Participant’s time commitment
The total time commitment for the study was 100 minutes for the participants who
completed all required days of the study. The time commitment of the participants who
also volunteered to be interviewed after the treatment procedures was 115–130 minutes
(see Table 13).
Table 13. Time commitment for the study procedures

                            Time commitment   Breakdown of multiple activities
Day 1: VC pretest           10 minutes
Day 2: SPV pretest          10 minutes
Day 3: VC treatment         40 minutes        5 minutes for preparations and introduction;
                                              10 minutes for the VC tutorial;
                                              5 minutes for either a demographic or a VC
                                              design questionnaire;
                                              10 minutes for the VC posttest;
                                              10 minutes for the VC satisfaction questionnaire.
Day 4: SPV treatment        40 minutes        5 minutes for preparations and introduction;
                                              10 minutes for the SPV tutorial;
                                              5 minutes for either a demographic or an SPV
                                              design questionnaire;
                                              10 minutes for the SPV posttest;
                                              10 minutes for the SPV satisfaction questionnaire.
Day 5 (optional):
Interview                   15–30 minutes
Total time for required
study procedures:           100 minutes
Total time for all
study procedures:           115–130 minutes
Assuring control over the variables
One of my goals when working on the methodology for this study was to ensure
sufficient control over the study procedures and variables to enable valid comparisons at
the analysis stage. This section summarizes the steps I undertook to ensure such control.
First of all, comparative studies in CALL are sometimes criticized for invalid
comparisons when the content of one instructional mode differs from the other ones in
more than one variable or when the duration of instruction is different (Chapelle, 2001;
Levy & Stockwell, 2006; Pederson, 1987). To avoid such criticism, I made sure that all
tutorials for the same target structure (VC or SPV) presented the identical rule
explanations, included the same examples and practice items, employed identical cueing
techniques (colors, bolding, highlighting, etc.), and lasted the same amount of time.
Next, the studies that compare animated and static modes of multimedia
instruction are sometimes criticized for favoring the animated modes when they include
more information than the static ones or when this information is presented in a superior
visual mode (Tversky et al., 2002). To avoid such criticism, the animated modes of the
tutorials in my study involved the animation of only the text and did not include any
additional visual elements.
Finally, before the participants watched the tutorials for the study treatments, they
listened to the message that asked them to pay careful attention to the information in the
tutorials and to avoid using the video controls. This message also informed the participants that
they were not allowed to take notes during the study. The precaution of not using video
controls was based on the results of Pilot I, in which some participants used the controls
to skim the materials rather than carefully read the information on the slides. This
consideration was also based on Ginns (2005), who argued that system-paced
presentations are better for learning than self-paced presentations. The rationale for not
allowing any note-taking during the study was based on Corbeil (2007), who suggested
that although this activity helps learners retain information better, it also causes learners
to get distracted from the instruction in a paced multimedia presentation. Because it was
important for my study that learners pay careful attention to the tutorials and spend the
same time working with them, I decided that excluding note-taking would be beneficial
for the validity of the results.
Working with the collected data
There were three types of instruments for data collection in this study: tests,
questionnaires, and interviews. This section describes how data from each instrument
were processed after the study procedures were completed. All collected data used in the
analyses are available online at myweb.uiowa.edu/akolesni/dissertation_data.html
Test results
There was a total of 407 tests: 99 pretests and 105 posttests for Experiment 1
(VC), and 99 pretests and 104 posttests for Experiment 2 (SPV). I processed all tests to
obtain the following data: test scores for each participant, ratings of test difficulty, and
the number of participants who indicated that they reviewed the topics between the pre- and the posttests and explained how they reviewed them.
Scoring the tests
I scored all tests three times. The first time, I scored all tests for two purposes:
first, to create charts with the preliminary results; and second, to finalize scoring criteria.
The first scoring was based on the number of correct answers out of the possible number
of points for each test (e.g., 23/37). For this stage of scoring, I scored all tests, regardless
of whether they were included in the final analysis, to make sure that I had a complete
understanding of the types of mistakes before finalizing the scoring criteria. The second
scoring was similar to the first one but it was based on the final scoring criteria for each
target structure (see below or Appendix N). Again, the notation of the scoring was based
on the number of correct responses. The third scoring double-checked the results of the
first two rounds. For this scoring I created charts for each item on each test because such
a format was necessary for the reliability testing of the tests (Cronbach’s alpha). The
results of this type of scoring were compared to the results of the second scoring and all
inconsistencies were resolved.
Second rater
To assure the reliability and consistency of my scoring, I asked a second rater to
score about 20% of the tests (36 for VC, 36 for SPV) based on the provided scoring
criteria (Appendix N). The second rater was a female Ph.D. student who was a native
speaker of German and at the time of the study was employed as a teaching assistant in
German. She was unfamiliar with my study prior to agreeing to become the second rater.
After she scored the tests, we met to compare the results. I prepared a comparison table
with the scores only for the portion of the tests that the second rater scored. For the tests
on the regular verb conjugation, there were seven instances of different scores (1–4 point
differences). All differences resulted from the second rater miscalculating the total
score, missing a mistake, or marking a correct answer as incorrect. All differences were
resolved in my favor. For the tests on the separable-prefix verbs, there were also seven
instances of different scores (1–3 point difference). Here, three instances resulted from
the second rater missing an error. The other three errors were mine: one from missing an
error and two from copying an incorrect score into the comparison table (the scores in the
main scoring charts were the same as the second rater’s scores). The final error was a
subject of discussion that was resolved in favor of the second rater. However, this error
did not affect my scoring because an error of this kind occurred only in this particular
test. Overall, the results of the comparison of the scoring between the second rater and
me demonstrated that my scoring was sufficiently consistent and reliable.
Scoring criteria
All tests were scored according to a set of scoring criteria for each target structure
(see Appendix N). The scoring criteria for the tests in Experiment 1 were straightforward.
If the learner applied the rule correctly, one point was awarded, even if a part of the word
(other than the ending) was misspelled. For instance, both du lernst (correct) and du
learnst (correct, but misspelled) were counted as correct.
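A minimal sketch of this criterion, assuming the scorer checks only the conjugation ending (the full criteria are in Appendix N; treating the last letters of the response as "the ending" is a simplification for illustration):

```python
# Sketch of the Experiment 1 criterion: a response earns one point when the
# conjugation ending is right, even if the stem is misspelled.
# The ending table is the standard one for German regular verbs.
ENDINGS = {"ich": "e", "du": "st", "er": "t", "wir": "en", "ihr": "t", "sie": "en"}

def score_response(pronoun: str, response: str) -> int:
    return 1 if response.endswith(ENDINGS[pronoun]) else 0

print(score_response("du", "lernst"))   # 1: correct form
print(score_response("du", "learnst"))  # 1: misspelled stem, correct ending
print(score_response("du", "lernt"))    # 0: wrong ending
```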
There were two scoring schemes for the tests in Experiment 2. First, all sentences
written by the participants were evaluated on the basis of whether both the main rule and
the sub-rules for the separable-prefix verbs were applied correctly. According to this
correct/incorrect scoring scheme, one point was awarded even if the sentence contained
other types of mistakes such as incorrect case or incorrect conjugated form. This scoring
scheme evaluated the complete knowledge of the rules. However, because the learners
were only beginners, I decided to apply an additional partial scoring scheme to the
evaluation of the sentences to acknowledge the fact that the learners might have acquired
the main rule, but not the sub-rules, or only some sub-rules. Thus, the scoring was based
on partial scores for each instance of the rule. The details and examples of the partial
scoring scheme can be found in Appendix N.
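The contrast between the two schemes can be sketched as follows; the point values here are hypothetical, and the actual partial scores are defined in Appendix N:

```python
# Hypothetical illustration of the two scoring schemes for Experiment 2.
# The real point values and sub-rules are in Appendix N; here the main rule
# counts as one part and each sub-rule as one more part.
def correct_incorrect(main_rule_ok: bool, sub_rules_ok: list) -> int:
    # One point only when the main rule and every sub-rule are applied.
    return 1 if main_rule_ok and all(sub_rules_ok) else 0

def partial(main_rule_ok: bool, sub_rules_ok: list) -> float:
    # One part per rule instance, so partial knowledge still earns credit.
    parts = [main_rule_ok, *sub_rules_ok]
    return sum(parts) / len(parts)

# Learner applied the main rule but only one of two sub-rules:
print(correct_incorrect(True, [True, False]))        # 0
print(round(partial(True, [True, False]), 2))        # 0.67
```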
Questionnaire data
First, the data from the questionnaires were entered into Excel spreadsheets. After
that, I assigned numerical values to all multiple-choice responses (e.g., all “Strongly
disagree” responses received a value of 1, whereas all “Strongly agree” responses
received a value of 5). Then, I entered all the open-ended responses onto a separate
spreadsheet to create the corpus of qualitative data for the study. During the several days
between the end of the data collection and the first post-study interview, I examined
questionnaire responses to pinpoint factors that stood out with regard to their influence on
the participants’ satisfaction ratings. Preliminary categories included several factors that
influenced satisfaction ratings for the study tutorials in general and for the three tutorial
modes in particular. Moreover, several controversial issues came to the forefront, and I
included them in the participant interviews. First, there was a clear difference of opinion
on whether the slow pace of the tutorials was an advantage or a disadvantage. Second, the
tutorial with the teacher recording produced polarized responses on the value of this type
of enhancement. Finally, learners’ conditions for the use of tutorials resulted in different
opinions. These and other issues are presented in Chapter 4.
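The recoding step described above amounts to a simple label-to-number mapping; a minimal sketch (the middle label "Neutral" is an assumption, since the questionnaire's exact wording is not reproduced here):

```python
# Map Likert labels to the 1-5 values described above.
LIKERT = {"Strongly disagree": 1, "Disagree": 2, "Neutral": 3,
          "Agree": 4, "Strongly agree": 5}

responses = ["Strongly agree", "Agree", "Disagree", "Strongly agree"]
values = [LIKERT[r] for r in responses]
print(values)                      # [5, 4, 2, 5]
print(sum(values) / len(values))   # mean rating: 4.0
```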
Interview data
There were 17 participants who volunteered to be interviewed after the tutorial
phase of this study. The information about these participants is presented in Table 14.
After the interviews were completed, I transcribed all 17 interviews. Then, I read the
interviews and the open-ended questionnaire responses to elaborate preliminary
categories for the qualitative analysis. After that, I read the data again, marking the
passages according to a category. Based on that, I created a summary of categories and
trends in the data. I worked with the summaries to elaborate a systematic view on the data
and create an outline of the narrative. After the outline was finished, I re-read the data in
a search for quotations from the participants that would best describe the findings in each
category. I selected more detailed quotations from both the questionnaires and the
interviews to illustrate the main trends in the participants’ responses.
Table 14. Information about interview participants

Participant     Gender  Native language  VC tutorial mode  SPV tutorial mode  Preference
Participant 1   M       English          ST                AN                 AN
Participant 2   M       English          AN                RT                 AN
Participant 3   M       English          ST                n/a                n/a
Participant 4   M       English          ST                RT                 ST
Participant 5   F       Other            AN                RT                 RT
Participant 6   M       English          AN                ST                 AN
Participant 7   M       English          ST                AN                 AN
Participant 8   M       Other            n/a               ST                 n/a
Participant 9   M       Other            AN                RT                 RT
Participant 10  M       Other            RT                ST                 RT
Participant 11  M       Other            AN                ST                 AN
Participant 12  F       English          AN                RT                 AN
Participant 13  M       Other            AN                RT                 AN
Participant 14  M       English          RT                AN                 RT
Participant 15  F       English          RT                AN                 AN
Participant 16  M       English          ST                AN                 No preference
Participant 17  M       English          ST                AN                 AN
Summary
This chapter provided details about the methodology of this study, including
participants’ characteristics, study instruments, and study procedures. This study was
based on two parallel experiments with identical study procedures and study instruments,
but different target structures. In this chapter, I also described how I ensured the quality
of the instruments and the study procedures to maximize my control over the study
variables. The next chapter presents the analysis for the study data.
CHAPTER 4
ANALYSIS
This chapter reports on the results of the current study that examined effects of
three different types of computer-based grammar tutorials on learning outcomes and
ratings of learners’ satisfaction from working with these tutorials. This study consisted of
two parallel experiments with identical study procedures. The study treatments and
materials focused on regular verb conjugation in German in Experiment 1 (VC) and
separable-prefix verbs in Experiment 2 (SPV).
This chapter is organized in terms of the two research questions presented in
Chapter 1. The data for each research question are analyzed by experiment; thus, the
section on the results for the first research question includes the discussion of both
Experiment 1 and Experiment 2. The analysis of the data for the second research question
includes the discussion of the results from these experiments separately and across the
two experiments. The results for research question 1 are based on quantitative analysis,
whereas the results for research question 2 draw on analyses of both quantitative and
qualitative data.
Research question 1: Effects on learners’ knowledge
The first research question for this study focused on the effects of three different
modes of tutorials on the students’ learning outcomes: “Do the three modes of
computer-based grammar tutorials (ST, AN, and RT) produce different effects on
learners’ knowledge of the target structures (regular verb conjugation and
separable-prefix verbs in German)?” To investigate this research question empirically, I
looked at the effect on knowledge across groups (ST, AN, and RT) from several different
perspectives, which I defined in terms of: a) the actual test scores on the pre- and
post-test; b) the perceived difficulty of the tests; and c) the self-reported perceived
knowledge improvement.
Exclusion criteria
As explained in Chapters 1 and 3, there were no exclusion criteria for
participation in this study. However, before the data for this research question were
analyzed, I applied two exclusion criteria with the goal of obtaining reliable data entries:
- Only the data from the participants who were present for both the pretest and the
  posttest were included in the analysis to enable the comparison.
- Only the data from the participants who indicated at least partial agreement with
  the statement that they watched the tutorials carefully, paying attention to the
  information, were included in the analysis. The selection of participants whose
  data were removed based on this criterion was based on the participants’
  responses to a question on the satisfaction questionnaires. This question asked the
  respondents to indicate their agreement with the statement “I watched this tutorial
  carefully paying attention to the information” on the 5-point Likert scale. The data
  from those participants who selected “Disagree” or “Strongly disagree” were not
  included in the analysis for research question 1.
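Applied to the score records, the two criteria amount to a simple filter; in this sketch the field names are invented for illustration:

```python
# Sketch of applying both exclusion criteria to the score records.
# Field names are invented; "watched_carefully" holds the 1-5 Likert value.
records = [
    {"id": 1, "pretest": 18, "posttest": 27, "watched_carefully": 4},
    {"id": 2, "pretest": 20, "posttest": None, "watched_carefully": 5},  # missed posttest
    {"id": 3, "pretest": 15, "posttest": 24, "watched_carefully": 2},    # "Disagree"
]

kept = [
    r for r in records
    if r["pretest"] is not None and r["posttest"] is not None  # criterion 1
    and r["watched_carefully"] >= 3                            # criterion 2
]
print([r["id"] for r in kept])  # [1]
```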
The application of the exclusion criteria resulted in removing 24 data entries from
Experiment 1 and 17 data entries from Experiment 2. Most of the removed cases were
deleted based on the first exclusion criterion when some learners were present for the
pretest but not for the posttest or vice versa. The final numbers for the participants in
Experiment 1 (VC) and Experiment 2 (SPV) are presented in Table 15 below.
Due to the fact that many students missed either the day of the pretest or the day
of the posttest, I ran a parallel statistical analysis to examine whether the results would
have been different if these students had been present. Because the missing data
comprised less than 15% of the data, I used a transformation procedure in the SPSS
statistical software to recreate missed values based on the series mean. The results of this
analysis are briefly mentioned in the sections below. Tables with the detailed results are
presented in Appendix P.
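The series-mean substitution used for the parallel analysis replaces each missing value with the mean of the observed values in the same series; a minimal sketch:

```python
# Sketch of series-mean substitution (as in the SPSS transformation
# procedure described above): each missing score is replaced by the mean
# of the observed values in that series. Scores are invented.
def impute_series_mean(series):
    observed = [x for x in series if x is not None]
    mean = sum(observed) / len(observed)
    return [mean if x is None else x for x in series]

posttest = [28, None, 30, 26, None, 24]
print(impute_series_mean(posttest))  # [28, 27.0, 30, 26, 27.0, 24]
```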
Table 15. Number of participants included in the analysis for RQ 1

                 VC                    SPV
            ST    AN    RT        ST    AN    RT
Pretest     29    29    30        28    33    30
Posttest    29    29    30        28    33    30
Data
The data for examining the effect on learning outcomes come from the actual
scores the participants received on the pre- and post-test. As explained in Chapter 3, the
posttest included the same items as the pretest but in a different order.
The data for examining the effect on the ratings of the test difficulty come from
the participants’ responses to the question at the end of the pre- and post-test that asked
them to rate the difficulty of this test on the 5-point Likert scale from “Very easy” (1) to
“Very difficult” (5).
The data for the analysis of self-perceived knowledge improvement come from
the question on the satisfaction questionnaire that asked the participants to indicate their
agreement with the statement “My knowledge of this grammatical topic improved after
working with this tutorial” on the 5-point Likert scale from “Strongly disagree” (1) to
“Strongly agree” (5).
Analysis
The analysis of the data for RQ1 proceeded in two stages. During the preliminary
stage, I applied the analysis of variance (ANOVA) to examine whether at the onset of the
study the groups had significant differences in the pretest scores and in their ratings of
test difficulty.
For the main analysis, to examine whether the three different computer-based
grammar tutorials (ST, AN, and RT) had a different effect on learning outcomes and
ratings of test difficulty, a repeated-measures block-design ANOVA (mixed methods)
was administered to test the within-subject effect for Test (improvement from the pretest
to the posttest) and the between-subject effect for Tutorial type (differences among the
three groups), and the interaction of Test and Tutorial type.
For the analyses of the self-reported ratings of knowledge improvement, I used
one-way ANOVA. The significance level for all tests was set at the .05 level. If
necessary, the Fisher LSD test was applied as a post-hoc procedure. Levene’s test of
homogeneity of variance was applied to all tests; the results of Levene’s test are reported
only if the variances were significantly different. The following sections present the
results of these analyses.
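The ANOVAs for this study were run in SPSS; as a rough illustration of what the one-way procedure computes, the F statistic for independent groups can be obtained by hand (the scores below are invented, not study data):

```python
# Minimal one-way ANOVA sketch: the F statistic for k independent groups,
# computed by hand on invented scores (the study itself used SPSS).
def one_way_anova_f(groups):
    k = len(groups)
    n = sum(len(g) for g in groups)
    grand_mean = sum(sum(g) for g in groups) / n
    # between-groups sum of squares: group sizes times squared mean offsets
    ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2 for g in groups)
    # within-groups sum of squares: squared deviations from each group mean
    ss_within = sum(sum((x - sum(g) / len(g)) ** 2 for x in g) for g in groups)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

st = [19, 21, 18, 22, 20]
an = [18, 20, 17, 19, 21]
rt = [15, 17, 16, 18, 14]
print(round(one_way_anova_f([st, an, rt]), 2))  # 8.67
```

The F value is then compared against the F distribution with (k − 1, n − k) degrees of freedom to obtain the p-values reported in the sections below.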
Missing values
It should be noted that the data for the ratings of test difficulty and for the
reported improvement of knowledge are based on a different number of participants
compared to the data for the pre- and post-test comparisons. This difference in numbers
occurred because some participants did not respond to the questions about difficulty
and/or knowledge improvement.
Experiment 1 (VC)
This section reports on the results of the statistical analysis of the data from
Experiment 1 (VC) to answer research question 1: Do the three modes of computer-based
grammar tutorials (ST, AN, and RT) produce different effects on learners’ knowledge of
the regular verb conjugation in German? Table 16 presents the descriptive statistics for
the data used in this analysis.
Table 16. Descriptive statistics for the RQ 1 data in Experiment 1 (VC)

                          ST                 AN                 RT
                     n   Mean    SD     n   Mean    SD     n   Mean    SD
Test scores
  Pretest           29  19.79   7.40   29  18.72   6.69   30  16.63   6.76
  Posttest          29  28.45   7.06   29  29.17   6.53   30  26.56   8.19
Improvement from the pre- to the post-tests
  Scores            29   8.65   6.35   29  10.45   5.64   30   9.93   4.73
Ratings of difficulty
  Pretest           29   3.45    .95   27   3.24    .85   27   3.72    .65
  Posttest          29   2.76   1.02   27   2.37    .88   27   2.70    .95
Ratings of improvement
  Posttest          29   4.10    .56   26   4.08    .74   28   3.96    .88

Note: SD = standard deviation.
Preliminary work with the data
The purpose of the preliminary analysis was to examine whether at the onset of
the study the three VC groups had significant differences in their pretest scores and in
their ratings of test difficulty. The preliminary analysis of the data was conducted on the
VC pretest scores and the ratings of the difficulty of the VC pretests from the three
different groups (ST, AN, and RT). For the pretest scores, the one-way ANOVA, F (2,
85) = 1.581, p = .212, demonstrated no statistically significant differences among the
three groups. Similarly, the one-way ANOVA, F (2, 84) = 2.990, p = .056, demonstrated
no significant differences with regard to the ratings of the test difficulty. I concluded that
the groups were comparable in terms of their knowledge of the target structure at the
onset of the study. The means and standard deviations are presented in Table 16.
Results for pretest–posttest comparison
The results of the mixed methods repeated-measures two-way ANOVA
demonstrated that all groups significantly improved their test scores over time (F (1, 85)
= 262.226, p = .000), but there was no significant effect of the Tutorial type (F (2, 85) =
1.367, p = .260) or the interaction between the variable of Test and the Tutorial type (F
(2, 85) = .787, p = .458). The results are visually represented in Figure 8. From this
result, I concluded that all groups improved significantly from the pre- to the post-test,
but the tutorial type did not have a differential effect on the improvement. The means and
standard deviations are presented in Table 16. The output for this statistical analysis is
presented in Tables O1 and O2 (Appendix O). The parallel analysis with the missing
values restored demonstrated a similar result (Tables P1, P2, and P3 in Appendix P).
Results for ratings of test difficulty
The mixed methods ANOVA on the ratings of test difficulty on the pretest
compared to the posttest for the three tutorial modes demonstrated the effect for the
within-subject variable of Test (F (1, 80) = 68.776, p = .000), but not for the
between-subject variable of Tutorial type (F (1, 80) = .933, p = .132) or the interaction
between Test and Tutorial type (F (2, 80) = .854, p = .430). As can be seen from Figure
9, the participants rated pretests as significantly more difficult compared to the posttests,
although the items were identical. As with the pre- to post-test improvement in scores,
the tutorial type did not make an impact on the change in difficulty ratings (Figure 9).
The means and standard deviations are presented in Table 16. The output for this
statistical analysis is presented in Tables O3 and O4 (Appendix O). The parallel analysis
with the missing values restored demonstrated a similar result (Tables P4, P5, and P6 in
Appendix P).

Figure 8. Results for pretest–posttest comparison (VC)
Figure 9. Results for ratings of test difficulty (VC)

Results for self-perceived knowledge improvement
The one-way ANOVA (F (2, 80) = .282, p = .755) with regard to the ratings of
the self-perceived knowledge improvement demonstrated no statistically significant
differences between the three groups. The mean rating of the improvement across the
groups was M = 4.05 (out of 5), which let me conclude that the participants in all groups
similarly felt that their knowledge of the regular verb conjugation in German improved
after working with the tutorials (Figure 10). The means and standard deviations are
presented in Table 16.
Figure 10. Results for ratings of self-perceived knowledge improvement (VC)
Summary of the results for RQ1 in Experiment 1
The analysis of the data for research question 1 “Do the three modes of computer-based grammar tutorials (ST, AN, and RT) produce different effects on learners’
knowledge?” with regard to the regular verb conjugation demonstrated a significant
effect for instruction when all participants improved their test scores over time and found
the tests significantly easier after working with the tutorials. No significant differences
were found for the effect of each particular tutorial or the interaction between the tutorial
type and improvement over time.
Experiment 2 (SPV)
This section reports on the results of the statistical analysis of the data from
Experiment 2 (SPV) to answer research question 1: Do the three modes of computer-based grammar tutorials (ST, AN, and RT) produce different effects on learners’
knowledge of the separable-prefix verbs in German? Table 17 presents the descriptive
statistics for the data used in this analysis.
Preliminary work with the data
The purpose of the preliminary analysis was to examine whether at the onset of
the study the three SPV groups had significant differences in the pretest scores and in
their ratings of test difficulty. The preliminary analysis was administered on the data from
the three different groups (ST, AN, and RT). For the pretest scores, the one-way ANOVA
(F (2, 88) = 1.652, p = .198) on the correct/incorrect scoring and the one-way ANOVA
(F (2, 88) = 1.534, p = .221) on the partial scoring, demonstrated no statistically
significant differences among the three groups. Similarly, the one-way ANOVA, (F (2,
86) = 1.418, p = .248) demonstrated no significant differences with regard to the ratings
of test difficulty. The means and standard deviations are presented in Table 17.
Results for pretest–posttest comparison
The mixed methods ANOVA demonstrated that all groups significantly improved their
test scores over time (F (1, 88) = 169.537, p = .000), but there was no difference in the
effect of the Tutorial type (F (2, 88) = 2.524, p = .086), or the interaction between Test
and Tutorial type (F (2, 88) = .660, p = .520). It should be noted that Levene’s test of
homogeneity of variance yielded a significant p-value for the posttest scores (p = .025).
According to Field (2009), Levene’s test is not necessarily the best way to check for the
homogeneity of variance and, if significant values are obtained, the results can be
double-checked using the critical values on Hartley’s Fmax test (pp. 149–152). In line with
Field’s suggestions, the values of standard deviations for the posttest were squared and
the highest value was divided by the smallest. The obtained value of 2.89 was smaller
than the critical value of 3.00 (for 30 people and 3 variances), which suggests that the
differences in variance did not have a strong negative effect on the results of the
mixed-methods ANOVA. From this result, I concluded that all groups improved significantly
from the pre- to the post-test (Figure 11), but the tutorial type did not have an effect on
the improvement. The means and standard deviations are presented in Table 17. The
output for this statistical analysis is presented in Tables O5 and O6 (Appendix O).
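The Hartley check described above reduces to a ratio of the largest to the smallest group variance; using the SPV posttest standard deviations reported in Table 17:

```python
# Hartley's F-max check: the largest posttest variance divided by the
# smallest, compared to the critical value (3.00 for ~30 per group and
# 3 variances). Standard deviations are from Table 17 (SPV posttest).
sds = {"ST": 1.47, "AN": 1.68, "RT": 2.50}
variances = {g: sd ** 2 for g, sd in sds.items()}
f_max = max(variances.values()) / min(variances.values())
print(round(f_max, 2))  # 2.89, below the 3.00 critical value
```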
Table 17. Descriptive statistics for the data for RQ 1 in Experiment 2 (SPV)

                          ST                 AN                 RT
                     n   Mean    SD     n   Mean    SD     n   Mean    SD
Test scores
  Pretest           28   7.04   1.97   33   6.58   2.26   30   6.00   2.26
  Posttest          28   9.64   1.47   33   9.76   1.68   30   8.73   2.50
Improvement from the pre- to the post-tests
  Scores            28   2.61   2.13   33   3.18   1.65   30   2.73   2.42
Test scores (partial scoring)
  Pretest           28  20.00   3.60   33  19.61   4.31   30  18.23   4.21
  Posttest          28  24.34   2.34   33  24.30   2.65   30  22.97   3.74
Improvement from the pre- to the post-tests (partial scoring)
  Scores            28   4.36   3.72   33   4.70   2.95   30   4.73   3.93
Ratings of test difficulty
  Pretest           28   3.39    .87   33   3.72    .64   28   3.64    .87
  Posttest          28   2.77    .90   33   2.88    .74   28   3.04    .96
Ratings of self-perceived improvement
  Posttest          27   3.81    .68   32   3.94    .62   26   3.76    .68

Note: SD = standard deviation.
Results for pretest–posttest comparison (partial scoring)
The analysis of the data based on the partial scoring of the SPV tests revealed a
trend similar to the one demonstrated for the scoring based on the correct/incorrect
criterion. Thus, the participants significantly improved their scores from the pre- to the
posttest (F(1, 88) = 152.893, p = .000), but no significant differences were evident for the
effects of Tutorial type (F(2, 88) = 2.268, p = .110), or the interaction between Test and
the Tutorial type (F(2, 88) = .100, p = .905). Similar to regular scoring, Levene’s test
yielded a significant difference for the posttest scores (.05). Again, in line with Field
(2009), a subsequent comparison of the result of the division of the highest squared
variance over the smallest squared variance (2.55) to the critical value on the Hartley
Fmax test (3.00) revealed that the differences between the variances were acceptable. This
result allowed me to conclude that the participants significantly improved their scores
from the pre- to the post-test (see Figure 12). The means and standard deviations are
presented in Table 17. The output for this statistical analysis is presented in Tables O7
and O8 (Appendix O). The parallel analysis with the missing values restored
demonstrated a similar result (Tables P7, P8, and P9 in Appendix P).

Figure 11. Results for pretest–posttest comparison (SPV, correct/incorrect scoring)

Figure 12. Results for pretest–posttest comparison with partial scoring (SPV)
Results for ratings of test difficulty
As in Experiment 1 (VC), the data for this analysis comes from the ratings of test
difficulty that the participants reported for both the SPV pretest and the SPV posttest.
Similar to examining the effect on learning outcomes in terms of the SPV test scores, a
mixed-methods ANOVA on the ratings of test difficulty demonstrated that the ratings in
all groups significantly dropped over time (F (1, 86) = 58.335, p = .000), but no
significant effects were found for the Tutorial type (F (2, 86) = 1.106, p = .336) or the
interaction between the change over time (Test) and the Tutorial type (F (2, 86) = .772, p
= .465). From this result, I concluded that all groups found the SPV posttest significantly
less difficult than the pretest, although the items were identical. As with the pre- to
post-test improvement in scores, the tutorial type did not make an impact on the change. The
results are presented in Table 17 and Figure 13. The output for this statistical analysis is
presented in Tables O9 and O10 (Appendix O). The parallel analysis with the missing
values restored demonstrated a similar result (Tables P10, P11, and P12 in Appendix P).
Results for self-perceived knowledge improvement
The one-way ANOVA (F(2, 82) = .476, p = .623) demonstrated no statistically
significant differences among the three groups with regard to how the participants
perceived their knowledge improvement after working with the study tutorials. The
mean rating of the improvement across the groups was M = 3.85, which allowed me to
conclude that the participants in all groups somewhat agreed that their knowledge of the
separable-prefix verbs in German improved after working with the tutorials (Figure 14).
The means and standard deviations are presented in Table 17.
Figure 13. Results for ratings of test difficulty (SPV)

Figure 14. Results for ratings of self-perceived knowledge improvement (SPV)
Summary
Similar to the results reported for Experiment 1 (VC), all groups significantly
improved their scores from the pre- to the post-test in Experiment 2 (SPV). No other
significant effects were found.
Summary for research question 1
The results of the statistical analyses for the effects of the three different types of
tutorials on the learners’ knowledge were similar in Experiment 1 (VC) and Experiment 2
(SPV). Thus, in both experiments the participants in all groups significantly improved
their test scores from the pre- to the post-tests and rated the tests as significantly easier on
the posttests, which suggests a strong effect for instruction. However, no significant
effects between the groups were found with regard to the improvement in scores and the
ratings of test difficulty from the pre- to the post-tests, which suggests that the mode of
the tutorial did not strongly influence students’ knowledge improvement or their
perceptions of the effectiveness of a given mode of instruction. The analysis of the
ratings of the perceived improvement of knowledge suggested that most students
believed that their knowledge improved, but, again, the tutorial type did not have a strong
effect on these ratings.
Research question 2: Effects on learners’ satisfaction
The second research question for this study focused on the participants’
satisfaction ratings for the tutorials: “Do learners report different satisfaction ratings
following the three modes of computer-based grammar tutorials under consideration in
this study? What factors or considerations influence the learners’ satisfaction ratings for
the tutorials?”
As for research question 1, the analysis of the data proceeded separately for
Experiment 1 (VC) and Experiment 2 (SPV). However, the analysis of the data for RQ2
proceeded both within and across the experiments. For the analysis within each
experiment, I looked at the ratings of participants’ satisfaction from working with the
tutorials from several perspectives: their overall satisfaction with the tutorials; their
opinions about the value of these tutorials as a learning tool; and the ratings of the
tutorials in terms of how helpful, entertaining, and engaging the participants considered
these tutorials to be. For the analysis across the experiments, I looked at the following
aspects: the general attitude towards computer-based grammar tutorials with regard to
their use, the consistency of ratings across the experiments for the same tutorial type, and
the reported preferences for a tutorial type at the end of the experiment.
Exclusion criteria
No exclusion criteria were applied to the data that were analyzed for RQ2. I found
it unnecessary to apply any exclusion criteria because only the participants who worked
with the tutorials filled out the satisfaction questionnaires. In addition, I was interested in
the satisfaction levels of all participants, including those who indicated that they did not
pay careful attention to the presentation. I decided that by excluding data from such
participants I would subsequently exclude the data that could provide insight as to why
these participants found it hard to concentrate on the tutorials.
Although no data were excluded for research question 2, in contrast to research
question 1, where the exclusion criteria were applied, I ran an additional analysis to
examine whether applying those criteria would have produced different results. The results of the
analysis with exclusion criteria applied (Appendix Q) demonstrate similar, but less
distinctive, trends compared to the analysis presented below.
Data
The data for the analysis of the RQ 2 within each experiment came from the
questions in the satisfaction questionnaire that asked the participants to provide ratings of
some aspect of the tutorials or indicate a level of their agreement with a provided
statement. All ratings were based on the 5-point Likert scale from “Strongly disagree”
(1) to “Strongly agree” (5).
Analysis
For the ratings of satisfaction in terms of overall satisfaction, helpfulness,
entertainment, and engagement value of the tutorials, as well as the value of the tutorials
as the learning tools, this analysis was based on a one-way analysis of variance
(ANOVA).
Across the experiments, the analysis of the participants’ general attitude toward
the issue of whether tutorials should be available to them and whether they would use
them was based on frequency counts and one-way ANOVA procedures. The analysis of
the consistency between the ratings of a tutorial mode was based on a series of
independent t-tests in which a) the ratings for the static tutorial in Experiment 1 (VC) were
compared with the ratings for the static tutorial in Experiment 2 (SPV), b) the ratings for
the animated VC tutorial were compared to those for the animated SPV tutorial, and c) the
ratings for the VC tutorial with a teacher recording were compared to the SPV tutorial in
the same format. Finally, the analysis of the reported preference of a tutorial type was
based on frequency counts and on a series of binomial tests.
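The binomial tests on reported preferences were run in statistical software; as an illustration of the logic, an exact two-sided binomial test on a preference count can be computed directly. The count below (9 of the 14 interviewees with a clear preference chose AN) comes from Table 14, but the AN-versus-rest framing with p = .5 is a simplification for illustration:

```python
from math import comb

# Sketch of an exact two-sided binomial test on a preference count.
def binom_two_sided(k, n, p=0.5):
    pmf = [comb(n, i) * p**i * (1 - p)**(n - i) for i in range(n + 1)]
    # two-sided p-value: sum all outcomes no more likely than the observed one
    return sum(q for q in pmf if q <= pmf[k] + 1e-12)

# 9 of the 14 interviewees with a clear preference chose AN (Table 14):
print(round(binom_two_sided(9, 14), 3))  # 0.424
```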
Missing values
It should be noted that the number of participants in the analyses below varies
because some participants did not respond to some questions on the questionnaire.
Experiment 1 (VC)
This section reports on the results of the statistical analysis to answer the question
“Do learners report different satisfaction ratings following the three modes of computer-based grammar tutorials under consideration in this study?” with regard to the tutorials
used in Experiment 1 (VC). Table 18 presents the descriptive statistics for the data.
Rating of tutorial: overall satisfaction
The data for the overall satisfaction were drawn from the participants’ response to
the statement “I liked my overall experience working with this tutorial.” A one-way
ANOVA of the ratings of agreement (F(2, 97) = 1.353, p = .263) yielded no significant
differences between groups in regard to overall satisfaction from working with the VC
tutorials (Figure 15). The mean rating across the groups was M = 4, which suggests that
most participants indicated that they liked their overall experience. The means and
standard deviations are presented in Table 18. The additional analysis with the exclusion
criteria applied demonstrated similar results (Table Q1).
Rating of tutorial: helpful
The data for this aspect of learners’ satisfaction were drawn from the participants’
response to the statement “This tutorial was helpful.” A one-way ANOVA of the ratings
of how helpful the VC tutorials were (F(2, 98) = 2.893, p = .060) indicated some
differences between groups (Figure 16). Post hoc comparisons using the Fisher LSD test
revealed that only the ratings for the animated tutorial (AN) were significantly higher (p
= .019) than the ratings for the tutorial with the recording of the real teacher (RT). The
mean rating across the groups was M = 4.22, which suggests that most participants
agreed that these tutorials were helpful. The means and standard deviations are presented
in Table 18. The additional analysis with the exclusion criteria applied did not indicate
any significant trends in the data (Table Q1).
Table 18. Data for the satisfaction ratings (VC)

                                              N     Mean    Std. deviation   ANOVA Sig.
Rating of tutorial: overall satisfaction
  ST                                         33    3.9394   .78817           .263
  AN                                         33    4.1818   .76871
  RT                                         34    3.8824   .80772
  Total                                     100    4.0000   .79137
Rating of tutorial: helpful
  ST                                         33    4.2424   .50189           .060
  AN                                         33    4.3939   .49620
  RT                                         35    4.0286   .82197
  Total                                     101    4.2178   .64193
Rating of tutorial: entertaining
  ST                                         33    2.5758   .90244           .001*
  AN                                         33    3.3333  1.02062
  RT                                         35    2.6286   .84316
  Total                                     101    2.8416   .97706
Rating of tutorial: engaging
  ST                                         33    3.0303   .98377           .023*
  AN                                         33    3.6667   .92421
  RT                                         35    3.2000   .96406
  Total                                     101    3.2970   .98534
Rating of tutorial: value as a learning tool
  ST                                         32    3.9688   .82244           .148
  AN                                         33    4.3030   .58549
  RT                                         35    4.0000   .84017
  Total                                     100    4.0900   .76667
Rating of tutorial: entertaining
The data for this aspect of learners’ satisfaction were drawn from the participants’
response to the statement “This tutorial was entertaining.” The mean rating across the
groups was M = 2.84, which suggests that most participants did not find these tutorials
particularly entertaining. However, a one-way ANOVA (F(2, 98) = 6.978, p = .001)
revealed significant differences between the groups with regard to the ratings of how
entertaining the tutorials were (Figure 17). Post hoc comparisons using the Fisher LSD
test revealed that the ratings for the animated tutorial (AN) were significantly higher (p =
.001) than the ratings for the static tutorial (ST) and significantly higher (p = .002) than
those for the tutorial with a recording of a real teacher (RT). The means and standard
deviations are presented in Table 18. The additional analysis with the exclusion criteria
applied demonstrated similar results (Table Q1).
Rating of tutorial: engaging
The data for this aspect of learners’ satisfaction were drawn from the participants’
response to the statement “This tutorial was engaging.” The mean rating across the
groups was M = 3.29, which suggests that overall many participants did not find the
tutorials very engaging. However, a one-way ANOVA (F(2, 97) = 3.917, p = .023)
revealed significant differences between the groups with regard to the ratings of how
engaging the tutorials were (Figure 18). Post hoc comparisons using the Fisher LSD test
revealed that the ratings for the animated tutorial (AN) were significantly higher (p =
.008) than the ratings for the static tutorial (ST) and also significantly higher (p = .047)
than those for the tutorial with the recording of the real teacher (RT). The means and
standard deviations are presented in Table 18. In contrast, the additional analysis with the
exclusion criteria applied did not demonstrate any significant results (Table Q1).
Rating of tutorial: value as a learning tool
The data for this aspect of learners’ satisfaction were drawn from the participants’
ratings of the value of the tutorials as a learning tool in response to the statement “Please
rate the value of this tutorial as a learning tool.” The ratings were based on the five-point
Likert scale and ranged from “Very poor” (1) to “Very good” (5). A one-way ANOVA of
the ratings of the VC tutorials as learning tools (F(2, 97) = 1.952, p = .148) yielded no
significant differences between groups (Figure 19). The mean rating across the groups
was M = 4.09, which suggests that most participants agreed that these tutorials are
valuable learning tools. The means and standard deviations are presented in Table 18.
The additional analysis with the exclusion criteria applied demonstrated a similar result
(Table Q1).
Figure 15. Ratings of tutorial: overall satisfaction (VC)
Figure 16. Ratings of tutorial: helpful (VC)
Figure 17. Ratings of tutorial: entertaining (VC)
Figure 18. Ratings of tutorial: engaging (VC)
Figure 19. Ratings of tutorial: value as a learning tool (VC)
Summary
In short, the ratings of satisfaction were very similar across groups in Experiment
1 (VC). Thus, most participants across all three groups agreed that they liked their overall
experience working with these tutorials and that these tutorials had a value as learning
tools. On the ratings of how helpful, entertaining, and engaging the tutorials were, the
analysis demonstrated some differences. Thus, while overall the participants agreed that
the tutorials were helpful, the ratings for the animated tutorial were significantly higher
than the ratings for the tutorial with the recording of a real teacher. And although overall
the tutorials did not receive high ratings for the aspects of entertainment and engagement,
the animated tutorial received significantly higher ratings in these categories than the other
two tutorials.
Experiment 2 (SPV)
This section reports on the results of the statistical analysis to answer the question
“Do learners report different satisfaction ratings following the three modes of computer-based grammar tutorials under consideration in this study?” with regard to the tutorials
used in Experiment 2 (SPV). Table 19 presents the descriptive statistics for the data.
Rating of tutorial: overall satisfaction
A one-way ANOVA on the ratings of overall satisfaction from working with the
SPV tutorials yielded no significant differences between groups (F(2, 107) = .395, p =
.675). The mean rating across the groups was M = 3.8, which suggests that most
participants tended to like, rather than dislike, their overall experience (Figure 20). The
means and standard deviations are presented in Table 19. The additional analysis with the
exclusion criteria applied demonstrated a similar result (Table Q2).
Rating of tutorial: helpful
A one-way ANOVA on the ratings of how helpful the SPV tutorials were
demonstrated no significant differences (F(2, 107) = 1.013, p = .367). The mean rating
across the groups was M = 4.1, which suggests that most participants indicated that they
found the tutorials helpful for learning (Figure 21). The means and standard deviations
are presented in Table 19. The additional analysis with the exclusion criteria applied
demonstrated a similar result (Table Q2).
Table 19. Data for the satisfaction ratings (SPV)

Rating of tutorial          Group   N     Mean     Std. deviation   ANOVA Sig.
Overall satisfaction        ST      34    3.8824   1.00799          .675
                            AN      40    3.8250   .93060
                            RT      36    3.6944   .78629
                            Total   110   3.8000   .90665
Helpful                     ST      34    4.2059   .64099           .367
                            AN      40    4.1000   .70892
                            RT      36    4.0000   .41404
                            Total   110   4.1000   .60502
Entertaining                ST      34    2.6765   1.03633          .184
                            AN      40    3.0000   .84732
                            RT      36    3.0556   .89265
                            Total   110   2.9182   .92995
Engaging                    ST      34    3.1471   1.01898          .155
                            AN      40    3.5500   .87560
                            RT      36    3.2778   .84890
                            Total   110   3.3364   .92148
Value as a learning tool    ST      34    3.9706   .83431           .221
                            AN      38    4.0263   .82156
                            RT      35    3.7143   .75035
                            Total   107   3.9065   .80719
Rating of tutorial: entertaining
A one-way ANOVA revealed no significant differences between the groups with
regard to the ratings of how entertaining the tutorials were (F(2, 107) = 1.718, p = .184).
The mean rating across the groups was M = 2.92, which suggests that many participants
did not find the tutorials particularly entertaining (Figure 22). The means and standard
deviations are presented in Table 19. The additional analysis with the exclusion criteria
applied demonstrated a similar result (Table Q2).
Rating of tutorial: engaging
A one-way ANOVA revealed no significant differences between the groups with
regard to the ratings of how engaging the SPV tutorials were (F(2, 107) = 1.896, p =
.155). The mean rating across the groups was M = 3.34, which suggests that many
participants did not find the tutorials very engaging (Figure 23). The means and standard
deviations are presented in Table 19. The additional analysis with the exclusion criteria
applied demonstrated a similar result (Table Q2).
Rating of tutorial: value as a learning tool
A one-way ANOVA of the ratings of the SPV tutorials as learning tools yielded
no significant differences between groups (F(2, 104) = 1.533, p = .221). The mean rating
across the groups was M = 3.91, which suggests that many participants agreed that such
tutorials were valuable tools for learning (Figure 24). The means and standard deviations
are presented in Table 19. The additional analysis with the exclusion criteria applied
demonstrated a similar result (Table Q2).
Figure 20. Ratings of tutorial: overall satisfaction (SPV)
Figure 21. Ratings of tutorial: helpful (SPV)
Figure 22. Ratings of tutorial: entertaining (SPV)
Figure 23. Ratings of tutorial: engaging (SPV)
Figure 24. Ratings of tutorial: value as a learning tool (SPV)
Summary
Overall, the analysis of the Experiment 2 data for research question 2 did not
find any significant differences in the ratings of the satisfaction aspects of the tutorials
across the groups. The combined ratings from the three groups demonstrated that many
participants liked their overall experience working with the tutorials and they mostly
found them to be valuable and helpful learning tools. However, the participants gave
relatively low ratings for how entertaining and engaging the SPV tutorials were.
Compared to the results obtained in Experiment 1, the satisfaction ratings were somewhat
lower in Experiment 2, and there were no apparent differences in how the different modes
of the tutorials affected the satisfaction ratings.
Analysis across the experiments
This section continues the analysis for research question 2 by investigating the
general issues in satisfaction ratings and comparisons of the satisfaction ratings across the
experiments. Thus, this section presents the analysis of the general attitudes the
participants had toward the computer-based grammar tutorials. Further, I was interested
to see whether the same mode of tutorial (ST, AN, or RT) received similar satisfaction
ratings across the two experiments. This analysis could provide some insight on whether
the satisfaction ratings depended on the content of the presentation or solely on its mode.
Finally, the preferences toward a mode of tutorial were analyzed.
General attitude to computer-based grammar tutorials
The analysis of the ratings of learners’ general attitude to computer-based
grammar tutorials was based on the participants’ responses to the statements “I support
the idea that computer-based grammar tutorials should be available to me in my German
language classes” and “If such tutorials were available to me as part of my German
language class, I would use them.” The ratings were based on the five-point Likert scale
and ranged from “Strongly disagree” to “Strongly agree.” The analysis of the data
proceeded in two steps. First, the responses from all groups, with no regard to the tutorial
type used in the treatment, were combined to investigate frequencies. As can be seen in
Table 20, about 78% of the respondents either agreed or strongly agreed that such
tutorials should be available to them as part of their German classes (Figure 25) and more
than 80% of the participants agreed that if such tutorials were available to them, they
would use them (Figure 26). The second step in the analysis was the analysis of variance
to investigate whether the tutorial type had an effect on participants’ responses to these
questions. The one-way ANOVA (F(2, 98) = .192, p = .826) on the responses about the
availability of the tutorials and the one-way ANOVA (F(2, 98) = .132, p = .876) on the
responses about the potential use of these tutorials demonstrated that there were no
significant differences in the responses to these questions as a function of tutorial type.
In short, most participants in all groups agreed that they would support the idea of
having computer-based grammar tutorials available to them in their language classes and that they
would use such tutorials if they were available to them. The additional analysis with the
exclusion criteria applied demonstrated a similar result (Tables Q3 and Q4).
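The frequency tabulation reported in Table 20 can be reproduced from raw responses with a few lines of code. The following is a hypothetical sketch (the study itself used SPSS); the counts are taken from Table 20, so the printed percentages should match the reported ones.

```python
# Sketch of the frequency tabulation behind Table 20: collapse the Likert
# responses from all three groups and report each category as a count and
# a percentage. The response list is rebuilt from the counts in Table 20.
from collections import Counter

LIKERT = ["Strongly disagree", "Disagree", "Undecided", "Agree", "Strongly agree"]
responses = (
    ["Strongly disagree"] * 1
    + ["Disagree"] * 6
    + ["Undecided"] * 16
    + ["Agree"] * 43
    + ["Strongly agree"] * 35
)

counts = Counter(responses)
total = len(responses)  # 101 respondents
for category in LIKERT:
    n = counts[category]
    print(f"{category:<18} {n:>3}  {100 * n / total:.1f} %")
# "Agree" prints as 42.6 %, matching the reported figure
```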
Table 20. Responses about tutorials’ availability and use (all groups)

Response               Frequency   Percent
“Computer-based grammar tutorials of this kind should be available in my German
language class.”
  Strongly disagree        1         1.0 %
  Disagree                 6         5.9 %
  Undecided               16        15.8 %
  Agree                   43        42.6 %
  Strongly agree          35        34.7 %
  Total                  101       100.0 %
“If computer-based grammar tutorials of this kind were available to me, I would use
them.”
  Strongly disagree        4         4.0 %
  Disagree                 4         4.0 %
  Undecided               12        11.9 %
  Agree                   47        46.5 %
  Strongly agree          34        33.7 %
  Total                  101       100.0 %
Figure 25. General opinions about the need for tutorials
Figure 26. General opinions about the use of tutorials for learning
Consistency of ratings across experiments
To examine whether the ratings of a specific mode of tutorial were consistent
across the experiments, I used independent sample t-tests to compare the ratings that each
type of tutorial (ST, AN, and RT) received in Experiment 1 (VC) and Experiment 2
(SPV). The data for the comparison were based on the ratings of overall satisfaction from
working with the tutorial; the ratings of how helpful, entertaining, and engaging the
tutorials were; and the ratings of the value of the tutorials as learning tools (Figure 27).
The results of an additional analysis with the exclusion criteria applied can be found in
Tables Q5, Q6, and Q7.
Static tutorials
The results of the independent sample t-tests revealed no significant differences
in various ratings of satisfaction from working with the static tutorials across the two
experiments, which suggests that participants in both experiments gave similar ratings of
satisfaction after working with this mode of tutorial. The data and the results of the
statistical analyses are presented in Table 21.
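Because Table 21 reports group sizes, means, and standard deviations, a reader can recompute these independent-samples t-tests directly from the summary statistics. The sketch below (Python/SciPy, assuming the conventional equal-variances t-test) uses the overall-satisfaction row for the static tutorial; the resulting p-value should land near the reported .797.

```python
# Recomputing an independent-samples t-test from the summary statistics
# in Table 21 (overall satisfaction, static tutorial, VC vs. SPV).
# Assumes equal variances, the usual pooled-variance t-test.
from scipy import stats

t, p = stats.ttest_ind_from_stats(
    mean1=3.9394, std1=0.78817, nobs1=33,  # ST group, Experiment 1 (VC)
    mean2=3.8824, std2=1.00799, nobs2=34,  # ST group, Experiment 2 (SPV)
)
print(f"t = {t:.3f}, p = {p:.3f}")  # p is far above .05: no significant difference
```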
Animated tutorials
The results of the independent sample t-tests revealed no significant differences
on most aspects of satisfaction. A significant difference was found, however, for the
aspect of helpfulness: the participants rated the animated tutorial for the
verb conjugation significantly higher (p = .048) than the animated tutorial for the
separable-prefix verbs. The data and the results of the statistical analyses are presented in
Table 22.
Table 21. Satisfaction ratings for static tutorials across the experiments

Rating of tutorial          Group   N    Mean     Std. deviation   Sig. (2-tailed)
Overall satisfaction        VC      33   3.9394   .78817           .797
                            SPV     34   3.8824   1.00799
Helpful                     VC      33   4.2424   .50189           .796
                            SPV     34   4.2059   .64099
Entertaining                VC      33   2.5758   .90244           .673
                            SPV     34   2.6765   1.03633
Engaging                    VC      33   3.0303   .98377           .635
                            SPV     34   3.1471   1.01898
Value as a learning tool    VC      32   3.9688   .82244           .993
                            SPV     34   3.9706   .83431
Table 22. Satisfaction ratings for animated tutorials across the experiments

Rating of tutorial          Group   N    Mean     Std. deviation   Sig. (2-tailed)
Overall satisfaction        VC      33   4.1818   .76871           .082
                            SPV     40   3.8250   .93060
Helpful                     VC      33   4.3939   .49620           .048*
                            SPV     40   4.1000   .70892
Entertaining                VC      33   3.3333   1.02062          .132
                            SPV     40   3.0000   .84732
Engaging                    VC      33   3.6667   .92421           .582
                            SPV     40   3.5500   .87560
Value as a learning tool    VC      33   4.3030   .58549           .112
                            SPV     38   4.0263   .82156
Tutorials with a recording of a teacher
The results of the independent sample t-tests demonstrated no significant
differences on most aspects of satisfaction. Only the analysis of the entertainment value
of the tutorial revealed a significant difference (p = .042): participants considered
the tutorial in Experiment 2 (SPV) to be significantly more entertaining than the one in
Experiment 1 (VC). The data and the results of the statistical analyses are presented in
Table 23.
Table 23. Satisfaction ratings for the tutorials with a teacher recording across
experiments

Rating of tutorial          Group   N    Mean     Std. deviation   Sig. (2-tailed)
Overall satisfaction        VC      34   3.8824   .80772           .328
                            SPV     36   3.6944   .78629
Helpful                     VC      35   4.0286   .82197           .853
                            SPV     36   4.0000   .41404
Entertaining                VC      35   2.6286   .84316           .042*
                            SPV     36   3.0556   .89265
Engaging                    VC      35   3.2000   .96406           .719
                            SPV     36   3.2778   .84890
Value as a learning tool    VC      35   4.0000   .84017           .138
                            SPV     35   3.7143   .75035
In short, the satisfaction ratings for the same mode of the tutorial were consistent
across the experiments. In only two instances did the overall ratings differ significantly:
(a) The animated tutorial for VC was rated higher for being helpful than the animated
tutorial for SPV; and (b) the tutorial with a teacher recording in SPV was rated higher for
being more entertaining than the same mode for VC.

Figure 27. Satisfaction ratings across the experiments
Preferences for a tutorial mode
By the end of Experiment 2 (SPV), each participant had experienced two different
modes of computer-based grammar tutorials out of three possible ones. One of the
questions on the satisfaction questionnaire asked the participants to indicate their
preference for one mode of tutorial or another. The participants could also indicate that
they had no preference for either mode of tutorial. The collected responses are presented
in Table 24. As can be seen from the data, the animated tutorials for both VC and SPV
were selected more often than the ST and RT tutorials.
Table 24. Preferences for tutorial modes by experiment

Tutorial type for VC   Tutorial type for SPV   n    Preference (n)
ST (N = 29)            AN                      19   ST: 1, AN: 15, No preference: 3
                       RT                      10   ST: 6, RT: 2, No preference: 2
AN (N = 35)            ST                      15   AN: 13, ST: 0, No preference: 2
                       RT                      20   AN: 13, RT: 4, No preference: 3
RT (N = 29)            ST                      17   RT: 9, ST: 6, No preference: 2
                       AN                      12   RT: 4, AN: 4, No preference: 4

Note: The choice was only possible between the two modes because each participant worked with
only two of the three modes.
To investigate the question of whether the expected and observed distributions of
preferences differ significantly, the data from both experiments were combined (see
Table 25) and were subjected to a series of binomial distribution tests. The results of the
binomial tests demonstrated significant differences in favor of the animated tutorials
both when the choice was between the animated and the static tutorial types (p < .001)
and when the choice was between the animated tutorials and the tutorial with the
recording of a real teacher (p = .005). The binomial test yielded no significant result for
the preference between the static tutorials and the tutorials with the recording of the real
teacher (p = .50). These results led me to conclude that the participants clearly preferred
the animated tutorial over the other two types of tutorials.
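The binomial tests can be illustrated with the combined preference counts from Table 25. The sketch below is a Python/SciPy reconstruction rather than the study's SPSS output (whose two-tailed reporting conventions may differ); it tests the null hypothesis that each mode in a pair is equally likely to be preferred, counting only participants who expressed a preference.

```python
# Binomial preference test: of the 29 participants who expressed a
# preference between the static (ST) and animated (AN) tutorials
# (Table 25), 28 chose AN. Under the null hypothesis that both modes
# are equally likely to be preferred (p = 0.5), this is very unlikely.
from scipy.stats import binomtest

result = binomtest(k=28, n=29, p=0.5)        # AN chosen 28 times out of 29
print(f"AN vs. ST: two-sided p = {result.pvalue:.2e}")  # well below .001

# The ST vs. RT split (12 vs. 11) is nearly even, so the test cannot
# reject the null of no preference.
result_even = binomtest(k=12, n=23, p=0.5)
print(f"ST vs. RT: two-sided p = {result_even.pvalue:.3f}")
```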
Table 25. Preferences for tutorial modes across experiments

Comparison of the modes   n    Preference (n)
ST vs. AN                 34   ST: 1, AN: 28, No preference: 5
ST vs. RT                 27   ST: 12, RT: 11, No preference: 4
AN vs. RT                 32   AN: 17, RT: 8, No preference: 7
Summary for the quantitative analysis for RQ2
Overall, the analysis of the satisfaction ratings across the experiments
demonstrated that most participants liked the tutorials and favored the idea of having the
computer-based grammar tutorials as part of the materials available to them in their
language courses. Moreover, most participants indicated that they would use
such tutorials for learning. The analysis of the satisfaction ratings across the experiments
demonstrated that, for the most part, the ratings of satisfaction were quite consistent for
the tutorials in the same mode but with different content (VC vs. SPV). Finally, the
analysis of the participants’ preferences for one tutorial over another demonstrated
that the animated tutorials were preferred more often than the static tutorials and the
tutorials with the recording of a real teacher.
Qualitative analysis
This section presents the analysis of the qualitative data for research question 2: “Do
learners report different satisfaction ratings following the three modes of computer-based
grammar tutorials under consideration in this study? What factors or considerations
influence the learners’ satisfaction ratings for the tutorials?” The data for this part of the
analysis came from both open-ended short answers and comments on the satisfaction
questionnaires and from the post-study interviews. The analysis starts from the discussion
of the overall satisfaction from working with the study tutorials. It proceeds with a
general issue of why most learners perceived that computer-based grammar tutorials
could be valuable learning tools. Finally, I discuss the participants’ perceptions of the
advantages and disadvantages of the tutorial modes and their preferences for one mode
over the others.
The responses on the questionnaire varied in length, from one-word answers to
lengthier messages. The responses in
the interviews were mostly long and detailed. In this section, I draw on the longer
responses from both the questionnaire and the interview for the illustrative quotations
because they provide more insight into the learners’ perspectives. The responses from the
questionnaires and interviews appear in full or in fragments, unedited for content,
although I fixed apparent misspellings in the written responses. The questionnaire
responses appear without reference to a particular respondent, whereas the interview
responses indicate which participant the information
came from. It should be noted that some interview participants were non-native speakers
of English. This and other information about the interview participants can be found in
Table 14 in Chapter 3.
Overall satisfaction from working with the tutorials
The analysis of the overall satisfaction presented below was based on the
participants’ responses to the following questions on the questionnaire: 1) “Please
describe what you liked about this presentation”; and 2) “Please describe what you did
not like about this presentation.” The analysis also includes the interview responses to
similar questions.
The results of the quantitative analysis discussed earlier demonstrated that most
participants reported that they liked their overall experience working with the tutorials in
this study. With regard to what aspects contributed to the positive ratings, appreciation
for the content and organization of the information in the tutorials was a frequent
response. The participants found that the tutorials were well organized, the instruction
was easy to follow and understand, and the explanations were clear. Although some
responses were short, such as the fragments “Clear and precise” or “Simple and to the point,”
other responses were longer; for example, “Easy to follow, speaker was clear, material
presented in an understandable way.” A number of participants described these points in
more detail, for example:
I like how clearly the narrator speaks and how clearly and
uncluttered each slide is. It helped clear up the bit of lingering
uncertainty I had about verb conjugation this semester. The way it
was presented made verb conjugation seem very easy.
Several participants appreciated the segmentation of the content into simple steps:
I liked the clarity of the speaker and how the complexity of
constructing the sentences with separable and inseparable prefixes
were broken down into simple steps. Like the last tutorial, it was
easy to understand because it made things so simple by breaking
the processes down.
Other positive comments praised these tutorials for providing a review of these grammar
topics. Many respondents underscored that reviewing grammar material is very important
when learning a foreign language. The participants indicated that the work with the
tutorials reinforced their understanding of the target structures. As one participant wrote:
“I like reviewing topics as much as possible because little rules are easy to forget.”
When asked about what they did not like about these tutorials, some responses
suggested that the presentation was rather boring. The responses ranged in intensity
from “somewhat boring” to “very boring,” and one participant wrote, “It was extremely
boring!” While most participants simply stated how they felt,
some responses provided explanations as to what made the presentation boring. Here,
most explanations focused on the slow pace of the presentation. Yet, some participants
who indicated that they found the presentation to be slow also mentioned that for them
this perception was connected to the fact that they were already familiar with the rules:
“Speaker spoke a little too slow for me, but that’s probably because we've already learned
this stuff and it just seemed like review.” It should also be mentioned that many
comments throughout the questionnaire about the slow pace of the tutorial were rather
positive; for example, the participants suggested that the slow pace of the tutorial helped
them “to digest the information without feeling rushed.”
Another aspect of the tutorials that prompted some negative responses was the
lack of interactivity. The participants mentioned that they would prefer to type their
answers in the practice segments, rather than complete them in their heads. Some learners
also suggested that if the tutorial included such interactive tasks, they would prefer to
receive immediate feedback on their responses. Here is an example that nicely sums up
the responses of this kind:
I would have liked to interact with the program. Meaning when I
was asked to conjugate the verbs on my own, it would have been
better [if] I could have [written] in the conjugations instead of
doing them in my head, and have the computer tell me if I was
incorrect or right. I like to see my work.
Based on the issues presented above, I concluded that the study participants had
an overall positive attitude toward the study tutorials. They enjoyed working with the
tutorials because the well-organized presentation helped them review the rules, and they
found this review very useful for their language learning. The aspects that decreased
overall satisfaction with the study tutorials were mostly the slow pace, which could make
a tutorial appear boring, and the lack of opportunity to interact with the program by
typing in answers and receiving immediate feedback.
Views on the value of tutorial as a learning tool
The results of the quantitative analysis presented above demonstrated that the
participants generally considered the computer-based tutorials to be valuable tools for
language learning. This section provides the analysis of the data to delve into the
learners’ reasons behind their responses to this item on the questionnaire. The data for
this analysis come from several questions on the questionnaire: 1) “Please describe what
you see as advantages of this kind of the computer-based grammar tutorial in general,” 2)
“Please describe what you see as disadvantages of this kind of the computer-based
grammar tutorial in general,” and 3) “Please explain why or why not you would use such
tutorials,” as well as from similar questions during the interviews.
Perceptions of knowledge improvement
One of the main reasons that participants found these tutorials to be valuable tools for
language learning was their perception of knowledge improvement. It was
noticeable in the responses throughout the survey that the participants felt that their
knowledge improved after working with the tutorials. Earlier, I provided examples
of the participants’ responses that demonstrate that working with the tutorials helped
was more than just a review. For example, as one participant wrote: “I liked that it helped
review some things we went over last couple chapters and also introduced me to some
new things. It was [not] all review, I actually learned some things I didn't know.” Another
participant emphasized that although the topics were familiar, it was different from the
explanations in class because the tutorials presented the specific rules:
I liked that the different rules of the verbs were organized and
specific. In class we were never presented with those specific
rules, but we learned through experience what to do so that it was
more instinct than memorizing rules. However, I feel more
confident in my conjugations now that I know that there are
specific rules like that.
Taken together with the quantitative results discussed above, these responses suggest
that many participants felt that their knowledge of the rules improved substantially,
which may have contributed to the high ratings of the value of these tutorials as learning tools.
Advantages and disadvantages of tutorials in general
The questions that asked the participants to reflect on the advantages and
disadvantages of the computer-based tutorials in general were prefaced by a short
introduction: “Today you saw an example of a computer-based grammar tutorial. Such
tutorials can focus on different topics in grammar. The next several questions address
computer-based grammar tutorials of this type in general.”
Advantages
With regard to the advantages of these tutorials, the participants mentioned most
often that tutorials individualize the learning experience: “It provides a more one-on-one
learning environment,” or “It will be like a tutor, one-on-one where I get individualized
attention.” Many suggested that the computer-based grammar tutorials are convenient
because they allow learners “to work at their own pace” and, if needed, they can “pause
and give themselves more time to figure out the problems.” Others valued the option to
watch a video several times to reinforce their knowledge. For example, one person
described his opinion on the advantages of such tutorials in the following way: “Students
can view them as many times as they need. In some cases, I need to have things
explained to me multiple times to truly understand.” Many mentioned that if such
tutorials were available online, they could access them from anywhere and enjoy learning
from “the luxury of your own home.”
Two themes emerged from the responses about advantages of the tutorials: The
participants tended to compare the tutorials (a) to their in–class experience, and (b) to
learning from the textbook. With regard to the comparisons with classroom learning,
some participants indicated that one of the advantages of a tutorial is that learning with
tutorials is “more regulated” because explanations do not depend on the expertise or
teaching style of the teacher: “Easy to understand, all students would have the same
understanding of the grammar no matter who their professor was.”
Another important factor for this line of comparison was that in class the teacher
needs to repeat information several times until all learners understand it. This issue was
evident in the responses from two perspectives. First, learners who felt that they needed
more time to learn suggested that tutorials could help them keep up with the class. As one
participant explained it, “it allows students who are struggling to understand parts of the
course go back and get a more thorough lesson.” Second, this issue also came up from the
point of view of students who learn faster and dislike being slowed down by other
students. One interview participant explained why tutorials seem like a less distracting
way to learn because in class her learning depends on other students: “In the class there
are some people who can’t understand the lecture very well. Even though we know it
already sometimes, teacher explains [grammar] for a long time. It makes us a little bit
bored” (Participant 5).
When comparing the tutorial to the textbook, some suggested that the clear
explanations in the tutorial could be advantageous compared to how the information is
presented in the book. Several participants mentioned that grammar in the textbook
seemed less well organized and that they often failed to notice important grammatical
information when it was presented in little boxes on the side of the page. For example,
Participant 2 described the advantage of the tutorial over the book as follows:
You essentially have in a textbook the same information as you do
in the tutorials that we were given, but it’s generally in a little box.
And to the extent that you don’t need to sit there and make
yourself read it, it’s nice. And there is a flow to the tutorial and it’s
easier to pay attention to something that keeps going.
Later in the interview, this participant also offered a somewhat metaphorical vision of
this issue when he compared the tutorial to a caring mother:
I think it would work better when you’re supposed to do your
homework and you’re lazy and you don’t want to read the
grammar box, essentially these tutorials seem to me like they are
grammar box but out loud. So, it’s like your mom takes you by the
hand and she reads it to you because you are too damn lazy to do it
yourself. (Participant 2)
To summarize, it seems that several aspects shaped the participants’ opinions
about the advantages of the computer-based grammar tutorials. First, the participants
reported that working with the tutorials in this study helped them learn; therefore, many
suggested that these and similar tutorials would be valuable learning tools. The
participants also appreciated that such computer-based tutorials make learning more
individual by allowing them to proceed at their own pace. Assuming that the tutorials
were available online, the participants valued the easy access to them. Finally, comparing
the tutorials to grammar learning in class, many found the tutorials to present a more
organized, time-efficient, and reliable way to learn. When comparing tutorials to
grammar presentations in their textbook, many found that the tutorial was simpler than
reading the information in the textbook because the tutorial had a flow and a strong visual
appeal.
Disadvantages
With regard to the disadvantages of computer-based grammar tutorials in general,
the responses of the participants clustered around three main points. The following quote
nicely summarizes them:
The first disadvantage is the ability to do something else while
pretending to listen. There is also the lack of personalized help.
Most importantly however is the inability to ask questions or
receive clarification.
The first disadvantage of the tutorials is that it is easy to get distracted when
learning outside of the class because in class the teachers and classmates help keep the
focus on learning. Here is how one participant explained it in the interview:
I think in class you have less risk of getting distracted. Probably
because you actually have the teacher there watching you. You
kind of feel the obligation to pay attention. And in this [tutorial],
nobody is watching you so you can doze off if you want to, not to
complete or listen. (Participant 1)
The second disadvantage is that, unlike the teacher, the tutorial cannot provide
personalized help. In contrast to tutorials that are created for a general audience, an
advantage of the teachers is that they can tailor their explanations to their learners’
backgrounds and previous knowledge. An excerpt of an interview response makes this
point:
When you have a question, if we ask the teacher, the teacher
[gives] us the answer using the principle or using the experience
how he or she taught us in the class. So it is more helpful to
remember using the atmosphere or the context we learned in that
class. (Participant 5)
Similar to other responses of this type, this example demonstrates that learners value that
teachers can modify their explanations based on their knowledge of a particular class or a
particular student.
The third frequently mentioned disadvantage of tutorials is the inability to ask questions, get feedback, or receive extra help grasping the material. As before, the participants suggested that a great advantage of the regular classroom is that the teacher is always there to help when confusion occurs. For example, as one participant wrote: “If
something isn't explained well enough for someone in the video, they don't have a teacher
to interact with that can help them resolve their issue.”
Overall, the participants were fairly unanimous in what they pinpointed as the
disadvantages of the tutorials. Thus, many felt that a disadvantage of the tutorial
compared to the class is that it cannot provide personalized help based on learners’
previous knowledge. The participants also criticized that the tutorial does not allow them
to ask questions as they would in class. Finally, many suggested that when learning on
the web, they would be more tempted to get distracted from learning, while in class they
feel an obligation to listen to the teacher.
To sum up the participants’ perspectives on both advantages and disadvantages, it
seems that the responses provided grounds for several conclusions. The participants
valued the role of computer-based tutorials in allowing learners to work at their own pace
to improve their knowledge of grammar. As expected, many mentioned the benefits of
having tutorials online in terms of easy access. Also, some participants reported that
explanations in the tutorials could be more reliable, consistent, and visual than those
provided by the teacher or the textbook. Among disadvantages, the participants
mentioned that without the teacher’s control it might be easy to get distracted from the
presentation in the tutorial. Also, the tutorial could not answer questions like a real
teacher could.
Situations and conditions for using tutorials
The analysis of the multiple-choice responses on the questionnaire clearly
demonstrated that the majority of the participants (more than 80%) agreed that they
would use the tutorials if they were available to them as part of their language classes. In
this section, I look into the considerations behind these responses, drawing on both the questionnaires and the interviews. The data for this analysis comes
from the participants’ responses to the question “Please explain why or why not you
would use such tutorials” on the questionnaire and to the similar question in the
interviews.
Reasons for using tutorials
The participants who agreed that they would use the tutorials described various
situations that broadly fall into four main purposes: (a) review before a test or quiz, (b)
preview before a class to get familiar with new information, (c) review after the class to
consolidate knowledge, and (d) clarify challenging topics. The review for a test or a quiz
was one of the most frequent answers to this question both on the questionnaire and
during the interviews. For example, some participants suggested that it would be a more
convenient way to review compared to other options: “It seems like an easy way to
review for tests. Also, it is a lot more clearly displayed than notes in my notebook and
even pages in our German textbook.” Another participant explained during the interview:
I would definitely use the tutorial before a test, especially a
semester test where you have to review everything. And that type
of a tutorial, since it’s so concise and right to the point, it would be
a quick way to memorize and refresh what you already know.
(Participant 12)
Some participants indicated that although they liked the review, they would also
like to have such tutorials available before the teacher explains the grammatical topics in
class; in other words, to preview the material. Although several people expressed the
same idea, there were different reasons behind this response. Some suggested that the
purpose of the preview would be to avoid reading the assigned grammar in the book.
Others explained that the preview could help them become familiar with the topic
beforehand so they could use the time with the teacher to clarify confusing points and ask
questions. Some of the participants who responded this way also indicated that it would
speed up the grammar instruction in class. Here is an example from the interview:
It’s a good tool because you know people are [emphasized with
voice] going to use it. I think people will [emphasized with voice]
use it, if they know “Hey, I can get ahead of the class right away.”
That way, the next day I’m prepared and maybe we can move
along quicker in the class because we are already somewhat
prepared and we just got to brush through things and then we can
keep going and we’re moving along at a better pace. (Participant
11)
Another frequently mentioned use of the tutorials was as a tool to clear up confusion or to catch up after missing the teacher’s explanations in class.
Some participants in this category suggested that they do not usually ask questions to
avoid slowing the class down: “There are certain parts that I struggle on, that other
students don't, and vice versa. I would rather have these available online for review than
hold up the class by asking questions about past material.”
Finally, many participants also mentioned that they would use the tutorials as a
review tool after the teacher presented new information in class. In other words, these
learners believed that a review of the rule explanations could help them consolidate their
knowledge. The participants suggested that such review could help them “reinforce the
lessons from class,” and that such a review could even “spark questions” that they could
ask their instructor the next day. Several respondents also suggested that they would use
the tutorials both as a preview and review, if it helped them learn: “It would be nice to
have it initially before the lesson, and then the professor explain the grammar in more
detail and then again after the lesson to cement the material more deeply in my mind.”
Overall, most participants indicated that they see the main purpose of these tutorials as review, whether they review grammar before a test, because they missed a class, or because they need help learning the grammar topic in focus. Others suggested that they would
watch such tutorials before the teacher explained the grammar topic in class to help them
get ahead and to use class time with the teacher for clarification and practice.
Reasons for not using tutorials
Although most participants agreed that they would use the computer-based
grammar tutorials for learning, there were several participants who disagreed. Also,
among those who agreed that they would use them, some responses listed certain conditions for such use. Aside from several learners who stated that they simply
dislike learning with computers or find it unnecessary, most responses of this type
revolved around two issues. First, many emphasized that although the tutorials in the
study seemed like valuable learning tools, the respondents would welcome them only as a supplement to classroom instruction, not as a replacement for the teacher. While
most responses of this kind simply stated that they still prefer to have the teacher explain
grammar in class, one participant felt strongly about this issue by writing “I do not want
them to replace real teachers!”
The second issue concerned the accountability for watching the tutorials. Thus,
some indicated that they would use them only if it were either part of the class or part of
the graded homework, in other words, something mandatory. Others stated the opposite,
suggesting that they would accept working with these tutorials only on an optional basis.
Although most participants supported the optional use of the tutorials, the voices of those
with opposing views were quite compelling. The underlying theme to their responses was
that there needs to be accountability of some kind for working with the tutorials. Here is
how one participant expressed it in the survey:
I can't see me taking time out of my day just to watch tutorials like
the one presented. I would love it if these were required or part of
the class period. We could learn from the tutorials and if we had
any questions we could ask the teacher and have him explain
things more in depth.
Another participant explained it similarly during the interview:
I hope it will be mandatory because if it’s optional maybe I won’t
do it. Because I will use my time to do something mandatory first.
So, if that is not mandatory, it is optional, maybe sometimes I
would do something else first. So, I hope it’s mandatory. And if
it’s mandatory I think I hope that will count for some grade or
score, then I will absolutely do it. It’s best for me. (Participant 10)
Although this group of respondents made a case for making work with the
tutorials mandatory, a majority of the suggestions were in favor of keeping such tutorials
optional. The reasons for keeping it optional were twofold. A number of participants
suggested that they would watch the tutorials on an as-needed basis. For example, one
participant explained:
I think making it optional is helpful because then the people who
want to work hard and the people who deserve a good grade and
deserve being ahead in the class are the people who went out and
tried to get ahead. (Participant 11)
However, others explained their preference for the tutorial being optional by maintaining
that they simply do not like mandatory assignments or they find mandatory assignments
“irritating” or even “repulsive.”
To sum up, although many participants agreed that they would use the tutorials if
they were available to them, this agreement came with some conditions attached. Thus,
many only welcomed the tutorials as a supplement to the class and not as a replacement for teachers’ explanations. If tutorials were available, the learners would mostly use them
as a review before a test or after the teacher explained a new topic in class, for the
purpose of either getting a clarification or simply reinforcing new knowledge. Some
learners suggested that they would use tutorials as a preview to be better prepared for
class. The issue of whether the tutorials should be optional or mandatory emerged throughout the participants’ responses on the questionnaire, and I included it in the interviews to examine it in more detail. Here, there was a clear difference of opinion: Some learners made a case for making the tutorials mandatory to create an incentive in terms of grades,
whereas others expressed their clear dislike of mandatory assignments of this kind and
preferred to have the tutorials as one of the options to learn grammar.
Advantages and disadvantages of tutorial modes
This section presents the analysis of the participants’ perceptions of the three
modes of tutorials (static, animated, and with a recording of a real teacher) and the
comparisons among them. As explained in Chapter 3, each participant could work with
two out of the three possible modes of tutorials; for instance, only static for VC and
animated for SPV, or only animated for VC and teacher for SPV. During Experiments 1
and 2, the survey prompted the participants to describe what they liked or disliked about
the modes; also, during Experiment 2 they were asked to compare their experience in two
experiments and report on their preferences.
Static mode
The static tutorials did not attract as much attention or as many comments as the other two modes did. As for the disadvantages, this mode received slightly more responses about the slow pace and boring manner of presentation compared to the other two modes. Also, the participants indicated how hard it was for them to focus because the activity on the screen was not “stimulating” enough. When the participants described what they liked about this type of tutorial, they mostly focused on the organization of the content, the clear design, and the easy-to-understand instruction. However, these aspects describe the overall advantages of the tutorial rather than those of the
static mode in particular. In short, because the static tutorial did not include any
enhancements, its advantages were intertwined with the quality of the content. The
disadvantages, however, were more connected to the manner of presentation when the
absence of some visual or interactive enhancements made it hard for the participants to
focus on the content.
Animated mode
The animated mode of tutorial received many positive comments, which broadly
fell into two categories: Animations helped learners focus, and they helped explain
the rules better. Many responses both on the surveys and during interviews suggested that
the animated text in the tutorials helped them concentrate on what was happening on the
screen. For instance, as some participants indicated on the survey, the movement of the text made it easy to follow the instruction, stay engaged, and pay attention to the important
information. Several participants also mentioned that they appreciated the “flow” or the
“progression” they saw in the animated tutorials. For example, an interviewed participant
explained that he “liked that things came up one at a time in the slides and that they
weren’t just all popped up at once. It was easier to pay attention to the progression of
what you were learning” (Participant 15).
Other positive responses focused on the value of animations for learning. Here,
various aspects of animation received support from the participants. Thus, many
emphasized that the animations highlighted important aspects of the information and
“zoned in on” them, making them more salient. For example, as one participant wrote in the questionnaire, “Without the animations, I might not have noticed the changes
occurring within the words.” Another explained in the interview that “it was helpful to
understand because it shows you how you have to change the verb. So the animation
made us focus on the change and that was helpful” (Participant 5). Others suggested that
the animations helped them understand the topics better because it was “visually
stimulating” as an illustration of how to apply the rules. As one participant explained in
the questionnaire:
I really liked the animations because they were a visual
representation of what happens when you conjugate a verb.
Showing the letters literally break off the words and "splat" was a
great way of helping it stick in my brain.
During the interviews, the participants also often mentioned that the animations helped
them remember rules by providing a visualization that could be applied later during
practice. As one participant described it in the interview:
I think sometimes especially foreign language students have
problems visually seeing how the words go together. And just the
different coloring of the different parts of the verbs, I thought that
was really helpful. It would probably be really helpful to a lot of
students. (Participant 15)
Another participant responded on the questionnaire “I consider the animations
vital because it created a visual cue for the verb conjugations that we can replicate
individually and on our own terms,” suggesting that the animation provided a mental model
for learning the rule. One of the interviewed participants also expressed a similar idea by
saying “When I do it by myself I can just do the same thing this animation is doing”
(Participant 6) when he talked about why he found the animated tutorial very helpful for
learning. When answering the same question, another interviewed participant expressed
appreciation that the animations provided her with a way to visualize the rules:
If, for example, there was a tutorial and I didn’t understand the
information [in it], I would have a hard time trying to remember
the information and trying visualizing it in my head. If they gave
me a way to visualize it, that helped me, that would help me to
remember the information. I think coming up with both things
when you don’t know the material already—that would be pretty
hard. If you know the material well enough, after some time, you
come up with your own way to visualize it. (Participant 15)
Overall, the learners’ appreciation of the external visualization provided to them by the animations was in line with what I hoped to discover during the analysis.
However, some responses of this kind brought up another factor of learning through
external visualizations, which was less pedagogically encouraging because what some
participants appreciated was that learning with animations requires less thinking. For
instance, one of the participants suggested, “I think the visual form is better because
you’re spending less time thinking and pinpoint what the program is talking about”
(Participant 1). This idea was echoed by another interview respondent, who honestly said
“I’m kind of lazy. I guess I just like it to be laid out there for me, you know, with as little
thinking as possible. So, I liked it how they just presented it for me” (Participant 14).
The tutorial mode with the animated text did not trigger many negative responses
that were specific to the animations. Some participants indicated that the presentation was slow-paced and boring, something even the animations could not change. Other participants responded that the animations did not add anything valuable to the learning itself but merely made it somewhat more entertaining.
In short, the tutorial with the animated text and a voice-over narration seemed to
receive more positive than negative comments. The positive responses revolved around
the value of animations for helping learners stay concentrated on the instructional
presentation and helping them understand and remember the grammar rules. The negative
responses underscored that the animations made the presentations only slightly more
entertaining, if at all, and that they added little real value for learning.
Recording of a teacher mode
As in the case of the static and animated tutorials, some positive responses to the
mode with a recording of a teacher underscored the quality of explanations and some
negative responses expressed disappointment in the slow pace. However, the responses
that focused on the value of having a recording of a real teacher clearly fell into two
groups: those from the participants who liked it a lot and those who did not like it at all.
The participants who liked this mode underscored how personable they found this
presentation. Thus, many suggested that it was a clear advantage to have a real person in
front of them because it helped them concentrate, and it also created the feeling of a real
language classroom. For example, as one participant responded on the questionnaire:
“Since there is a real person, I could concentrate on the presentation.” One of the
interviewed participants also explained it in more detail:
I would say because it was more personal. I could see like there is
an actual human there as opposed to just a computer monitor
talking to me, you know how it is. And I don’t know, I felt like she
was in the room with me. It was more comfortable instead of just
like a robot talking to me. (Participant 14)
And here is how another participant expressed a similar idea on the questionnaire:
Having a real teacher adds so much more to the experience as
opposed to just having a recording of someone with no experience
or even a computer-generated recording. It almost felt as though I
were learning it in the classroom, though of course this is not the
case.
Also, one participant in the interviews (Participant 5) said that she liked this mode
because it was like a “cyber-lecture,” a format that she was familiar with from her studies
at her home university in Korea.
This participant also brought up another issue in favor of having a recording of a
real teacher. Similar to several other participants, she suggested that because a real person in the video creates the feeling of a real classroom, she felt compelled to pay
attention. The following is a fragment from our interview:
Researcher: If next time you worked with a tutorial and you had a
choice, which one would you choose?
Participant 5: The teacher. Because I think when there is someone
who watched us, we tend to focus on something or we tend to
study hard. It’s the same principle with that one, the teacher one, I
think.
Researcher: It’s still not real though. She can’t look at you and say
“Hey, focus!”
Participant 5: But the feeling may cause you do that.
Thus, the participants who reported liking this mode underscored that it made the instruction feel more personable because a real person was teaching. The
presence of a real teacher also made learning more comfortable because it resembled
traditional classroom learning. Also, similar to the regular class, the presence of the
teacher in the instructional presentation kept learners engaged.
The presence of the human teacher was the main aspect that contributed to the
positive responses about this tutorial mode. At the same time, however, it was also the
main aspect that contributed to the negative responses about the same mode. Thus, some
participants expressed their frustration with having a human teacher in the tutorial. The frustration was caused by the false impression of a real classroom that the tutorial created. The
participants suggested that an important aspect of learning from a real teacher in a real
classroom is the ability to ask questions and receive answers and that such interactivity
was lacking in the tutorial because the teacher in it was “just a presence” (Participant 2).
The participants did not like that the teacher could not directly engage them in learning and that they could not engage her in communication. For example, as one
participant emotionally stated in the interview: “I like teachers. So the fact that I had a
teacher in front of me who I cannot ask questions of just pisses me off!” (Participant 2).
Other responses against this mode of tutorial focused on how distracting the person in the video was. The teacher persona would naturally attract the learners’
attention and distract them from learning. As someone explained: “I guess I get distracted
easily so I would look over at the lady or concentrate on how she speaks.”
To sum up, the participants’ responses clearly were divided between those who
liked the teacher persona and those who did not. The supporters of this mode felt it was
more personal and traditional. Those who did not find that adding a teacher persona was
valuable explained that they felt both uncomfortable because there was only an illusion of
interaction and distracted because they naturally tended to look more at the teacher than at the slides with information.
Comparisons among the tutorial modes
The analysis above focused on the aspects of the tutorials that influenced the
participants’ satisfaction ratings. This section continues this analysis by focusing on the
aspects of the tutorials that were important when the participants had to decide on their
preference, or the lack thereof, for one mode over another. When asked about a direct
comparison, some aspects of the tutorials came to the forefront more clearly.
Preference for the static mode
None of the participants selected the static mode in preference to the animated
mode. But one of the interview participants explained that he perceived an advantage of
the static mode over the animated one by saying that “the advantages with the animated is
that it keeps you concentrated. But with the static, you are more concentrated on the big
picture instead of on whatever is moving” (Participant 1). Another interviewed participant
mentioned that he felt somewhat “biased” when making his choice after he watched the
animated tutorial first:
And without the animations it really seems dead. Where it was
something that anyone could come up with and it doesn’t seem
nearly as valid compared to the one with the animations. So, I
think coming from the first one to the second one it feels like that
there is a huge drastic change all of a sudden. (Participant 11)
In the choice between the static and the teacher mode, the static mode was
preferred by those who found the teacher to be distracting. The responses were similar to
the following response from the questionnaire: “I liked the static text better because
having the presenter on the screen was very distracting. My eyes were drawn towards her
and away from the slides, and found myself fighting to pay attention to the slides.”
In short, the static mode of tutorial was not popular with participants unless it was selected as an alternative to a mode that was clearly disliked.
Preference for the animated mode
As mentioned before, the animated mode was always selected when the
participants were comparing it to the static mode. Here, the aspect that was most
frequently cited to explain the choice in favor of the animated mode was that the
animations made it more entertaining and engaging to watch the tutorial. For example, as one participant explained: “I feel like the colors and the action of the words on the screen
made it easier to pay attention. It kept me more entertained and engaged.” Another aspect
that was mentioned several times was that the animations helped students learn the rules
better. Here is an example from an interview response:
I would say the animation helped me actually to look at the screen
and I felt like I might have skipped on the static one, just cause I
wasn’t really engaged with what was going on on the screen. I felt
like it was doing it by itself, not really teaching me where the verb
endings go, what your brain has to do when you’re trying to make
a change for like a verb ending. (Participant 6)
When the animated mode was compared to the mode with the recording of a real
teacher, it was often selected because the participants felt it was more engaging and
helped them understand the rules better. But the other frequent reason for preferring the
animated mode over the teacher mode was that the participants found the teacher to be
too distracting. Here is an example of such a response from the questionnaire:
The animated text was just the one screen with the text, and I felt
like it wasn't trying to compete for my attention with the teacher.
Having the teacher next to the slideshow split my attention
between the teacher and the slideshow.
In short, the animated mode was clearly preferred over the static mode because
animations helped engage the participants more. This mode was also preferred when the
recording with the teacher was an alternative, but mostly because the participants had
negative attitudes toward the teacher mode.
Preference for the mode with a recording of a teacher
When the participants could indicate their preference and the teacher mode was
compared to the static or animated mode, the responses mostly emphasized that the real
person made it more engaging. For example, similar to other responses, one participant
explained that “it was more engaging to see a person speaking and explaining the lesson.
The lesson without a real person felt too distant.” One participant also indicated that
having a “real guide for the tutorial helps to give the atmosphere of a real classroom
situation. This in turn helps to retain knowledge and interest in the material.”
In short, the teacher mode was selected in preference to the other two modes by those participants who found that having a real teacher in the video reminded them of a real classroom experience and engaged them more in learning.
Summary for research question 2
The quantitative analysis of the satisfaction ratings demonstrated some mixed
results. Overall, the modes received similar ratings with no statistically significant
differences. However, the animated mode in Experiment 1 (VC) was rated significantly
higher for entertainment value and for engagement than the other two modes, and it was also rated as significantly more helpful than the tutorial with the recording of a real teacher.
The qualitative analysis provided more insights into the reasons behind the
learners’ satisfaction ratings for the tutorials. It seemed that the participants liked the
modes that were enhanced with animations or the recording of a real teacher better than
they did the static mode. With regard to the positive aspects of the tutorials, the
participants appreciated that the content was well organized and that the tutorials helped
them learn these grammar structures in a way that was easy to understand. Many
participants found that animations were valuable additions to the tutorials because they
helped them to concentrate on the instruction and remember the rules better by presenting
them visually. The participants’ responses seemed divided with regard to the value of
having a real teacher in the tutorials. Some participants found it to be more engaging and
comfortable, whereas others felt that they were distracted and that the teacher failed to
engage them in learning. On the whole, though, many participants supported the idea of
having computer-based tutorials made available to them in their language classes. Many
noted, however, that they would welcome such tools only as a supplement to classroom
learning, not as a replacement for it.
Summary
In summary, the analysis of the data pointed to several trends in the responses to
the research questions in this study. First, in both experiments, the work with the study
tutorials improved learners’ knowledge; however, different tutorial modes did not have a
specific impact on this improvement. Second, the participants reported their satisfaction
from working with the tutorials; however, their satisfaction ratings depended on various
factors. These issues are discussed in more detail in Chapter 5.
CHAPTER 5
DISCUSSION
This chapter concludes this study by discussing the results of the quantitative and
qualitative analyses. The discussion begins by summarizing the findings of each research
question, and at the same time it provides a perspective on how these findings fit within
current research and practice. Then I offer my view on the theoretical and practical value
of my research and mention the limitations of the study that may restrict the applicability
of the findings. Finally, I offer a summative conclusion to this study.
Findings for research question 1
The first research question in this study investigated the effects of the three modes
of computer-based grammar tutorials on the learners’ knowledge of two structures in
German. By looking at knowledge improvement from multiple perspectives, the
quantitative analysis demonstrated that the participants’ knowledge of the target
structures improved significantly, no matter which mode of tutorial the participants
worked with. Further, the analysis of the ratings on the questionnaire suggested that after
working with the tutorials, most participants felt that their knowledge improved, again
with no differences according to the mode of tutorial. Thus, these findings point to the
effect of instructional intervention, in line with research on explicit instruction
demonstrating that this type of instruction promotes higher accuracy (e.g.,
Chaudron, 1991; R. Ellis, 1996; Norris & Ortega, 2000).
Although the analysis demonstrated significant effects for knowledge
improvement from the pre- to the post-test, no significant differences were found with
regard to the tutorial mode. Thus, it did not matter whether learners worked with the
basic type of tutorial (static text and a voice-over narration), with a tutorial enhanced by
animations, or a tutorial enhanced by a recording of a real teacher—they all improved
their knowledge. This finding is in line with the previous research that found no
significant effects of animations or of a recording of a human teacher (Caplan, 2002; Moreno
et al., 2001). Like Caplan (2002), I believe that a possible reason for the lack of differences
between the static and animated modes was the efficient organization of the content.
Indeed, in my study the participants appreciated that the content was well organized and
that grammar rules were presented in an informative and easy-to-understand manner.
Because all tutorials were based on identical content and were presented within the same
period of time with the information segmented in the same manner, the fact that no
statistically significant differences were found with regard to the mode of presentation is
not that surprising.
In short, the findings for this research question demonstrate that there is a strong
positive effect of the computer-based grammar tutorials of various kinds on learners’
knowledge of the target structures that were the focus of this study.
Findings for research question 2
The second research question focused on the effect of the modes of tutorial on
learners’ satisfaction. The analysis of the quantitative and qualitative data for this
research question demonstrated that the participants overall liked working with the
tutorials. As presented previously, the participants found that the organization of the
information in the tutorials helped them review the grammar topics efficiently. Most
participants agreed that tutorials of this type are valuable learning tools that should be
available to them for language learning. However, in line with previous research (Felix,
1997), the analysis of the qualitative data further found that most learners consider such
tutorials valuable as a supplement to classroom learning, rather than as a potential
replacement.
The effects of the modes were somewhat mixed. The quantitative data
demonstrated some differences in the ratings in favor of the animated mode, in that the
learners found it to be more helpful, engaging, and entertaining in Experiment 1 (VC),
but such differences were not found in Experiment 2 (SPV). Moreover, when compared
across the experiments, the animated mode for VC received significantly higher ratings
of helpfulness than the same mode for SPV. A possible explanation for this difference
may stem from the different nature of the target structures. As discussed previously, verb
conjugation is a morphological topic, whereas separable-prefix verbs are a matter of
syntax. One can assume that it may be easier for learners to imagine the changes in the
position of a target structure in a sentence (syntax) than changes in its form
(morphology). The qualitative data provided a more nuanced picture: the tutorials
enhanced by animations or by a recording of a teacher both received more positive
responses overall than the basic static presentation.
My goal with this research question was not only to look at various satisfaction
ratings, but also to delineate factors or considerations that influence these ratings. Based
on the analysis of the data, it seems that there are four main factors that influence
learners’ level of satisfaction and shape their attitudes towards the computer-based
grammar tutorials.
The first factor is the practical value of the tutorials. For learners, to have a
grammar tutorial available as an option means that they have more tools to choose from
to improve their knowledge. Thus, many respondents suggested that they need tutorials of
this type to help them study ahead so as to be better prepared for class or to review when a
problem arises. Similar to Zhang et al. (2004) and Gilby (2004), some participants
reported that they tend not to ask questions in class for fear of slowing down the rest of the
students. For these participants, the availability of tutorials could mean additional help
without loss of face.
The tutorials also seem practical from another perspective. In contrast to Tse’s
(2000) finding that learners get frustrated when class instruction focuses on the top
students, several participants in my study expressed frustration that classroom
instruction is often made repetitive in order to help slower students with the material.
Participants who represented the perspectives of both groups maintained that if
such tutorials were available for study outside of class, they could level the playing
field during classroom learning.
In a different vein, many participants indicated that having access to tutorials is
more convenient than looking for the same information in the book or trying to contact
their professors outside of their office hours. From either perspective, the learners who
liked the tutorials believed that they make the learning process more comfortable.
The next factor that influenced positive ratings of satisfaction was connected to
the fact that computer-based grammar tutorials afford individualized learning. The ability
to cater to individual differences in learning has often been considered one of the main
advantages of computer-based learning (Ahmad et al., 1985; Dodigovic, 2005; Gilby,
1996; Heift & Schulze, 2006; Oxford, 1995). This aspect came up in a number of
responses when the learners suggested that they liked the idea that they could access the
tutorials from anywhere, and they could pace the tutorial and re-play the content as many
times as they wished.
Interactivity is the next factor that had a strong influence on the satisfaction
ratings; in this study, its effect was negative. Participants’ negative comments on the
satisfaction from working with the tutorials were closely connected to how much
interactivity they expected from a computer-based program. Many learners suggested that
the tutorials could be improved by adding various kinds of activities, such as typing,
drag-and-drop, or selecting a response, and by receiving feedback on their responses. It
should be noted that I made a conscious choice to exclude interactive features from the
main study, because during the pilot study the availability of the interactive features
contributed to inconsistent results: some learners made good use of them while others
skipped them completely. However, the suggestion of making computer-based
tutorials interactive is certainly valuable in practical terms and should be applied to
designing computer-based tutorials for language learning.
Finally, the satisfaction ratings seemed to be connected to how engaged the
participants felt during the learning process. Here, the most basic form of the tutorial,
although it showed a positive effect on knowledge improvement, was left behind when
the learners reported on their preferences for a tutorial mode. Thus, both animations and a
recording of a human teacher seemed to enhance the learners’ perception of engagement
with the tutorial. Of the two, the animated tutorial received more consistent support, with
learners reporting that animated text drew their attention to important information and that
it visually presented what happens to the words when the grammar rules are applied. The
benefits of the animations, such as their visual appeal that helps make processing of
information easier, have been previously noted in the studies on animation (Caplan, 2002;
Roche & Scheller, 2008) and in the instructional design literature (e.g., Betrancourt, 2005;
Lewalter, 2003; Schnotz, 2002).
With regard to enhancing a tutorial with a human persona, the attitudes were both
negative and positive. For those participants who felt that the teacher persona was
unnecessary or even distracting, this aspect produced lower satisfaction ratings. For
those who found this type of presentation more personable and comfortable, this aspect
contributed to increased satisfaction. This finding is again in line with the predictions in
the literature: some suggest that adding a recording of a real person can produce
positive effects because the presence of a human would increase learners’ drive levels
and motivation (Lester et al., 1997; Park & Catrambone, 2007; Zajonc, 1965), whereas
others warn that a human character can be a seductive detail that distracts attention from
learning and increases the cognitive load (Moreno, 2005; Moreno et al., 2001).
In short, the findings with regard to the satisfaction ratings demonstrate that
learners have overall positive attitudes toward computer-based grammar tutorials, and
that these ratings are affected by such aspects as the practical value of the tutorials for
learning, the affordances of individualized learning, the interactivity of the tutorial
presentation, and the level of engagement with the tutorial.
Limitations of the study
There were three main limitations of my study. My goal of making the study
methodology highly controlled to allow comparisons between the modes produced the
first limitation of the study. As explained in Chapter 1, I selected the two target structures
that were the focus of this study—regular verb conjugation and separable-prefix verbs—
because they are purely grammatical topics and do not depend on the context of the
sentences in which they occur. However, many topics in grammar are intertwined with
the lexical meaning of the surrounding words and the context. In this regard, the effect of
the tutorials on learners’ knowledge that my study demonstrated is limited to the
grammar topics that can be presented as a set of clearly defined rules.
The measures of knowledge improvement are another limitation that stems from
control over the design and measurements. The study instruments elicited constrained
responses on the pre- and post-tests to measure the learners’ knowledge of the rules.
However, the use of tests with constrained responses has been previously criticized for
favoring explicit methods of instruction and, thus, somewhat exaggerating the benefits of
the form-focused instruction. For instance, in his critique of Nassaji and Fotos (2004),
Truscott (2007) asserted that researchers whose studies demonstrate beneficial effects of
grammar instruction rely mostly on formal tests. According to Truscott (2007), the
arguments in favor of grammar instruction can be valid only when improvements in
accuracy are found on tests of communicative ability.
The slow pace of the tutorials was the third limitation of the study. Although most
participants reported that they found the pace comfortable, many others perceived the
tutorials as too slow. This reaction may have influenced the outcomes in terms of both
learning and satisfaction ratings.
Overall, the limitations of this study do not appear severe enough to jeopardize
the findings of the study. However, they definitely put the findings into perspective.
Theoretical value of this study
Grammar has long played an important role in language learning and was once
believed to be the cornerstone of learning a foreign language. However, the rigid
grammar-translation approaches that characterized language instruction in the 19th and
first half of the 20th centuries could not provide learners with the liveliness of real
language. In the 1950s, the stiffness of grammar-oriented teaching was overcome by a
variety of more naturalistic approaches that gave language learning a more
communicative orientation. With time, however, doubts emerged from practice and
research about whether naturalistic approaches
could promote accuracy to the same degree as fluency (e.g., Higgs & Clifford, 1982;
Hinkel & Fotos, 2002; Long, 1996; Richards, 2002; Robinson, 2003; Skehan, 1996). In
the 1980s, this realization triggered a turn to the idea that language learning should
include a focus on form in addition to the focus on meaning (e.g., DeKeyser, 2005;
DeKeyser & Juffs, 2005; Dörnyei, 2009; N. Ellis, 2005; Larsen-Freeman, 2003;
Lightbown & Spada, 2006). Form-focused instruction became popular with practitioners
and researchers because it allowed for a variety of techniques to be applied in the
classroom without compromising the overall orientation to meaning. With time, however,
another set of problems emerged. Incidental focus on form could only provide a
fragmented knowledge of grammar, rather than a systematic one. Further, interweaving
focus on form with the focus on meaning can lead to misunderstandings in the discourse
when learners fail to follow the teacher’s switch of focus. Also, there was no agreement
within the field on how the right balance between these two components could be
achieved. In the next decade, educators started to look for ways of achieving the right
balance between the communicative and conscious orientations of language learning
without compromising either one, and some advanced the idea of computer-based
tutorials as a tool for achieving this balance (e.g., Garrett, 2009; Gilby, 1996; Hubbard &
Siskin, 2004; Nutta, 2004).
The theoretical value of my study is that it brings to light computer-based
grammar tutorials as a potential solution to the problem of balance. The findings of my
study demonstrate that these tutorials can be effective² tools for explicit grammar
learning and that most learners appreciate what these tools can afford. In theoretical
terms, the computer-based tutorials can serve as an “explicit jumpstart” (DeKeyser &
Juffs, 2005, p. 442) by offering a systematic view on grammar topics by means of
deductive rule-based explanation. In line with R. Ellis’s (2002) idea of a parallel option
of grammar teaching, in which form-focused activities are taken out of an overall
meaning-oriented instruction, these tutorials can take grammar instruction outside of
class, offering learners the ability to study grammar according to their individual
learning preferences.
The value of learning grammar material separately has already been noted in the
literature by Fotos (2002), who suggested that “learners benefit from formal instruction
prior to meaning-focused activities because such instruction helps them activate their
previous knowledge of the target structures and promotes their attention to the forms they
will encounter” (p. 137).
² In this study, there was no control group. However, during Pilot II (see Chapter 3 for
details), administering a test-retest to a similar group of learners did not produce any evident
differences. It should be noted that this group is not included in this study as a control group
because a different version of the posttest was used. Nevertheless, the absence of apparent
changes in learning outcomes or test difficulty ratings during the simple test-retest, together
with the significant changes from the instructional intervention in this study, warrants the
conclusion that the study tutorials were effective.
The field of computer-assisted language learning today is also in a state of transition
with regard to grammar instruction. In time, parser-based applications may emerge as
the most individualized form of grammar instruction, ones that can not only parse the
learners’ responses based on the system of rules in a language, but also adjust the
explanations to the learners’ level. At present, however, these applications are far from
flawless. Similarly, the other main direction in CALL practice—communicative
CALL—falls short of providing language learners with sufficient grammar instruction
because the main focus of the communicative CALL projects is on collaboration and
communication. Thus, the shortcomings of the current orientations in the field prompted
some educators to look back at the computer-based tutorials that were popular at the
beginning of CALL. However, decades of development in language learning and in
CALL have discredited the original conceptualizations of the early tutorials, which
mostly provided mechanistic practice and constrained feedback. Thus, educators
today suggest that if the tutorial is to make a comeback, it needs to be reconceptualized.
The new tutorials should focus on raising learners’ consciousness and awareness about
linguistic form by providing explanations and opportunities for practice (Dodigovic,
2005; Garrett, 2009) and be “an extension through time and space of the teaching
presence of its designer” (Hubbard & Siskin, 2004, p. 457).
A further theoretical contribution of this study is that it demonstrated that the tutorial
can indeed teach grammar to learners effectively and that learners appreciate the fact that the
tutorial helped them learn. Most learners in my study suggested that their knowledge
improved after working with the tutorials. Also, other research (Levine, 2003; Schulz,
1996, 2001) demonstrates that learners often connect their knowledge of grammar to how
well they know the language, which is one of the reasons that they expect grammar to be
taught. Teachers, however, sometimes fail to realize or acknowledge their learners’
expectation for more grammar instruction. The availability of computer-based grammar
tutorials can successfully ameliorate the discrepancies between the teachers’ and learners’
expectations of grammar instruction and prevent them from having a detrimental effect
on learning.
Practical value of this study
The practical value of this study is that it provides answers to questions that
teachers face if they decide to develop a computer-based grammar tutorial. First, the
study demonstrates that it is worthwhile to create computer-based tutorials because they
can help students improve their knowledge of grammar. Although this study focused only
on the comparison of tutorial effects between modes, the results of the pilot study
demonstrated that grammar instruction with tutorials can be at least comparable, if not
superior, to traditional face-to-face instruction. Further, if teacher-designers face the
question of whether or not to spend time and effort on enhancing the tutorials with
animations or a video recording of a teacher, the findings of my study suggest that it is
not necessary with regard to knowledge improvement, because the content and the
organization of information in the tutorial seem to be more important than the mode of
presentation. However, based on the other results of my study, I suggest that learners’
satisfaction from working with the tutorials will be greater if the presentation manner is
enhanced in some way. If a teacher faces a choice between the two types of enhancement,
the results of this study lead to the recommendation to create a tutorial with animated text
and a voice-over narration, because the analysis of the data suggests that this mode is
supported by the learners’ ratings more consistently.
In Chapter 2, I mentioned that there are no systematic guidelines in CALL for
creating computer-based tutorials of high quality. For this study, I turned to the field of
instructional design and adopted design guidelines from Mayer’s cognitive theory of
multimedia learning and related areas of research. In this regard, the practical value of the
study is that it appears to support the application of these guidelines for creating
computer-based grammar tutorials. The analysis of the design and satisfaction
questionnaires in this study demonstrated that most learners liked the design of the
tutorials and appreciated the overall presentation manner.
Finally, the results of this study can be of practical interest for educators and
administrators who are facing a switch of their language classes to distance and hybrid
course offerings. The findings suggest that computer-based grammar tutorials lend
themselves to distance learning. This finding, however, should be interpreted with
caution, because the qualitative analysis of my study revealed that many learners
appreciate such tutorials as a supplement to their in-class learning, but not as its
replacement. A number of participants underscored that they prefer learning with a
teacher because they feel that a teacher can help them in cases of trouble or
misunderstanding much better than a machine could.
Suggestions for future research
The findings of this research warrant more studies with a focus on explicit
grammar instruction in general, and on computer-based grammar tutorials in particular.
For example, research in SLA and foreign language pedagogy can use computer-based
grammar tutorials to investigate empirically R. Ellis’s (2002a) suggestion to
implement grammar instruction as a parallel option, meaning that it is taken out of the
meaning-oriented context of teaching and presented according to its own syllabus. A
study of this type should be longitudinal in order to see whether instructing students
in grammar separately from and parallel to communicative teaching indeed increases the
accuracy of the learners’ production and enhances their awareness of grammatical forms.
When R. Ellis (2002a, 2002b) advanced this idea, he suggested that the rules in the
parallel option should be presented in an inductive, or discovery-based, way. Thus, it
would be interesting to not only examine the effects of the tutorials in the parallel option,
but also compare the effectiveness of inductive and deductive explanations.
Another fruitful direction for future research stems from replicating this study
with different languages, different target structures, and with different levels of learners.
As described in the section on the limitations of this study, I used the target structures
that may be considered purely grammatical because the application of the rules does not
depend on the context of the sentence. Thus, it would be worthwhile to replicate this
study with other morphological and syntactic rules and extend it to include less
strictly defined grammar topics. In a similar vein, other studies can replicate my research
by including less constrained measurements of knowledge. It would be interesting to
investigate whether the improvement that I observed from the pre- to the post-test
diminishes with time or whether such an explicit jumpstart increases the learners’ overall
awareness of the forms and in the long run improves accuracy in autonomous production.
Although I found no significant effects of animations on learning
outcomes, there was some indication that the groups that worked with the animated
tutorials showed slightly higher knowledge improvement (Tables 16 and 17). The issue is
whether this difference in knowledge improvement would be more marked with different
structures or for students at a different level of instruction. Thus, research can continue to
examine the effect of animations. Based on the findings of this study and the results of
the pilot, animations appear to have a beneficial effect on learning, both because they
help learners visualize the rules and because they increase the learners’ engagement with
the tutorials.
Similarly, the field of CALL needs more experimental investigations on the
effects of including recordings of a real teacher in grammar tutorials. As presented in
Chapter 1, this format is quite common in video tutorials offered online on
self-publishing websites, such as YouTube.com. Current theory makes two opposing
predictions about recordings of a real teacher. The first is that humans in the recording naturally
attract learners’ attention and create a social bond that, in turn, increases learner
motivation and promotes deeper learning. The second is that because learners tend to
attend to a human, it will distract them from learning and, thus, produce fragmented
processing. The findings of this study seem to bear out both predictions: I found a distinct
split in opinions between those who liked the teacher in the tutorial and felt that the
teacher’s presence helped them learn, and those who found the teacher persona
distracting and, therefore, detrimental to learning. This issue deserves a separate
investigation that could clarify whether we can manipulate the instructional design to
reduce this difference of opinion.
Conclusion
Lightbown and Spada (2006) posed the question “How can classroom instruction
provide the right balance of meaning-based and form-focused instruction?” (p. 180). The
study presented here affords one possible answer to this question: The use of computer-based grammar tutorials could certainly be a viable method of providing such a balance.
Computer-based grammar tutorials can provide learners with opportunities for conscious
learning and help them establish a solid systematic base for language acquisition without
compromising the overall meaning-focused orientation of the language classroom.
Further, this study provides sufficient grounds to believe that learners will welcome
tutorials because such learning tools can be adjusted to fit their individual learning styles.
I started this dissertation by describing a short encounter with a student with
strong opinions about computer-based grammar tutorials. That scene left me feeling
uncertain about what kind of outcomes my research would bring. As I write
these final words, I feel that the uncertainty has been replaced by a feeling of
accomplishment. I can now state that I have found an answer to the question that framed
my investigation: Are computer-based grammar tutorials effective and welcome tools to
review grammar? The answer is yes, they are. They are effective because they can help
improve learners’ knowledge. They are welcome because they can offer more
opportunities for individualized learning. All things considered, at a time when educators
are looking for new ways to improve learners’ accuracy both in mainstream language
learning and in computer-assisted language learning, computer-based grammar tutorials
have a strong chance to make their comeback as a valuable tool for language learners.
REFERENCES
Abraham, R. G. (1985). Field independence–dependence and the teaching of grammar.
TESOL Quarterly, 19(4), 689–702.
Ahmad, K., Corbett, G., Rogers, M., & Sussex, R. (1985). Computers, language learning
and language teaching. Cambridge, UK: Cambridge University Press.
Allen, E., & Seaman, J. (2006). Growing by degrees: Online education in the United
States. Retrieved from http://www.sloan-c.org/resources/growing_by_degrees.pdf
Andrews, S. (2007). Teacher language awareness. Cambridge, UK: Cambridge
University Press.
Ayres, P., & Sweller, J. (2005). The split-attention principle in multimedia learning. In
R. E. Mayer (Ed.), The Cambridge handbook of multimedia learning (pp. 135–146).
New York, NY: Cambridge University Press.
Ayres, R. (2002). Learner attitudes toward the use of CALL. Computer Assisted
Language Learning, 15, 241–249.
Baddeley, A. D. (1992). Working memory. Science, 255, 556–559.
Bade, M. (2008). Grammar and good language learners. In C. Griffiths (Ed.), Lessons
from good language learners (pp. 174–184). Cambridge, UK: Cambridge University
Press.
Baines, L. A., & Stanley, G. (2000). We want to see the teacher: Constructivism and rage
against expertise. The Phi Delta Kappan, 82(4), 327–330.
Bax, S. (2003). CALL—past, present and future. System, 31, 13–18.
Betrancourt, M. (2005). The animation and interactivity principles in multimedia
learning. In R. E. Mayer (Ed.), The Cambridge handbook of multimedia learning (pp.
287–296). New York, NY: Cambridge University Press.
Betrancourt, M., & Tversky, B. (2000). Effects of computer animation on users’
performance: A review. Le travail humain, 63, 311–329.
Borg, S. (1999). Teachers’ theories in grammar teaching. ELT Journal, 53(3), 157–167.
Borg, S. (2006). Teacher cognition and language education: Research and practice.
London, UK: Continuum.
Brown, A. V. (2009). Students’ and teachers’ perceptions of effective foreign language
teaching: A comparison of ideals. The Modern Language Journal, 93, 46–60.
Bybee, J. (2008). Usage-based grammar and second language acquisition. In P. Robinson
& N. C. Ellis (Eds.), Handbook of cognitive linguistics and second language
acquisition (pp. 216–236). New York, NY: Routledge.
Cajkler, W., & Addelman, R. (2000). The practice of foreign language teaching. London,
UK: David Fulton Publishers.
Caplan, E. A. (2002). The effects of animated textual instruction on learners’ written
production of German modal verb sentences (Doctoral dissertation). Retrieved from
http://etd.fcla.edu/SF/SFE0000042/Caplan2002c.pdf
Carroll, S. (2001). Input and evidence: The raw material of second language acquisition.
Philadelphia, PA: John Benjamins.
Casado, M. A., & Dereshiwski, M. I. (2001). Foreign language anxiety of university
students. College Student Journal, 35(4), 539–549.
Celce-Murcia, M. (2002). Why it makes sense to teach grammar in context and through
discourse. In E. Hinkel & S. Fotos (Eds.), New perspectives on grammar teaching in
second language classrooms (pp. 119–135). Mahwah, NJ: Lawrence Erlbaum
Associates.
Chandler, P., & Sweller, J. (1991). Cognitive load theory and the format of instruction.
Cognition and Instruction, 8, 293–332.
Chapelle, C. (2001). Computer applications in second language acquisition: Foundations
for teaching, testing and research. Cambridge, UK: Cambridge University Press.
Chaudron, C. (1988). Second language classrooms: Research on teaching and learning.
Cambridge, UK: Cambridge University Press.
Chaudron, C. (1991). What counts as formal language instruction? Problems in
observation and analysis of classroom teaching. Georgetown University Round Table
on Languages and Linguistics 1991 (pp. 56–64). Washington, D.C.: Georgetown
University Press.
Clark, J. M., & Paivio, A. (1991). Dual coding theory and education. Educational
Psychology Review, 3, 149–210.
Cook, V. J., & Fass, D. (1986). Natural language processing by computer and language
teaching. System, 14(2), 163–170.
Corbeil, G. (2007). Can PowerPoint presentations effectively replace textbooks and
blackboards for teaching grammar? Do students find them an effective learning tool?
CALICO Journal, 24(3), 631–656.
Cotterall, S. (1999). Key variables in language learning: what the learners believe about
them. System, 27(4), 493–513.
Creswell, J. (2009). Research design: Qualitative, quantitative and mixed methods
approaches. Thousand Oaks, CA: Sage Publications Inc.
DeKeyser, R. M. (Ed.). (2007). Practice in a second language: Perspectives from applied
linguistics and cognitive psychology. New York, NY: Cambridge University Press.
DeKeyser, R. M. (1993). The effect of error correction on L2 grammar knowledge and
oral proficiency. Modern Language Journal, 77, 501–514.
DeKeyser, R. M. (1997). Beyond explicit rule learning. Studies in Second Language
Acquisition, 19, 195–221.
DeKeyser, R. M. (1998). Beyond focus on form: Cognitive perspectives on learning and
practicing second language grammar. In C. Doughty & J. Williams (Eds.), Focus on
form in classroom second language acquisition (pp. 42–63). New York, NY:
Cambridge University Press.
DeKeyser, R. M. (2005). What makes learning second–language grammar difficult? A
review of issues. Language Learning, 55(1), 1–25.
DeKeyser, R. M. (2010). Monitoring processes in Spanish as a second language during a
study abroad program. Foreign Language Annals, 43(1), 80–92.
DeKeyser, R., & Juffs, A. (2005). Cognitive considerations in L2 learning. In E. Hinkel
(Ed.), Handbook of research in second language teaching and learning (pp. 437–
454). Mahwah, NJ: Lawrence Erlbaum.
Dini, L., & Malnati, G. (1993). Weak constraints and preference rules. In P. Bennet & P.
Paggio (Eds.), Preference in Eurotra (pp. 75–90). Luxembourg: Commission of the
European Communities.
Dodigovic, M. (2005). Artificial intelligence in second language learning: Raising error
awareness. Clevedon, UK: Multilingual Matters.
Dörnyei, Z. (2009). The psychology of second language acquisition. Oxford, UK: Oxford
University Press.
Doughty, C. (2003). Instructed SLA: Constraints, compensation, and enhancement. In C.
Doughty & M. H. Long (Eds.), Handbook of second language acquisition (pp. 256–
310). Oxford, UK: Blackwell.
Ellis, N. C. (2003). Rules and instances in foreign language learning: Interactions of
implicit and explicit knowledge. European Journal of Cognitive Psychology, 5(3),
289–319.
Ellis, N. C. (2005). At the interface: Dynamic interactions of explicit and implicit
language knowledge. Studies in Second Language Acquisition, 27, 305–352.
Ellis, N. C. (2008). Usage-based and form-focused language acquisition: The associative
learning of constructions, learned attention, and the limited L2 end state. In P.
Robinson & N. C. Ellis (Eds.), Handbook of cognitive linguistics and second
language acquisition (pp. 372–236). New York, NY: Routledge.
Ellis, R. (1998). Teaching and research: Options in grammar teaching. TESOL Quarterly,
32(1), 39–60.
Ellis, R. (2001). Investigating form-focused instruction. In R. Ellis (Ed.), Form-focused
instruction and second language learning (pp. 1–46). Malden, MA: Blackwell
Publishers.
Ellis, R. (2002a). The place of grammar instruction in the second/foreign language
curriculum. In E. Hinkel & S. Fotos (Eds.), New perspectives on grammar teaching in
second language classrooms (pp. 17–35). Mahwah, NJ: Lawrence Erlbaum
Associates.
Ellis, R. (2002b). Methodological options in grammar teaching materials. In E. Hinkel &
S. Fotos (Eds.), New perspectives on grammar teaching in second language
classrooms (pp. 155–181). Mahwah, NJ: Lawrence Erlbaum Associates.
Ellis, R., Basturkmen, H., & Loewen, S. (2001). Learner uptake in communicative ESL
lessons. Language Learning, 51, 281–318.
Ellsworth, P. C. (1975). Direct gaze as a social stimulus: The example of aggression. In
P. Pilner, L. Krames, & T. Alloway (Eds.), Nonverbal communication of aggression
(pp. 53–75). New York, NY: Plenum.
Erlam, R. (2003). The effects of deductive and inductive instruction on the acquisition of
direct object pronouns in French as a second language. The Modern Language
Journal, 87(2), 242–260.
Ewald, J. (2007). Foreign language anxiety in upper–level classes: Involving students as
researchers. Foreign Language Annals, 40(1), 122–142.
Felix, U. (1997). Integrating multimedia into the curriculum: A case study evaluation.
OnCALL, 11(1), 2–11.
Felix, U. (2005). Analysing recent CALL effectiveness—toward a common agenda.
Computer Assisted Language Learning, 18(1–2), 1–32.
Field, A. (2009). Discovering statistics using SPSS. London, UK: Sage Publications.
Fletcher, J. D., & Tobias, S. (2005). The multimedia principle. In R. E. Mayer (Ed.), The
Cambridge handbook of multimedia learning (pp. 117–134). New York, NY:
Cambridge University Press.
Frederiksen, C. H., Donin, J., & Decary, M. (1995). A discourse processing approach to
computer–assisted language learning. In V. M. Holland, J. D. Kaplan, & M. R. Sams
(Eds.), Intelligent language tutors: Theory shaping technology (pp. 99–121).
Mahwah, NJ: Lawrence Erlbaum Associates.
Fotos, S. (2002). Structure-based interactive tasks for the EFL grammar learners. In E.
Hinkel & S. Fotos (Eds.), New perspectives on grammar teaching in second language
classrooms (pp. 135–154). Mahwah, NJ: Lawrence Erlbaum Associates.
Garrett, N. (1995). ICALL and second language acquisition. In V. M. Holland, J. D.
Kaplan, & M. R. Sams (Eds.), Intelligent language tutors: Theory shaping technology
(pp. 345–359). Mahwah, NJ: Lawrence Erlbaum Associates.
Garrett, N. (2009). Computer–assisted language learning trends and issues revisited:
Integrating innovation. The Modern Language Journal, 93, 719–740.
Gilby, W. (1996). Irrwege des Zweitsprachenerwerbs: Gehört auch das Computerlabor
dazu? [Wrong turns in second language acquisition: Is the computer lab one of
them?]. Die Unterrichtspraxis / Teaching German, 29(1), 87–91.
Ginns, P. (2005). Meta–analysis of the modality effect. Learning and Instruction, 15,
313–331.
Gollin, J. (1998). Key concepts in ELT: Deductive vs. inductive language learning. ELT
Journal, 52(1), 88.
Haight, C. E., Herron, C., & Cole, S. P. (2007). The effects of deductive and guided
inductive instructional approaches on the learning of grammar in the elementary
foreign language college classroom. Foreign Language Annals, 40(2), 288–310.
Hall, J. K. (1995). Aw man, where you goin’?: Interaction and the development of L2
interactional competence. Issues in Applied Linguistics, 6, 37–62.
Harp, S. F., & Mayer, R. E. (1998). How seductive details do their damage: A theory of
cognitive interest in science learning. Journal of Educational Psychology, 90, 414–
434.
Healey, D. (1999). Classroom practice: Communicative skill–building tasks in CALL
environments. In J. Egbert & E. Hanson–Smith (Eds.), CALL environments:
Research, practice, and critical issues (pp. 116–136). Alexandria, VA: TESOL.
Hegarty, M. (2004). Diagrams in the mind and in the world: Relations between internal and
external visualizations. In A. Blackwell, K. Marriot, & A. Shimojima (Eds.),
Diagrams (pp. 1–13). Berlin, Germany: Springer–Verlag.
Heift, T. (2001). Error–specific and individualized feedback in a web–based language
tutoring system: Do they read it? ReCALL, 13(1), 99–109.
Heift, T. (2004). Corrective feedback and learner uptake in CALL. ReCALL, 16(2), 416–
431.
Heift, T. (2006). Context–sensitive help in CALL. Computer Assisted Language
Learning, 19(2–3), 243–259.
Heift, T., & Rimrott, A. (2008). Learner responses to corrective feedback for spelling
errors in CALL. System, 36, 196–231.
Heift, T., & Schulze, M. (2007). Errors and intelligence in computer–assisted language
learning: Parsers and pedagogues. New York, NY: Routledge.
Heift, T., & Toole, J. (2002). The Tutor Assistant: An authoring tool for an intelligent
language tutoring system. Computer Assisted Language Learning, 15(4), 373–386.
Herron, C., & Tomasello, M. (1992). Acquiring grammatical structures by guided
instruction. The French Review, 65, 705–718.
Higgs, T., & Clifford, R. (1982). The push toward communication. In T. Higgs (Ed.),
Curriculum, competence, and the foreign language teacher (pp. 57–59). Skokie, IL:
National Textbook.
Hinkel, E., & Fotos, S. (Eds.). (2002). New perspectives on grammar teaching in second
language classrooms. Mahwah, NJ: Lawrence Erlbaum Associates.
Holland, V. M., & Kaplan, J. D. (1995). Natural language processing techniques in
computer–assisted language learning: Status and instructional issues. Instructional
Science, 23, 351–380.
Holland, V. M., Maisano, R., Alderks, C., & Martin, J. (1993). Parsers in tutors: What
are they good for? CALICO Journal, 11(1), 28–46.
Horwitz, E. K. (2001). Language anxiety and achievement. Annual Review of Applied
Linguistics: Language and Psychology, 21, 112–126.
Hubbard, P., & Siskin, C. B. (2004). Another look at tutorial CALL. ReCALL, 16(2),
448–461.
Hudson, R., & Walmsley, J. (2005). The English patient: English grammar and teaching
in the twentieth century. Journal of Linguistics, 41, 593–622.
Hulstijn, J. H. (2002). Towards a unified account of the representation, processing, and
acquisition of second–language knowledge. Second Language Research, 18(3), 193–
223.
Jarvis, H., & Szymczyk, M. (2010). Student views on learning grammar with web- and
book-based materials. ELT Journal, 64(1), 32–44.
Jones, J. F. (2001). CALL and the responsibilities of teachers and administrators. ELT
Journal, 55(4), 360–367.
Kalyuga, S., Chandler, P., & Sweller, J. (1999). Managing split–attention and redundancy
in multimedia instruction. Applied Cognitive Psychology, 13, 351–371.
Katz, S. L., & Blyth, C. S. (2009). What is grammar? In S. Katz & J. Watzinger–Tharp
(Eds.), Conceptions of L2 grammar: Theoretical approaches and their application in
the L2 classroom (pp. 2–15). Boston, MA: Heinle Cengage Learning.
Keller, J., & Burkman, E. (1992). Motivation principles. In M. Fleming & W. H. Levie
(Eds.), Instructional message design: Principles from the behavioral and cognitive
sciences (pp. 3–54). Englewood Cliffs, NJ: Educational Technology Publications.
Kelly, L. G. (1969). 25 centuries of language teaching. Rowley, MA: Newbury House
Publishers.
Kleinke, C. L., & Pohlen, P. D. (1971). Affective and emotional responses as a function
of other person's gaze and cooperativeness in a two–person game. Journal of
Personality and Social Psychology, 17, 308–313.
Koch, A., & Terrell, T.D. (1991). Affective reactions of foreign language students to
Natural Approach activities and teaching techniques. In E. K. Horwitz & D. J. Young
(Eds.), Language anxiety: From theory and research to classroom implications (pp.
109–126). Englewood Cliffs, NJ: Prentice Hall.
Kolesnikova, A. (2011, April). Teaching grammar on computers: An alternative to
in-class grammar instruction? Paper presented at The 2011 SLA Graduate Student
Symposium, Iowa City, IA.
Krashen, S. D. (1981). Second language acquisition and second language learning.
Oxford, UK: Pergamon Press.
Krashen, S. D. (1985). The input hypothesis: Issues and implications. London, UK:
Longman.
Krashen, S. D. (1992). Formal grammar instruction. Another educator comments. TESOL
Quarterly, 26(2), 409–411.
Krashen, S. D. (2002). Explorations in language acquisition and use: The Taipei
lectures. Taipei, Taiwan: Crane.
Krashen, S. D., & Seliger, H. (1975). The essential characteristics of formal instruction.
TESOL Quarterly, 9, 173–183.
Krashen, S. D., & Terrell, T. D. (1983). The Natural Approach: Language acquisition in
the classroom. Trowbridge, Wiltshire: Redwood Burn Limited.
Larsen-Freeman, D. (2002). The grammar of choice. In E. Hinkel & S. Fotos (Eds.), New
perspectives on grammar teaching in second language classrooms (pp. 99–103).
Mahwah, NJ: Lawrence Erlbaum Associates.
Larsen-Freeman, D. (2003). Teaching language: From grammar to grammaring. Boston,
MA: Heinle.
Lee, W. (1996). The role of materials in the development of autonomous learning. In R.
Pemberton, E. S. L. Li, W. W. F. Or, & H. D. Pierson (Eds.), Taking control:
Autonomy in language learning (pp. 167–184). Hong Kong: Hong Kong University
Press.
Leech, G. (1994). Students’ grammar—teachers’ grammar—learners’ grammar. In M.
Bygate, A. Tonkyn, & E. Williams (Eds.), Grammar and the language teacher (pp.
17–31). London, UK: Prentice Hall.
Leow, R. P. (2007). Input in the L2 classroom: An attentional perspective on receptive
practice. In R. M. DeKeyser (Ed.), Practice in a second language: Perspectives from
applied linguistics and cognitive psychology (pp. 21–50). New York, NY: Cambridge
University Press.
Lester, J. C., Converse, S. A., Kahler, S. E., Barlow, S. T., Stone, B. A., & Bhogal, R. S.
(1997). The persona effect: Affective impact of animated pedagogical agents. In
Proceedings of CHI (pp. 359–366). Retrieved from
http://citeseer.ist.psu.edu/viewdoc/summary?doi=10.1.1.91.8779
Levin, L. (1972). Comparative studies in foreign–language teaching. Stockholm,
Sweden: Almqvist & Wiksell.
Levine, G. S. (2003). Student and instructor beliefs and attitudes about target language
use, first language use, and anxiety: Report of a questionnaire study. The Modern
Language Journal, 87(3), 343–364.
Levy, M. (1997). Computer–assisted language learning: Context and conceptualization.
Oxford, UK: Clarendon.
Levy, M., & Stockwell, G. (2006). CALL dimensions: Options and issues in computer–
assisted language learning. Mahwah, NJ: Lawrence Erlbaum.
Lewalter, D. (2003). Cognitive strategies for learning from static and dynamic visuals.
Learning and Instruction, 13, 177–189.
Lightbown, P. M., & Spada, N. (2006). How languages are learned. Oxford, UK: Oxford
University Press.
Lightbown, P., & Spada, N. (1990). Focus on form and corrective feedback in
communicative language teaching: Effects on second language learning. Studies in
Second Language Acquisition, 12(4), 429–448.
Loewen, S., Li, S., Fei, F., Thompson, A., Nakatsukasa, K., Ahn, S., & Chen, X. (2009).
Second language learner’s beliefs about grammar instruction and error correction.
The Modern Language Journal, 93(1), 91–104.
Long, M. H. (1988). Instructed interlanguage development. In L. Beeby (Ed.), Issues in
second language acquisition: Multiple perspectives (pp. 115–141). Rowley, MA:
Newbury House.
Long, M. H. (1990). Maturational constraints on language development. Studies in
Second Language Acquisition, 12, 251–285.
Long, M. H. (1991). Focus on form. A design feature in language teaching methodology.
In K. de Bot, R. Ginsberg, & C. Kramsch (Eds.), Foreign language research in
cross–cultural perspective (pp. 39–52). Amsterdam: John Benjamins.
Long, M. H. (1996). The role of the linguistic environment in second language
acquisition. In W. Ritchie & T. Bhatia (Eds.), Handbook of second language
acquisition (pp. 413–468). San Diego, CA: Academic Press.
Lovik, T. A., Guy, J. D., & Chavez, M. (2006). Vorsprung: A communicative
introduction to German language and culture. Boston, MA: Houghton Mifflin.
Low, R., & Sweller, J. (2005). The modality principle in multimedia learning. In R. E.
Mayer (Ed.), The Cambridge handbook of multimedia learning (pp. 147–158). New
York, NY: Cambridge University Press.
Lowe, R. K. (1999). Extracting information from an animation during complex visual
learning. European Journal of Psychology of Education, 14(2), 225–244.
Lowe, R. K. (2003). Animation and learning: Selective processing of information in
dynamic graphics. Learning and Instruction, 13, 157–176.
Lowe, R. K. (2004). Interrogation of a dynamic visualization during learning. Learning
and Instruction, 14, 257–274.
Luke, C. (2006). Fostering learner autonomy in a technology–enhanced, inquiry–based
foreign language classroom. Foreign Language Annals, 39(1), 71–86.
Lyster, R. (2007). Learning and teaching languages through content: A counterbalanced
approach. Philadelphia, PA: John Benjamins Publishing Company.
Mackey, A. (1999). Input, interaction, and second language development. Studies in
Second Language Acquisition, 21, 557–587.
Mackey, A., Gass, S. M., & McDonough, K. (2000). How do learners perceive
interactional feedback? Studies in Second Language Acquisition, 22, 471–497.
Mautone, P. D., & Mayer, R. E. (2001). Signaling as a cognitive guide in multimedia
learning. Journal of Educational Psychology, 93, 377–389.
Mayer, R. E. (1996). Learning strategies for making sense out of expository text: The SOI
model for guiding three cognitive processes in knowledge construction. Educational
Psychology Review, 8, 357–371.
Mayer, R. E. (1999). Multimedia aids to problem solving transfer. International Journal
of Educational Research, 31, 611–623.
Mayer, R. E. (2003). The promise of multimedia learning: Using the same instructional
design methods across different media. Learning and Instruction, 13, 125–139.
Mayer, R. E. (2005a). Cognitive theory of multimedia learning. In R. E. Mayer (Ed.), The
Cambridge handbook of multimedia learning (pp. 31–48). New York, NY:
Cambridge University Press.
Mayer, R. E. (2005b). Principles for reducing extraneous processing in multimedia
learning: Coherence, signaling, redundancy, spatial contiguity, and temporal
contiguity principles. In R. E. Mayer (Ed.), The Cambridge handbook of multimedia
learning (pp. 183–200). New York, NY: Cambridge University Press.
Mayer, R. E. (2005c). Principles for managing essential processing in multimedia
learning: Segmenting, pretraining, and modality principles. In R. E. Mayer (Ed.), The
Cambridge handbook of multimedia learning (pp. 169–182). New York, NY:
Cambridge University Press.
Mayer, R. E. (2005d). Principles of multimedia learning based on social cues:
Personalization, voice , and image principles. In R. E. Mayer (Ed.), The Cambridge
handbook of multimedia learning (pp. 201–214). New York, NY: Cambridge
University Press.
Mayer, R. E. (Ed.). (2005). The Cambridge handbook of multimedia learning. New York,
NY: Cambridge University Press.
Mayer, R. E., & Anderson, R. B. (1992). The instructive animation: Helping students
build connections between words and pictures in multimedia learning. Journal of
Educational Psychology, 84, 444–452.
Mayer, R. E., & Moreno, R. (1998). A cognitive theory of multimedia learning:
Implications for design principles. Retrieved from
http://www.unm.edu/~moreno/PDFS/chi.pdf
Mayer, R. E., & Moreno, R. (2002a). Aids to computer-based multimedia learning.
Learning and Instruction, 12, 107–119.
Mayer, R. E., & Moreno, R. (2002b). Animation as an aid to multimedia learning.
Educational Psychology Review, 14(1), 87–99.
Mayer, R. E., & Moreno, R. (2003). Nine ways to reduce cognitive load in multimedia
learning. Educational Psychologist, 38, 43–53.
Mayer, R. E., & Sims, V. K. (1994). For whom is a picture worth a thousand words?
Extensions of a dual–coding theory of multimedia learning. Journal of Educational
Psychology, 86, 389–401.
Mayer, R. E., Fennell, S., Farmer L., & Campbell, J. (2004). A personalization effect in
multimedia learning: Students learn better when words are in conversational style
rather than formal style. Journal of Educational Psychology, 96, 389–395.
McCarthy, M. J., & Carter, R. A. (2002). Ten criteria for a spoken grammar. In E. Hinkel
& S. Fotos (Eds.), New perspectives on grammar teaching in second language
classrooms (pp. 51–76). Mahwah, NJ: Lawrence Erlbaum Associates.
Moreno, R. (2005). Multimedia learning with animated pedagogical agents. In R. E.
Mayer (Ed.), The Cambridge handbook of multimedia learning (pp. 507–524). New
York, NY: Cambridge University Press.
Moreno, R., & Mayer, R. E. (2000). Engaging students in active learning: The case for
personalized multimedia messages. Journal of Educational Psychology, 92, 724–733.
Moreno, R., Mayer, R. E., Spires, H. A., & Lester, J. C. (2001). The case for social
agency in computer-based teaching: Do students learn more deeply when they
interact with animated pedagogical agents? Cognition and Instruction, 19, 177–213.
Morrison, G. R., Ross, S. M., & Kemp, J. E. (2001). Designing effective instruction. New
York, NY: John Wiley.
Nagata, N. (1993). Intelligent computer feedback for second language instruction.
Modern Language Journal, 77(3), 330–338.
Nagata, N. (1995). An effective application of natural language processing in second
language instruction. CALICO Journal, 13(1), 47–67.
Nagata, N. (1996). Computer vs. workbook instruction in second language acquisition.
CALICO Journal, 14, 53–75.
Nagata, N. (1997a). The effectiveness of computer–assisted metalinguistic instruction: A
case study in Japanese. Foreign Language Annals, 30, 187–200.
Nagata, N. (1997b). An experimental comparison of deductive and inductive feedback
generated by a simple parser. System, 25, 515–534.
Nassaji, H., & Fotos, S. (2004). Current developments in research on the teaching of
grammar. Annual Review of Applied Linguistics, 24, 126–145.
Negueruela, E. (2009). A conceptual approach to promoting L2 grammatical
development: Implications for language program directors. In S. Katz & J.
Watzinger-Tharp (Eds.), Conceptions of L2 grammar: Theoretical approaches and
their application in the L2 classroom (pp. 151–172). Boston, MA: Heinle Cengage
Learning.
Nerbonne, J. A. (2003). Computer–assisted language learning and natural language
processing. In R. Mitkov (Ed.), The Oxford handbook of computational linguistics
(pp. 670–698). Oxford, UK: Oxford University Press.
Norris, J. M., & Ortega, L. (2000). Effectiveness of L2 instruction: A research synthesis
and quantitative meta–analysis. Language Learning, 50, 417–528.
Norris, J. M., & Ortega, L. (2001). Does type of instruction make a difference?
Substantive findings from a meta–analytic review. In R. Ellis (Ed.), Form-focused
instruction and second language learning (pp. 157–213). Malden, MA: Blackwell
Publishers.
Nutta, J. (2004). Is computer-based grammar instruction as effective as teacher–directed
grammar instruction for teaching L2 structures? CALICO Journal, 16(1), 49–62.
Oxford, R. L. (1995). Linking theories of learning with intelligent computer–assisted
language learning (ICALL). In V. M. Holland, J. D. Kaplan, & M. R. Sams (Eds.),
Intelligent language tutors: Theory shaping technology (pp. 359–370). Mahwah, NJ:
Lawrence Erlbaum Associates.
Paivio, A. (1986). Mental representations: A dual coding approach. Oxford, UK: Oxford
University Press.
Park, O., & Hopkins, R. (1993). Instructional conditions for using dynamic visual
displays: A review. Instructional Science, 21, 427–449.
Park, S., & Catrambone, R. (2007). Social facilitation effects of virtual humans. Human
Factors, 49(6), 1054–1060.
Pederson, K. M. (1987). Research on CALL. In W. F. Smith (Ed.), Modern media in
foreign language education: A synopsis (pp. 99–133). Lincolnwood, IL: National
Textbook Company.
Pennington, M. C. (2002). Grammar and communication: New directions in theory and
practice. In E. Hinkel & S. Fotos (Eds.), New perspectives on grammar teaching in
second language classrooms (pp. 77–98). Mahwah, NJ: Lawrence Erlbaum
Associates.
Pusack, J. P., & Otto, S. E. K. (1984). Blueprint for a comprehensive foreign language
CAI curriculum. Computers and Humanities, 18, 195–204.
Ranta, L., & Lyster, R. (2007). A cognitive approach to improving immersion students’
oral language abilities: The awareness–practice–feedback sequence. In R. DeKeyser
(Ed.), Practicing for second language use: Perspectives from applied linguistics and
cognitive psychology (pp. 141–160). Cambridge, UK: Cambridge University Press.
Reeves, B., & Nass, C. (1998). The media equation: How people treat computers,
television and new media like real people and places. New York, NY: CSLI.
Richards, J. C. (2002). Accuracy and fluency revisited. In E. Hinkel & S. Fotos (Eds.),
New Perspectives on grammar teaching in second language classrooms (pp. 35–51).
Mahwah, NJ: Lawrence Erlbaum Associates.
Rieber, L. P. (1990). Animation in computer-based instruction. Educational Technology
Research and Development, 38(1), 77–86.
Rieber, L. P., Tzeng, S., Tribble, K., & Chu, G. (1996). Feedback and elaboration within
a computer-based simulation: A dual coding perspective. AERA Conference.
Retrieved from http://it.coe.uga.edu/~lrieber/Rieber-AERA-1996.pdf
Roberts, W. E. (2009). The use of cues in multimedia instruction in technology as a way
to reduce cognitive load (Doctoral dissertation).
Robinson, P. (1996). Learning simple and complex second language rules under implicit,
incidental, rule–search and instructed conditions. Studies in Second Language
Acquisition, 18(1), 27–68.
Roche, J., & Scheller, J. (2008). Grammar animations and cognition. In B. Barber & F.
Zhang (Eds.), Handbook of research on computer enhanced language acquisition and
learning (pp. 205–218). Hershey, PA: Information Science Reference.
Rosa, E., & Leow, R. P. (2004). Computerized task–based instruction in the L2
classroom: The effects of explicitness and type of feedback on L2 development.
Modern Language Journal, 19, 223–247.
Rosa, E., & O’Neill, D. (1999). Explicitness, intake, and the issue of awareness: Another
piece to the puzzle. Studies in Second Language Acquisition, 21, 511–556.
Sanders, R. (2005). Redesigning introductory Spanish: Increased enrollment, online
management, cost reduction, and effects on student learning. Foreign Language
Annals, 38(4), 523–532.
Schaffer, C. (1989). A comparison of inductive and deductive approaches to teaching
foreign languages. The Modern Language Journal, 73(4), 395–403.
Schirmer, A., Simpson, E., & Escoffier, N. (2007). Listen up! Processing of intensity
change differs for vocal and non–vocal sounds. Brain Research, 1176, 103–112.
Schmidt, R. (1990). The role of consciousness in second language learning. Applied
Linguistics, 11, 17–46.
Schnotz, W. (2002). Enabling, facilitating, and inhibiting effects in learning from
animated pictures. Retrieved from
http://www.iwmkmrc.de/workshops/visualization/schnotz.pdf
Schulz, R. A. (1996). Focus on form in the foreign language classroom: Students’ and
teachers’ views on error correction and the role of grammar. Foreign Language
Annals, 29, 343–364.
Schulz, R. A. (2001). Differences in student and teacher perceptions concerning the role
of grammar instruction and corrective feedback: USA – Colombia. The Modern
Language Journal, 85(2), 244–258.
Schwier, R. A., & Misanchuk, E. R. (1995, February). The art and science of color in
multimedia screen design, part I: Art, opinion, and tradition. Paper presented at the
annual conference of the Association for Educational Communications and
Technology, Anaheim, CA. Retrieved from
http://students.ced.appstate.edu/newmedia/06fallcohort/walters/ci5200/Module1/studyone.pdf
Seliger, H. (1975). Inductive method and deductive method in language teaching: A
reexamination. International Review of Applied Linguistics, 13, 1–18.
Sheen, R. (1992). Problem solving brought to task. RELC Journal, 23, 44–59.
Skehan, P. (1996). A framework for the implementation of task–based instruction.
Applied Linguistics, 17(1), 38–62.
Skinner, B. F. (1954). The science of learning and the art of teaching. Harvard
Educational Review, 24, 86–97.
Sokhi, D. S., Hunter, M. D., Wilkinson, I. D., & Woodruff, P. W. R. (2005). Male and
female voices activate distinct regions in the male brain. NeuroImage, 27, 572–578.
Spada, N., & Lightbown, P. M. (2008). Form-focused instruction: Isolated or integrated?
TESOL Quarterly, 42(2), 181–207.
Swain, M., & Lapkin, S. (1989). Canadian immersion and adult second language
teaching: What’s the connection? Modern Language Journal, 73, 150–159.
Sweller, J. (1999). Instructional design in technical areas. Camberwell, Australia: ACER
Press.
Sweller, J. (2005). Implications of cognitive load theory for multimedia learning. In R. E.
Mayer (Ed.), The Cambridge handbook of multimedia learning (pp. 19–30). New
York, NY: Cambridge University Press.
Sykes, J., Oskoz, A., & Thorne, S. (2008). Web 2.0, synthetic immersive environments,
and mobile resources for language education. CALICO Journal, 25, 528–546.
Tabbers, H. K., Martens, R. L., & van Merriënboer, J. J. G. (2001). The modality effect
in multimedia instructions. In J. D. Moore & K. Stennings (Eds.), Proceedings of the
twenty–third annual conference on the cognitive science society (pp. 1024–1029).
Mahwah, NJ: Lawrence Erlbaum.
Taylor, H. F. (1979). DECO/TUCO: A tutorial approach to elementary German
instructions (Students’ reactions to computer assisted instruction in German). Foreign
Language Annals, 12(4), 289–291.
Terrell, T., Tschirner, E., & Nikolai, B. (2004). Kontakte: A communicative approach.
New York, NY: McGraw–Hill.
Toth, P. (2004). When grammar instruction undermines cohesion in L2 Spanish
classroom discourse. Modern Language Journal, 88, 14–30.
Tse, L. (2000). Student perceptions of foreign language study: A qualitative analysis of
foreign language autobiographies. The Modern Language Journal, 84(1), 69–84.
Tsui, A. B. M. (2003). Understanding expertise in teaching: Case studies of second
language teachers. New York, NY: Cambridge University Press.
Tversky, B., Morrison, J. B., & Betrancourt, M. (2002). Animation: Can it facilitate?
International Journal of Human–Computer Studies, 57, 247–262.
Ullman, M. (2005). A cognitive neuroscience perspective on second language
acquisition: The declarative/procedural model. In C. Sanz (Ed.), Mind and context in
adult second language acquisition (pp. 141–177). Washington, D.C.: Georgetown
University Press.
VanPatten, B. (1989). Can learners attend to form and content while processing input?
Hispania, 72, 409–417.
VanPatten, B. (1996). Input processing and grammar instruction: Theory and research.
Norwood, NJ: Ablex Publishing Corporation.
VanPatten, B. (2002). Processing instruction: An update. Language Learning, 52(4),
755–803.
Velthuijsen, A., Hooijkaas, C., & Koomen, W. (1987). Image size in interactive
television and evaluation of the interaction. Social Behavior, 2, 113–118.
Vygotsky, L. S. (2001). Myshlenie i rech [Thinking and speech]. Moscow, Russia: Labyrint.
Warschauer, M. (1996). Computer–assisted language learning: an introduction. In S.
Fotos (Ed.), Multimedia language teaching (pp. 3–20). Tokyo, Japan: Logos.
Warschauer, M. (2005). Sociocultural perspectives on CALL. In J. L. Egbert & G. M.
Petrie (Eds.), CALL research perspectives (pp. 41–52). Mahwah, NJ: Lawrence
Erlbaum Associates.
Williams, J. (1995). Focus on form in the communicative classroom: Research findings
and the classroom teacher. TESOL Journal, 4, 12–16.
Winn, W. (1992). Perception principles. In M. Fleming & W. H. Levie (Eds.),
Instructional message design: Principles from the behavioral and cognitive sciences
(pp. 55–126). Englewood Cliffs, NJ: Educational Technology Publications.
Wittrock, M.C. (1990). Generative processes in reading comprehension. Educational
Psychologist, 24, 354–376.
Wolff, R. S. (1993). Multimedia in the classroom and the laboratory. Computers in
Physics, 7(4), 426–442.
Wouters, P., Paas, F., & van Merriënboer, J. J. G. (2008). How to optimize learning from
animated models: A review of guidelines based on cognitive load. Review of
Educational Research, 78(3), 645–675.
Zajonc, R. B. (1965). Social facilitation. Science, 149, 269–274.
Zhang, D., Zhao, J. L., Zhou, L., & Nunamaker Jr., J. F. (2004). Can e–learning replace
classroom learning? Communications of the ACM, 47(5), 75–79.
APPENDIX A
STUDY INFORMATION SHEET
APPENDIX B
SCRIPT FOR THE VC TUTORIALS
Regular verb conjugation
Welcome to our tutorial on regular verb conjugation in German.
Next slide
There are different types of verbs in German, including: weak, strong, and
modal.
Next slide
Today we will focus on the weak verbs, or – as they are also called –
regular verbs.
Next slide
They are called regular because they follow a regular pattern in their
conjugations. Once you learn this pattern for one regular verb, you can
apply it for all regular verbs.
Next slide
First, let’s review the personal pronouns in German.
Ich means I
Du means you
Er means he
Sie means she
Es means it
wir means we
ihr means you all
sie means they
And capitalized Sie means you in formal situations
Next slide
Please note that er, sie, and es are usually grouped together in the
conjugation tables because they have the same conjugation ending.
Sie forms are also grouped together because they too have the same
ending.
Next slide
Now let’s see how to conjugate a regular verb step by step. Let’s take the
verb lernen as an example.
Next slide
First, you need to remove the infinitive ending –en from the verb. What is
left is called a verb stem.
20 seconds no text
Next slide
You should keep the verb stem for all conjugated forms.
Next slide
Now, you need to add the personal endings to the stem.
E for ich
St for du
T for er/sie/es
En for wir
T for ihr
And en for sie.
Next slide
Now we have ich lerne, du lernst, er/sie/es lernt, wir lernen, ihr lernt, sie
lernen.
Next slide
You can remember the endings as EST-TEN-TEN combination.
Next slide
Now you try to conjugate a regular verb. Take 20 seconds and conjugate
the verb spielen.
20 seconds no text
Next slide
Now, let’s check together: ich spiele, du spielst, er/sie/es spielt, wir
spielen, ihr spielt, sie spielen.
Next slide
But this is not all you need to know about regular verb conjugation in
German. Certain letters at the end of the verb stem can change some of the
personal endings.
Next slide
When a verb stem ends in –s, –ss, –ß, or –z, there is a slight change in the
personal ending for the du form.
You remember that the personal ending for du is –st. However, the letters –s,
–ss, –ß, and –z already give us the s sound. You want it to sound nice and
easy, so you should add only the letter –t as a personal ending for du here.
Next slide
Let’s take a look at the verb ‘heißen’ as an example. First, you drop the
ending –en. Then you add regular endings, but remember that the ending
for du is –t, not –st.
Ich heiße, du heißt, er\sie\es heißt, wir heißen, ihr heißt, sie heißen.
Next slide
Now it’s your turn – please conjugate the verb reisen.
20 seconds no text
Next slide
Let’s check together: ich reise, du reist, er/sie/es reist, wir reisen, ihr reist,
sie reisen.
Next slide
There is one more rule you need to remember for the German verb
conjugation. You have just seen how certain letters can change the ending
by removing a letter. Now you will learn about the letters in the verb stem
that change the ending by adding a letter.
Next slide
When a verb stem ends in –t, or –d, we need to add an additional letter –e
in front of the regular endings for the du, er, sie, es and ihr pronouns.
Next slide
Let’s take a look at the verb arbeiten.
As usual, first you drop the –en ending and then add the regular endings,
remembering to add an additional –e in front of the -st ending for du and -t
endings for er/sie/es and ihr.
Ich arbeite, du arbeitEst, er/sie/es arbeitEt, wir arbeiten, ihr arbeitEt,
sie/sie arbeiten.
Next slide
Now your turn. Please conjugate the verb finden:
20 seconds no text
Next slide
Let’s check. Ich finde, du findEst, er/sie/es findEt, wir finden, ihr findEt,
sie finden.
Next slide
The same rule about an additional –e in front of the –st and –t endings applies
to another group of regular verbs, whose stems end in the following letter
combinations: –ffn, –gn, –chn, and –dm.
These verbs also get an additional –e in front of the –st ending for du and the
–t endings for er/sie/es and ihr.
Next slide
Let’s take a look at the verb öffnen as an example. To conjugate it, drop
the ending –en and add regular endings with an additional –e in front of
the –st and –t endings.
Ich öffne, du öffnest, er/sie/es öffnet, wir öffnen, ihr öffnet, sie öffnen.
Next slide
Now your turn: conjugate the verb rechnen
20 seconds no text
Next slide
Let’s check together: ich rechne, du rechnest, er/sie/es rechnet, wir
rechnen, ihr rechnet, sie/Sie rechnen.
Next slide
Let’s recap. To conjugate a regular verb, you should first drop the ending
–en and then add the personal endings: E for ich, ST for du, T for er/sie/es,
EN for wir, T for ihr, and EN for sie.
If – after you drop the –en ending – you see that the verb stem ends in –s,
–ss, –ß, or –z, remember that the ending for du is T, not ST.
If you see that the verb stem ends in –t or –d, or in the consonant clusters
–ffn, –gn, –chn, or –dm, add an additional E in front of the ST ending for du
and the T endings for er/sie/es and ihr.
Next slide
This concludes our tutorial. We hope that it helped you to remember the
rules for regular verb conjugation in German. Tschüss!
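The conjugation procedure recapped in this script is mechanical enough to sketch in a few lines of code. The following Python function is an illustrative sketch only (it is not part of the study's tutorial materials, and the function name is invented for this example); it encodes the main rule and both sub-rules described above:

```python
def conjugate(infinitive: str, person: str) -> str:
    """Conjugate a regular (weak) German verb per the tutorial's rules."""
    stem = infinitive[:-2]  # drop the infinitive ending -en
    endings = {"ich": "e", "du": "st", "er/sie/es": "t",
               "wir": "en", "ihr": "t", "sie": "en"}
    ending = endings[person]
    # Sub-rule 1: stems in -s, -ss, -ß, -z take -t (not -st) for du.
    if person == "du" and stem.endswith(("s", "ss", "ß", "z")):
        ending = "t"
    # Sub-rule 2: stems in -t, -d or the clusters -ffn, -gn, -chn, -dm
    # insert an extra -e- before the -st and -t endings (du, er/sie/es, ihr).
    elif stem.endswith(("t", "d", "ffn", "gn", "chn", "dm")) and ending in ("st", "t"):
        ending = "e" + ending
    return stem + ending
```

For instance, the sketch produces "arbeitest" for conjugate("arbeiten", "du") and "heißt" for conjugate("heißen", "du"), matching the worked examples in the script.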
APPENDIX C
SCRIPT FOR THE SPV TUTORIALS
Separable-prefix verbs and two-verb constructions
Welcome to our tutorial on German separable-prefix verbs and two-verb constructions.
Next slide
German separable-prefix verbs are similar to some English verbs.
For example, to get up means aufstehen in German and ‘to go out’ means ausgehen.
Next slide
Various prefixes can be added to the same verb, and each will change its meaning.
For example, if you add the prefixes AN, AUS, and UM to the verb ziehen, you will get three
different verbs:
Anziehen means to put clothes on,
Ausziehen means to take clothes off
And
Umziehen means to change clothes.
Next slide
German two-verb constructions are similar to some English two verb constructions. For
example, to go shopping means einkaufen gehen in German.
Next slide
Many separable-prefix verbs and two-verb constructions are used to describe daily
activities.
For example:
Aufstehen = to get up, fernsehen = to watch TV, anziehen = to put on clothes
zurückkommen =to come back, einkaufen gehen= to go shopping, spazieren gehen =to go
for a walk.
Next slide
There are several important rules that you need to remember about this type of verb.
First of all, not all prefixes are separable.
Next slide
Most prefixes that are separable are the prefixes that resemble prepositions or adverbs.
For example,
AUF as a preposition means on in English. But it can also act as a separable prefix when
attached to a verb. For example, aufstehen means ‘to get up’ and aufmachen means ‘to
open’.
FERN is an adverb that translates into English as far. But it can also be a separable
prefix. For example, fernsehen means to watch TV.
Next slide
Other common separable prefixes that might be already familiar to you as prepositions or
adverbs are: Auf, an, aus, bei, mit, nach, vor, um, zu, zurück, zusammen
Explanations of the meanings of these and other separable prefixes are beyond the scope of
this presentation, but you can find such explanations in most grammar reference books.
Next slide
Inseparable prefixes are those prefixes that cannot be used independently as prepositions
or adverbs. For example, the prefixes be-, er-, ver-, and zer- are inseparable.
Besuchen = to visit
Erwarten = to expect
Verkaufen = to sell
Zerstören = to destroy
You should remember that inseparable prefixes never get disconnected from the verb.
Next slide
Again, there are more inseparable prefixes in German than be-, er-, ver-, and zer-. For
example, emp-, ent-, ge-, and miss- are also inseparable prefixes.
Next slide
Now let’s practice. Please take a look at these verbs now and decide – Which prefixes
can be separated?
20 seconds no text
Next slide
Let’s take a look. Verbs umdrehen, aufhören; anrufen; zurückfahren, and ausgehen are
separable-prefix verbs because um, auf, an, zurück, and aus can be used in other contexts
as prepositions or adverbs.
Next slide
Now you might wonder why it is so important to know which prefix is separable.
It is important because of what happens to the separable-prefix verbs and two-verb
constructions in a regular sentence.
Next slide
In a sentence, you need to separate the separable prefix from the verb and place it at the
very end of the sentence.
Next slide
Let’s take a look at the verb ankommen and how to use it in a sentence. First you should
separate the prefix an from the main verb. If you want to use this verb in a sentence, for
example to say in German “I arrive in August” you should first put the main verb
kommen in its usual second position in the sentence and don’t forget to conjugate it. And
finally, you move the separable-prefix an to the very end of the sentence. Now you have:
Ich komme im August an.
Next slide
The same happens with two-verb constructions. Let’s take the verb construction
spazieren gehen as an example. First, you separate the verbs in the two verb construction.
If you want to use it in a sentence, for example to say ‘I’m going for a walk with Maria,’
you put the second verb in the pair in the second position in the sentence. Don’t forget to
conjugate it! Then you place the first verb of the pair at the very end of the sentence.
Now you have:
Ich gehe mit Maria spazieren.
Next slide
Now please take a second and finish the sentences using these verbs.
Let’s check together!
Aufstehen
Ich stehe um 8 Uhr auf.
Fernsehen
Wir sehen abends fern.
Spazieren gehen
Ich gehe abends mit Maria spazieren.
Next slide
We have just learned that in sentences separable prefixes get disconnected from the verb
and are placed at the very end of the sentence. But there are also situations when you do
NOT need to separate the separable-prefix verbs and two-verb constructions.
Next slide
Here is your final rule for today. When the sentence has a modal verb or the verb werden,
you do NOT disconnect the separable-prefix verbs or two-verb constructions.
Before we see how it works in a sentence, let’s quickly review the modal verbs:
Müssen means must
Können means can or able to
Dürfen means allowed to
Mögen means to like
Sollen means should
Wollen means want
Werden means will (this verb is used to indicate the future tense)
Next slide
When you have a modal verb or the verb werden in a sentence, you should put any other
verb in the sentence to the very final position without any changes to its infinitive form.
For example, let’s take the verb ankommen again. Previously in this presentation, you
separated an and placed it at the very end of the sentence to say ‘Ich komme im August
an’.
Now, if you add a modal verb wollen for example to say „My friend WANTS to arrive in
August“, you should place the modal verb in the second spot in the sentence - don’t
forget to conjugate it. Then you place the WHOLE verb in its infinitive form at the very
end of the sentence. Now we have:
Mein Freund will im August ankommen.
Next slide
The same happens with two-verb constructions. Again, let’s take the verb spazieren
gehen as an example. In a sentence without a modal verb, you say:
Ich gehe mit Maria spazieren
But once you add a modal verb, for example sollen, to say ‘I SHOULD go for a walk
with Maria’, you place the conjugated modal verb in the second spot in the sentence.
Then you place both verbs in the two-verb construction at the very end of the sentence.
Now we have:
Ich soll mit Maria spazieren gehen.
Next slide
Now let’s practice. Please finish these sentences.
20 seconds no text
Next slide
Let’s check together.
Ich will abends fernsehen.
Ich soll morgen einkaufen gehen.
Ich werde morgen mit Maria spazieren gehen.
Next slide
Now, let’s recap. Most separable prefixes resemble other independent words, such as
prepositions or adverbs. If a prefix does NOT resemble a preposition or an adverb, it’s
inseparable and it should always stay attached to the verb.
When we talk about the position of the separable-prefix verbs and two-verb constructions
in a sentence, you should remember to separate the separable prefix and place it at the
very end of the sentence. You should do the same to the first verb in the two-verb
constructions – you place it at the very end of the sentence.
But remember that if you have a modal verb in the sentence or the verb werden, you
should place the other verbs at the very end of the sentence in the unchanged infinitive
form.
Next slide
This concludes our presentation on separable-prefix verbs and two-verb constructions.
We hope that this presentation helped you review the main rules for this topic. Tschüss!
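The word-order rules summarized in this recap amount to a simple placement algorithm. The following Python function is an illustrative sketch only (the function and its arguments are invented for this example and are not part of the study materials): the subject takes the first position, the conjugated verb (or modal) the second, and the prefix or unchanged infinitive the last.

```python
def place_verbs(subject: str, middle: str, prefix: str,
                conjugated: str, infinitive: str,
                modal_conjugated: str = "") -> str:
    """Order a German main clause containing a separable-prefix verb
    or two-verb construction, following the rules in this tutorial."""
    if modal_conjugated:
        # With a modal verb (or werden): the modal takes second position
        # and the other verb goes last, unchanged, in its infinitive form.
        words = [subject, modal_conjugated, middle, infinitive]
    else:
        # Otherwise: conjugated main verb second, separated prefix
        # (or first verb of the pair) last.
        words = [subject, conjugated, middle, prefix]
    return " ".join(words) + "."
```

With the script's own examples, place_verbs("Ich", "im August", "an", "komme", "ankommen") yields "Ich komme im August an.", and adding modal_conjugated="will" with subject "Mein Freund" yields "Mein Freund will im August ankommen."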
APPENDIX D
ONLINE INSTRUCTIONS FOR THE INSTRUCTORS
APPENDIX E
SURVEY FOR THE INSTRUCTORS
1) In the space below, please enter your first and last name.
2) Today you saw a computer-based grammar tutorial about "Regular Verb Conjugation in German" or "Separable-Prefixes in German". Is that correct?
Yes
No
3) You had three possible choices for this tutorial. Which one did you select?
Static - slides with static text with a voice-over narration
Animated - slides with animated text with a voice-over narration
Video - a video of a teacher
4) Why did you select this format of the tutorial and not the other ones?
5) Please indicate your agreement with the following statement and add a comment to explain your response: I liked my overall experience watching this computer-based grammar tutorial:
Strongly agree
Agree
Undecided
Disagree
Strongly disagree
6) In the space below, please describe your overall impression of this presentation, detailing whether you find it a valuable teaching tool.
7) The explanations in this tutorial are appropriate and valid for explaining this grammatical topic.
Strongly agree
Agree
Undecided
Disagree
Strongly disagree
8) This tutorial is appropriate for students at the Elementary German level.
Strongly agree
Agree
Undecided
Disagree
Strongly disagree
9) Today you saw an example of a computer-based grammar tutorial. Such tutorials can focus on different topics in grammar. The next several questions address computer-based grammar tutorials of this type in general. In the space below, please describe what you see as advantages of this kind of computer-based grammar tutorial in general.
10) In the space below, please describe what you see as disadvantages of this kind of computer-based grammar tutorial in general.
11) Please select your gender:
Male
Female
12) Please select your age group:
18 - 24
25 - 34
35 - 44
45 - 54
55 and older
13) Are you a native speaker of German?
Yes
No
14) Please describe your experience teaching German in the U.S.: for how many years, what levels of German, etc.
15) If you have any other comments or suggestions about this tutorial, please add them in the space below.
APPENDIX F
VC TEST
Note: The posttests for verb conjugation included the same items but in a different order.
APPENDIX G
SPV TEST
Note: The posttests for separable-prefix verbs included the same items but in a different order.
APPENDIX H
DESIGN QUESTIONNAIRE
APPENDIX I
SATISFACTION QUESTIONNAIRE IN EXPERIMENT 1 (VC)
Questions on the online VC Questionnaire3
1) In the space below, please indicate the unique nickname that you used in this study:
2) Today you saw a computer-based grammar tutorial about "Regular Verb Conjugation
in German". Is that correct?
Yes
No
Not sure
3) The format of this tutorial was a series of slides with static text and with a voice-over
narration. Is that correct?
Yes
No
Not sure
4) Please indicate your agreement with this statement: I watched the video very
carefully, paying attention to every slide.
Strongly agree
Agree
Undecided
Disagree
Strongly disagree
5) Please indicate your agreement with the following statement and add a comment to
explain your response: I liked my overall experience working with a computer-based
grammar tutorial in class today.
Strongly agree
Agree
Undecided
Disagree
Strongly disagree
6) Please rate the overall quality of this tutorial as a learning tool:
Very good
Good
Acceptable
Poor
Very poor
7) Please indicate your agreement with the following statements: This tutorial was
helpful.
Strongly agree
Agree
Undecided
Disagree
Strongly disagree
8) This tutorial was entertaining.
Strongly agree
Agree
Undecided
Disagree
Strongly disagree
9) This tutorial was engaging.
Strongly agree
Agree
Undecided
Disagree
Strongly disagree
3 This is an example of the questionnaire that was created for the static tutorials. The
other two tutorials included the same questions, but the tutorial description throughout the
surveys referred to the animated mode or to the mode with a recording of a teacher.
10) In the next two questions, please be as detailed as possible. Thank you. In the space
below, please describe what you liked about this presentation.
11) In the space below, please describe what you did not like about this presentation.
12) Please indicate your agreement with the following statement and add a comment to
explain your response: The rule explanations in this presentation were easy to follow and
understand.
Strongly agree
Agree
Undecided
Disagree
Strongly disagree
13) Please indicate your agreement with the following statement and add comments to
explain your response: Before this review session, I already knew all the rules pretty well.
Strongly agree
Agree
Undecided
Disagree
Strongly disagree
14) Please indicate your agreement with the following statement and add a comment to
explain your response: My knowledge of this grammatical topic improved after this
review session.
Strongly agree
Agree
Undecided
Disagree
Strongly disagree
15) Please indicate your agreement with the following statements and add comments to
explain your responses: After this review, I am certain I can apply the rules for this
grammatical topic correctly in speaking.
Strongly agree
Agree
Undecided
Disagree
Strongly disagree
16) After this review, I am certain I can apply the rules for this grammatical topic
correctly in writing.
Strongly agree
Agree
Undecided
Disagree
Strongly disagree
17) When you had time for practice during this presentation, did you complete the tasks?
I completed all of them
I completed most of them
I completed only some of them
I didn’t complete any of them
18) Today you saw an example of a computer-based grammar tutorial. Such tutorials can
focus on different topics in grammar. The next several questions address the
computer-based grammar tutorials of this type in general. In the space below, please
describe what you see as advantages of this kind of computer-based grammar tutorial in
general.
19) In the space below, please describe what you see as disadvantages of this kind of the
computer-based grammar tutorial in general.
20) Please indicate your agreement with the following statement and add a comment to
explain your response: I would support the idea of having such computer-based grammar
tutorials available to me as a part of my German course.
Strongly agree
Agree
Undecided
Disagree
Strongly disagree
21) Please indicate your agreement with the following statement and add a comment to
explain your response: If computer-based grammar tutorials like the presentation in this
study were available to me as a part of my German courses, I would use them.
Strongly agree
Agree
Undecided
Disagree
Strongly disagree
22) Please explain your response to the previous question, detailing why you would or
would not use such tutorials.
23) If you have any other additional comments about this tutorial, please add them in the
space below.
APPENDIX J
SATISFACTION QUESTIONNAIRE IN EXPERIMENT 2 (SPV)
Questions on the online SPV Questionnaire4
1) In the space below, please indicate the unique nickname that you used in this study:
2) Today you saw a computer-based grammar tutorial about "Separable-Prefix Verbs and
Two-Verb Constructions in German". Is that correct?
Yes
No
Not sure
3) The format of this tutorial was a series of slides with static text and with a voice-over
narration. Is that correct?
Yes
No
Not sure
4) Please indicate your agreement with this statement: I watched the video very
carefully, paying attention to every slide.
Strongly agree
Agree
Undecided
Disagree
Strongly disagree
5) Please indicate your agreement with the following statement and add a comment to
explain your response: I liked my overall experience working with a computer-based
grammar tutorial in class today.
Strongly agree
Agree
Undecided
Disagree
Strongly disagree
6) Please rate the overall quality of this tutorial as a learning tool:
Very good
Good
Acceptable
Poor
Very poor
7) Please indicate your agreement with the following statements: This tutorial was
helpful.
Strongly agree
Agree
Undecided
Disagree
Strongly disagree
8) This tutorial was entertaining.
Strongly agree
Agree
Undecided
Disagree
Strongly disagree
9) This tutorial was engaging.
Strongly agree
Agree
Undecided
Disagree
Strongly disagree
4 This is an example of the questionnaire that was created for the static tutorials. The
other two tutorials included the same questions, but the tutorial description throughout the
surveys referred to the animated mode or to the mode with a recording of a teacher.
10) In the next two questions, please be as detailed as possible. Thank you. In the space
below, please describe what you liked about this presentation.
11) In the space below, please describe what you did not like about this presentation.
12) Please indicate your agreement with the following statement and add a comment to
explain your response: The rule explanations in this presentation were easy to follow and
understand.
Strongly agree
Agree
Undecided
Disagree
Strongly disagree
13) Please indicate your agreement with the following statement and add comments to
explain your response: Before this review session, I already knew all the rules pretty well.
Strongly agree
Agree
Undecided
Disagree
Strongly disagree
14) Please indicate your agreement with the following statement and add a comment to
explain your response: My knowledge of this grammatical topic improved after this
review session.
Strongly agree
Agree
Undecided
Disagree
Strongly disagree
15) Please indicate your agreement with the following statements and add comments to
explain your responses: After this review, I am certain I can apply the rules for this
grammatical topic correctly in speaking.
Strongly agree
Agree
Undecided
Disagree
Strongly disagree
16) After this review, I am certain I can apply the rules for this grammatical topic
correctly in writing.
Strongly agree
Agree
Undecided
Disagree
Strongly disagree
17) When you had time for practice during this presentation, did you complete the tasks?
I completed all of them
I completed most of them
I completed only some of them
I didn’t complete any of them
18) In this study, you worked with two tutorials in different presentation modes. From
the options below, please select the one that applies to you. On Monday, I reviewed the
rules for the regular verb conjugation in German with:
The tutorial with animated text and a voice-over narration
The tutorial with a recording of a real teacher
I wasn't here on Monday.
19) Which presentation mode did you like more? (If you were absent on Monday, please
click the "Next Page" button to proceed.)
If you worked with the animated tutorial on Monday, select your options here:
I liked ... more
The tutorial with ANIMATED text and a voice-over narration (Monday)
The tutorial with STATIC text and a voice-over narration (today)
No preference
20) OR If you worked with the tutorial with a recording of a real teacher on Monday,
select your options here:
I liked ... more
The tutorial with a recording of a REAL TEACHER (Monday)
The tutorial with STATIC text and a voice-over narration (today)
No preference
21) Please explain your answer in the field below, detailing why you did or did not have
a preference for one type of tutorial over the other.
22) If you have any other additional comments about this tutorial, please add them in the
space below.
APPENDIX K
DEMOGRAPHIC QUESTIONNAIRE
APPENDIX L
INTERVIEW QUESTIONS
Outline for the semi-structured post-study interviews
1. Please describe your experience working with the first tutorial.
a. Did you like working with this tutorial? Why? Why not?
b. Did you find it helpful? Entertaining? Engaging? Why? Why not?
2. Please describe your experience working with the second tutorial.
a. Did you like working with this tutorial? Why? Why not?
b. Did you find it helpful? Entertaining? Engaging? Why? Why not?
3. Do you think it’s a valuable learning tool? Why? Why not?
4. Did you like the design of the tutorials and the pace of the information?
5. If next time you had a choice between these two tutorials, which one would you
select? Why?
6. What do you think about such computer-based grammar tutorials in general?
7. In your opinion, what are the advantages of such tutorials? What are the
disadvantages?
8. Would you want such tutorials to be available to you as part of your German class?
Why? Why not?
9. If such tutorials were available to you, would you use them?
10. Could you describe a situation when you think you would use such tutorials?
11. If such tutorials were available as an option in your German class, would you want
them to be optional or mandatory? Why?
APPENDIX M
INSTRUCTIONS FOR THE PARTICIPANTS
APPENDIX N
SCORING CRITERIA FOR THE TARGET STRUCTURES
1. Criteria for scoring the tests on the regular verb conjugation (VC)
Out of 37 points
Rule (VC): The main rule is to drop the infinitive ending –en and add a set of personal
pronoun endings to the verb stem. This rule, however, has a set of sub-rules depending on
the last letter of the verb stem. Thus, if the last letter of the stem is –t or –d, an additional
–e is added to the endings for the second-person singular and plural and third-person
singular. The same sub-rule applies to verb stems that end in the consonant clusters –gn,
–chn, –dm, or –ffn. However, a different sub-rule applies if the stem ends in –s, –z, –ss, or
–ß: the personal ending for the second-person singular becomes –t instead of –st.
For example:
Main rule: gehen (‘to go’) → stem: geh- → conjugation: ich geh-e, du geh-st, er geh-t,
etc. (‘I go,’ ‘you go,’ ‘he goes’)
Main rule + sub-rule for stems ending in –t, –d, –gn, –chn, –dm, or –ffn: finden (‘to find’)
→ stem: find- → conjugation: ich find-e, du find-e-st, er find-e-t, etc. (‘I find,’ ‘you find,’
‘he finds’)
Main rule + sub-rule for stems ending in –s, –z, –ss, or –ß: tanzen (‘to dance’) → stem:
tanz- → conjugation: ich tanz-e, du tanz-t, er tanz-t, etc. (‘I dance,’ ‘you dance,’ ‘he
dances’)
Scoring scheme: Take off a point for each verb whose conjugation ending is incorrect.
Disregard all other mistakes. Also, if one or more blanks are left empty, take a point off
for each empty blank.
Examples of grading:
Mein Bruder Georg tanzt - correct
Mein Bruder Georg tanzst – incorrect, take a point off.
Du lernst – correct
Du liernst – correct although there is a misspelling.
Du lernest – incorrect, take a point off.
2. Criteria for scoring the tests on the separable-prefix verbs (SPV)
Partial scoring
Out of 28 points
Rule (SPV). The main rule is to separate the prefix from the main verb and place it in the
very last spot in the sentence, whereas the main verb always takes the usual second spot
in the sentence. A sub-rule here states that prefixes that resemble prepositions or adverbs
should be separated from the main verb and placed at the very end of the sentence.
Prefixes that do not resemble other parts of speech, such as ver-, zer-, er-, and be-, are
considered non-separable and should stay connected with the main verb. Another
sub-rule concerns the position of separable-prefix verbs or two-verb constructions after modal
verbs and the verb werden. If a sentence has a modal verb or the verb werden, these verbs
take the second position in the sentence and then the other verb (separable part together
with the main verb) is placed at the very end of the sentence in its infinitive form.
For example:
Main rule: aufstehen (to get up): Ich stehe um 7 Uhr auf. (I get up at 7 a.m.)
Main rule + sub-rule for prefixes that do not resemble prepositions or adverbs:
verstehen (to understand): Ich verstehe dich. (I understand you.)
Main rule + sub-rule for positions after modal verbs: aufstehen + sollen: Ich soll um 7
Uhr aufstehen.
Scoring criteria for correct/incorrect scoring
Out of 12 points
Scoring scheme: Take off a point for each sentence that is incorrect with regard to the
separable-prefix verbs and two-verb constructions. Disregard all other mistakes. Also, if
nothing is written or the sentence is not completed, take a point off for each such
blank.
Examples of grading:
Ich komme im August an. – correct
Ich kommen im August an. – correct
Ich komme an im August. – incorrect
Ich komme im August - incorrect
Criteria for partial scoring
Out of 28 points
There are different scoring criteria depending on the sentence type:
1. For sentences with a separable-prefix verb or a two-verb construction – maximum 3 points
 1 point awarded if the main verb is in the second place
 1 point awarded if the main verb and the prefix are separated and switched
 1 point awarded if the final position is occupied either by a verb or a prefix
Disregard all other mistakes, such as incorrect conjugation endings, incorrect
pronouns, or misspellings.
Examples:
Ich gehe abends mit Maria spazieren. – all correct (3 points)
Ich spazieren gehen abends mit Maria. – verb second (1 point), not switched (0 points),
final position not taken (0 points) – total: 1 point
Ich gehen abends spazieren mit Maria – verb second (1 point), switched (1 point), final
position not taken (0 points) – total: 2 points
Ich abends mit Maria spazieren gehen. – no verb in the second position (0 points), not
switched (0 points), final position taken (1 point) – total: 1 point.
Ich abends spazieren gehen mit Maria. – no verb in the second position (0 points), not
switched (0 points), no verb in the final position (0 points) – total: 0 points
2. For sentences with verbs with inseparable prefixes – maximum 1 point
 1 point is awarded ONLY if the prefix is not separated. Disregard ANY other
mistakes.
Examples:
Ich verkaufe mein Auto nächste Woche. – correct (1 point)
Ich mein Auto nächste Woche verkaufe. – correct (1 point)
Ich kaufe mein Auto nächste Woche ver. – incorrect (0 points)
3. For sentences with a modal verb and a separable-prefix verb or two-verb
construction – maximum 3 points
 1 point is awarded if the modal verb is in the second position.
 1 point is awarded if the form of the separable-prefix verb wasn’t changed.
 1 point is awarded if the separable-prefix verb or the prefix is in the final position.
Disregard all other mistakes.
Examples:
Ich will mit Maria abends spazieren gehen. – all correct (3 points)
Abends ich will mit Maria spazieren gehen. – modal verb not in the second position (0
points), verb form not changed (1 point), final position taken (1 point) – total: 2 points
Abends ich will mit Maria gehen spazieren. – modal verb not in the second position (0
points), verb form changed (0 points), final position taken (1 point) – total: 1 point
Ich gehe spazieren mit Maria gehen will. – modal verb not in the second position (0
points), verb form changed (0 points), final position not taken (0 points) – total: 0 points
Ich soll im August an kommen. – modal second (1 point), verb form changed (0 points),
final position taken (1 point). – total: 2 points.
Ich soll kommen im August an. – modal second (1 point), verb form changed (0 points),
final position taken (1 point) – total: 2 points
Note:
- If the space for the sentences is left completely blank, take off the maximum
number of points for this type of sentence.
- If the sentence has been started but abandoned, grade the part that is written and
regard the missing parts as mistakes.
Example:
Ich will ……………………………………… (for Ich will mit Maria spazieren
gehen). Modal second (1 point), switched (0 points), final position taken (0 points) –
total: 1 point
- If there is a comma in the sentence, regard it as a completely new sentence after
the comma.
Example:
Abends, ich gehe mit Maria spazieren – verb second (1 point), switched (1 point),
final position taken (1 point) – total: 3 points.
Abends, gehe ich mit Maria spazieren – verb NOT second (0 points), switched (1
point), final position taken (1 point) – total: 2 points
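The three checks for modal-verb sentences lend themselves to a mechanical restatement. The sketch below is my own illustration of the rubric, not the procedure the raters actually used; the function name and the simple string heuristics are assumptions for demonstration only.

```python
def score_modal_sentence(sentence, modal, infinitive):
    """Apply the three rubric checks for sentences with a modal verb and a
    separable-prefix (or two-verb) construction; returns 0-3 points.

    modal:      the expected modal form, e.g. "will"
    infinitive: the expected unseparated verb phrase, e.g. "spazieren gehen"
    """
    tokens = sentence.lower().replace(".", "").split()
    parts = infinitive.split()
    points = 0
    # 1. Modal verb in the second position.
    if len(tokens) > 1 and tokens[1] == modal:
        points += 1
    # 2. Form of the separable-prefix verb not changed (phrase appears intact).
    if infinitive in " ".join(tokens):
        points += 1
    # 3. The verb, verb phrase, or split-off prefix occupies the final position.
    last = tokens[-1]
    if any(last == p or p.endswith(last) or p.startswith(last) for p in parts):
        points += 1
    return points
```

Run against the worked examples above, this toy scorer reproduces the totals (3, 2, 1, 0 for the spazieren gehen sentences; 2 and 2 for the ankommen sentences), though real learner answers would of course defeat such simple string matching.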
APPENDIX O
OUTPUT FOR THE STATISTICAL ANALYSES FOR RQ 1
Results for pretest-posttest comparison (VC)
Table O1. Multivariate tests for test scores (VC)
Effect                Test                Value  F         Hypothesis df  Error df  Sig.
Test                  Pillai's Trace      .755   262.226a  1.000          85.000    .000*
                      Wilks' Lambda       .245   262.226a  1.000          85.000    .000*
                      Hotelling's Trace   3.085  262.226a  1.000          85.000    .000*
                      Roy's Largest Root  3.085  262.226a  1.000          85.000    .000*
Test * Tutorial type  Pillai's Trace      .018   .787a     2.000          85.000    .458
                      Wilks' Lambda       .982   .787a     2.000          85.000    .458
                      Hotelling's Trace   .019   .787a     2.000          85.000    .458
                      Roy's Largest Root  .019   .787a     2.000          85.000    .458
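A note on reading these multivariate tables (my own consistency check, not part of the SPSS output): when the hypothesis has a single degree of freedom, the four multivariate statistics are deterministic functions of one another, and each can be recovered from the reported F. A minimal sketch:

```python
def multivariate_stats_from_f(f_value, hyp_df, error_df):
    """With a single nonzero eigenvalue (hypothesis df = 1), recover
    Hotelling's trace, Wilks' Lambda, and Pillai's trace from the F value."""
    hotelling = f_value * hyp_df / error_df
    wilks = 1.0 / (1.0 + hotelling)
    pillai = 1.0 - wilks
    return hotelling, wilks, pillai

# Values from Table O1, Test effect: F = 262.226 with df = (1, 85).
hotelling, wilks, pillai = multivariate_stats_from_f(262.226, 1, 85)
# These round to 3.085, .245, and .755, matching the Value column above.
```

This is why every row of a given effect in these tables reports the same F, degrees of freedom, and significance.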
Table O2. Tests of between-subjects effects for test scores (VC)
Source         Type III Sum of Squares  df  Mean Square  F         Sig.
Intercept      94893.828                1   94893.828    1102.753  .000
Tutorial type  235.236                  2   117.618      1.367     .260
Error          7314.400                 85  86.052
Results for ratings of test difficulty (VC)
Table O3. Multivariate tests for test difficulty ratings (VC)
Effect                Test                Value  F        Hypothesis df  Error df  Sig.
Test                  Pillai's Trace      .462   68.776a  1.000          80.000    .000*
                      Wilks' Lambda       .538   68.776a  1.000          80.000    .000*
                      Hotelling's Trace   .860   68.776a  1.000          80.000    .000*
                      Roy's Largest Root  .860   68.776a  1.000          80.000    .000*
Test * Tutorial type  Pillai's Trace      .021   .854a    2.000          80.000    .430
                      Wilks' Lambda       .979   .854a    2.000          80.000    .430
                      Hotelling's Trace   .021   .854a    2.000          80.000    .430
                      Roy's Largest Root  .021   .854a    2.000          80.000    .430
Table O4. Tests of between-subjects effects for test difficulty ratings (VC)
Source         Type III Sum of Squares  df  Mean Square  F         Sig.
Intercept      1533.027                 1   1533.027     1323.878  .000
Tutorial type  4.816                    2   2.408        2.080     .132
Error          92.639                   80  1.158
Results for pretest-posttest comparison (SPV)
Table O5. Tests of within-subjects effects for test scores (SPV)
Source                Correction          Type III Sum of Squares  df      Mean Square  F        Sig.
Test                  Sphericity Assumed  365.521                  1       365.521      169.537  .000*
                      Greenhouse-Geisser  365.521                  1.000   365.521      169.537  .000*
                      Huynh-Feldt         365.521                  1.000   365.521      169.537  .000*
                      Lower-bound         365.521                  1.000   365.521      169.537  .000*
Test * Tutorial type  Sphericity Assumed  2.844                    2       1.422        .660     .520
                      Greenhouse-Geisser  2.844                    2.000   1.422        .660     .520
                      Huynh-Feldt         2.844                    2.000   1.422        .660     .520
                      Lower-bound         2.844                    2.000   1.422        .660     .520
Error(Test)           Sphericity Assumed  189.727                  88      2.156
                      Greenhouse-Geisser  189.727                  88.000  2.156
                      Huynh-Feldt         189.727                  88.000  2.156
                      Lower-bound         189.727                  88.000  2.156
Table O6. Tests of between-subjects effects for test scores (SPV)
Source         Type III Sum of Squares  df  Mean Square  F         Sig.
Intercept      11472.536                1   11472.536    1810.413  .000
Tutorial type  31.995                   2   15.997       2.524     .086
Error          557.654                  88  6.337
Results for pretest-posttest comparison (SPV, partial)
Table O7. Multivariate tests for test scores (SPV, partial scoring)
Effect                Test                Value  F         Hypothesis df  Error df  Sig.
Test                  Pillai's Trace      .635   152.893a  1.000          88.000    .000*
                      Wilks' Lambda       .365   152.893a  1.000          88.000    .000*
                      Hotelling's Trace   1.737  152.893a  1.000          88.000    .000*
                      Roy's Largest Root  1.737  152.893a  1.000          88.000    .000*
Test * Tutorial type  Pillai's Trace      .002   .100a     2.000          88.000    .905
                      Wilks' Lambda       .998   .100a     2.000          88.000    .905
                      Hotelling's Trace   .002   .100a     2.000          88.000    .905
                      Roy's Largest Root  .002   .100a     2.000          88.000    .905
Table O8. Tests of between-subjects effects for test scores (SPV, partial scoring)
Source         Type III Sum of Squares  df  Mean Square  F         Sig.
Intercept      84355.290                1   84355.290    4400.334  .000
Tutorial type  86.945                   2   43.473       2.268     .110
Error          1686.978                 88  19.170
Results for ratings of test difficulty (SPV)
Table O9. Multivariate tests for test difficulty ratings (SPV)
Effect                Test                Value  F        Hypothesis df  Error df  Sig.
Test                  Pillai's Trace      .404   58.335a  1.000          86.000    .000*
                      Wilks' Lambda       .596   58.335a  1.000          86.000    .000*
                      Hotelling's Trace   .678   58.335a  1.000          86.000    .000*
                      Roy's Largest Root  .678   58.335a  1.000          86.000    .000*
Test * Tutorial type  Pillai's Trace      .018   .772a    2.000          86.000    .465
                      Wilks' Lambda       .982   .772a    2.000          86.000    .465
                      Hotelling's Trace   .018   .772a    2.000          86.000    .465
                      Roy's Largest Root  .018   .772a    2.000          86.000    .465
Table O10. Tests of between-subjects effects for test difficulty ratings (SPV)
Source         Type III Sum of Squares  df  Mean Square  F         Sig.
Intercept      1858.427                 1   1858.427     1836.931  .000
Tutorial type  2.238                    2   1.119        1.106     .336
Error          87.006                   86  1.012
APPENDIX P
RESULTS FOR RQ 1 WITH MISSING VALUES ADDED
Results for pretest-posttest comparison (VC)
Table P1. Descriptive statistics for test scores (VC, missing values added)
                                        Mean     Std. Deviation  N
Pretest (missing values added)   ST     19.6628  6.74092         38
                                 AN     18.7389  5.86301         38
                                 RT     16.5357  6.25055         39
                                 Total  18.2970  6.37779         115
Posttest (missing values added)  ST     28.1127  7.32042         38
                                 AN     28.4811  6.46082         38
                                 RT     26.3643  7.66451         39
                                 Total  27.6415  7.16863         115
Table P2. Multivariate tests for test scores (VC, missing values added)
Effect                Test                Value  F         Hypothesis df  Error df  Sig.
Test                  Pillai's Trace      .717   283.620a  1.000          112.000   .000*
                      Wilks' Lambda       .283   283.620a  1.000          112.000   .000*
                      Hotelling's Trace   2.532  283.620a  1.000          112.000   .000*
                      Roy's Largest Root  2.532  283.620a  1.000          112.000   .000*
Test * Tutorial type  Pillai's Trace      .011   .644a     2.000          112.000   .527
                      Wilks' Lambda       .989   .644a     2.000          112.000   .527
                      Hotelling's Trace   .012   .644a     2.000          112.000   .527
                      Roy's Largest Root  .012   .644a     2.000          112.000   .527
Table P3. Tests of between-subjects effects for test scores (VC, missing values added)
Source         Type III Sum of Squares  df   Mean Square  F         Sig.
Intercept      121467.728               1    121467.728   1655.698  .000
Tutorial type  275.364                  2    137.682      1.877     .158
Error          8216.706                 112  73.363
Results for ratings of test difficulty (VC)
Table P4. Descriptive statistics for test difficulty ratings (VC, missing values added)
                                                          Mean    Std. Deviation  N
Pretest difficulty ratings (missing values added)   ST    3.4615  .86918          37
                                                    AN    3.3218  .73152          38
                                                    RT    3.7847  .59255          39
                                                    Total 3.5255  .75660          114
Posttest difficulty ratings (missing values added)  ST    2.8492  .94761          37
                                                    AN    2.5824  .90977          38
                                                    RT    2.7023  .83194          39
                                                    Total 2.7100  .89537          114
Table P5. Tests of between-subjects effects for test difficulty ratings (VC, missing values added)
Source         Type III Sum of Squares  df   Mean Square  F         Sig.
Intercept      2214.129                 1    2214.129     2326.372  .000
Tutorial type  3.423                    2    1.712        1.798     .170
Error          105.644                  111  .952
Table P6. Multivariate tests for test difficulty ratings (VC, missing values added)
Effect                Test                Value  F        Hypothesis df  Error df  Sig.
Test                  Pillai's Trace      .460   94.739a  1.000          111.000   .000*
                      Wilks' Lambda       .540   94.739a  1.000          111.000   .000*
                      Hotelling's Trace   .854   94.739a  1.000          111.000   .000*
                      Roy's Largest Root  .854   94.739a  1.000          111.000   .000*
Test * Tutorial type  Pillai's Trace      .049   2.857a   2.000          111.000   .062
                      Wilks' Lambda       .951   2.857a   2.000          111.000   .062
                      Hotelling's Trace   .051   2.857a   2.000          111.000   .062
                      Roy's Largest Root  .051   2.857a   2.000          111.000   .062
Results for pretest-posttest comparison (SPV)
Table P7. Descriptive statistics for test scores (SPV, missing values added)
                                        Mean     Std. Deviation  N
Pretest (missing values added)   ST     19.3235  3.68227         37
                                 AN     19.2556  4.18472         39
                                 RT     18.3417  3.91766         35
                                 Total  18.9901  3.92775         111
Posttest (missing values added)  ST     24.0276  2.61494         37
                                 AN     23.9130  3.07272         39
                                 RT     23.4459  3.71140         35
                                 Total  23.8039  3.13351         111
Table P8. Multivariate tests for test scores (SPV, missing values added)
Effect                Test                Value  F         Hypothesis df  Error df  Sig.
Test                  Pillai's Trace      .661   211.045a  1.000          108.000   .000*
                      Wilks' Lambda       .339   211.045a  1.000          108.000   .000*
                      Hotelling's Trace   1.954  211.045a  1.000          108.000   .000*
                      Roy's Largest Root  1.954  211.045a  1.000          108.000   .000*
Test * Tutorial type  Pillai's Trace      .003   .178a     2.000          108.000   .837
                      Wilks' Lambda       .997   .178a     2.000          108.000   .837
                      Hotelling's Trace   .003   .178a     2.000          108.000   .837
                      Roy's Largest Root  .003   .178a     2.000          108.000   .837
Table P9. Tests of between-subjects effects for test scores (SPV, missing values added)
Source         Type III Sum of Squares  df   Mean Square  F         Sig.
Intercept      101322.608               1    101322.608   5236.733  .000
Tutorial type  26.201                   2    13.101       .677      .510
Error          2089.631                 108  19.348
Results for ratings of test difficulty (SPV)
Table P10. Descriptive statistics for test difficulty ratings (SPV, missing values added)
                                                          Mean    Std. Deviation  N
Pretest difficulty ratings (missing values added)   ST    3.5023  .82217          37
                                                    AN    3.7329  .58982          39
                                                    RT    3.7022  .81161          34
                                                    Total 3.6458  .74476          110
Posttest difficulty ratings (missing values added)  ST    2.8751  .82791          37
                                                    AN    2.9475  .72354          39
                                                    RT    3.1169  .91346          34
                                                    Total 2.9755  .81942          110
Table P11. Tests of between-subjects effects for test difficulty ratings (SPV, missing values added)
Source         Type III Sum of Squares  df   Mean Square  F         Sig.
Intercept      2406.730                 1    2406.730     2727.912  .000
Tutorial type  1.835                    2    .917         1.040     .357
Error          94.402                   107  .882
Table P12. Multivariate tests for ratings of test difficulty (SPV, missing values added)
Effect                Test                Value  F        Hypothesis df  Error df  Sig.
Test                  Pillai's Trace      .397   70.328a  1.000          107.000   .000*
                      Wilks' Lambda       .603   70.328a  1.000          107.000   .000*
                      Hotelling's Trace   .657   70.328a  1.000          107.000   .000*
                      Roy's Largest Root  .657   70.328a  1.000          107.000   .000*
Test * Tutorial type  Pillai's Trace      .011   .601a    2.000          107.000   .550
                      Wilks' Lambda       .989   .601a    2.000          107.000   .550
                      Hotelling's Trace   .011   .601a    2.000          107.000   .550
                      Roy's Largest Root  .011   .601a    2.000          107.000   .550
APPENDIX Q
RESULTS FOR RQ 2 WITH EXCLUSION CRITERIA APPLIED
Table Q1. Descriptive statistics for satisfaction ratings in Experiment 1 (VC)
Rating of tutorial        Group  N   Mean    Std. Deviation  ANOVA Sig.
Overall satisfaction      ST     29  3.9310  .79871          .419
                          AN     26  4.1154  .81618
                          RT     27  3.8148  .87868
                          Total  82  3.9512  .83003
Helpful                   ST     29  4.2414  .51096          .210
                          AN     26  4.3462  .48516
                          RT     28  4.0357  .88117
                          Total  83  4.2048  .65814
Entertaining              ST     29  2.6207  .94165          .025*
                          AN     26  3.2308  1.06987
                          RT     28  2.5714  .87891
                          Total  83  2.7952  .99706
Engaging                  ST     29  3.0690  .99753          .114
                          AN     26  3.5769  .94543
                          RT     28  3.1071  .99403
                          Total  83  3.2410  .99499
Value as a learning tool  ST     29  3.9655  .82301          .190
                          AN     26  4.3077  .61769
                          RT     28  3.9643  .88117
                          Total  83  4.0723  .79300
Table Q2. Descriptive statistics for satisfaction ratings in Experiment 2 (SPV)
Rating of tutorial        Group  N   Mean    Std. Deviation  ANOVA Sig.
Overall satisfaction      ST     28  3.7143  1.01314         .881
                          AN     33  3.8182  .91701
                          RT     29  3.7241  .75103
                          Total  90  3.7556  .89079
Helpful                   ST     28  4.1786  .61183          .529
                          AN     33  4.0909  .67840
                          RT     29  4.0000  .46291
                          Total  90  4.0889  .59293
Entertaining              ST     28  2.6071  .91649          .194
                          AN     33  2.9697  .88335
                          RT     29  3.0000  .92582
                          Total  90  2.8667  .91431
Engaging                  ST     28  3.0000  .98131          .088
                          AN     33  3.5152  .87039
                          RT     29  3.3103  .84951
                          Total  90  3.2889  .91485
Value as a learning tool  ST     28  3.8929  .87514          .612
                          AN     33  3.9394  1.08799
                          RT     28  3.7143  .71270
                          Total  89  3.8539  .91142
Table Q3. Frequency counts for general attitude ratings (across experiments)
                     Frequency  Percent
“Computer-based grammar tutorials of this kind should be available in my German language class.”
  Strongly disagree  1          1.2 %
  Disagree           5          6.0 %
  Undecided          15         18.1 %
  Agree              36         43.4 %
  Strongly agree     26         31.3 %
  Total              83         100.0 %
“If computer-based grammar tutorials of this kind were available to me, I would use them.”
  Strongly disagree  3          3.6 %
  Disagree           3          3.6 %
  Undecided          10         12.0 %
  Agree              42         50.6 %
  Strongly agree     25         30.1 %
  Total              83         100.0 %
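The Percent column in Table Q3 is simply each frequency count over the 83 respondents. A quick arithmetic check (my own addition, shown for the first statement; the second statement's percentages follow the same way):

```python
# Frequency counts for "...should be available in my German language class."
counts = [1, 5, 15, 36, 26]  # strongly disagree ... strongly agree
total = sum(counts)          # 83 respondents
percents = [round(100 * c / total, 1) for c in counts]
# percents reproduces the table's column: 1.2, 6.0, 18.1, 43.4, 31.3
```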
Table Q4. ANOVA results for general attitude ratings (across experiments)
                  Sum of Squares  df  Mean Square  F      Sig.
“Computer-based grammar tutorials of this kind should be available in my German language class.”
  Between Groups  1.751           2   .875         1.027  .363
  Within Groups   68.201          80  .853
  Total           69.952          82
“If computer-based grammar tutorials of this kind were available to me, I would use them.”
  Between Groups  1.909           2   .954         1.059  .352
  Within Groups   72.091          80  .901
  Total           74.000          82
Table Q5. Ratings for the static tutorials across experiments
STATIC                                    Group  N   Mean    Std. deviation  Sig. (2-tailed, equal variances assumed)
Rating of tutorial: overall satisfaction  VC     29  3.9310  .79871          .373
                                          SPV    28  3.7143  1.01314
Rating of tutorial: helpful               VC     29  4.2414  .51096          .675
                                          SPV    28  4.1786  .61183
Rating of tutorial: entertaining          VC     29  2.6207  .94165          .956
                                          SPV    28  2.6071  .91649
Rating of tutorial: engaging              VC     29  3.0690  .99753          .794
                                          SPV    28  3.0000  .98131
Rating of tutorial: value as a learning   VC     29  3.9655  .82301          .748
tool                                      SPV    28  3.8929  .87514
Table Q6. Ratings for the animated tutorials across experiments
ANIMATED                                  Group  N   Mean    Std. deviation  Sig. (2-tailed, equal variances assumed)
Rating of tutorial: overall satisfaction  VC     26  4.1154  .81618          .200
                                          SPV    33  3.8182  .91701
Rating of tutorial: helpful               VC     26  4.3462  .48516          .111
                                          SPV    33  4.0909  .67840
Rating of tutorial: entertaining          VC     26  3.2308  1.06987         .309
                                          SPV    33  2.9697  .88335
Rating of tutorial: engaging              VC     26  3.5769  .94543          .795
                                          SPV    33  3.5152  .87039
Rating of tutorial: value as a learning   VC     26  4.1923  .80096          .553
tool                                      SPV    32  4.0625  .84003
Table Q7. Ratings for the tutorials with a recording of a teacher across experiments
RECORDING OF A TEACHER                    Group  N   Mean    Std. deviation  Sig. (2-tailed, equal variances assumed)
Rating of tutorial: overall satisfaction  VC     27  3.8148  .87868          .679
                                          SPV    29  3.7241  .75103
Rating of tutorial: helpful               VC     28  4.0357  .88117          .848
                                          SPV    29  4.0000  .46291
Rating of tutorial: entertaining          VC     28  2.5714  .87891          .079
                                          SPV    29  3.0000  .92582
Rating of tutorial: engaging              VC     28  3.1071  .99403          .410
                                          SPV    29  3.3103  .84951
Rating of tutorial: value as a learning   VC     28  3.9643  .88117          .248
tool                                      SPV    28  3.7143  .71270