
36
Model-Facilitated Learning
Ton de Jong and Wouter R. van Joolingen
University of Twente, Enschede, the Netherlands
CONTENTS
Introduction
Learning from Computer Models
Learning by Creating Computer Models
Model-Based Inquiry Learning
Conclusions
References
ABSTRACT
In this chapter, we discuss the possible roles of models
in learning, with computer models (simulations) as our
focus. In learning from models, students’ learning processes center around the exploration of a model by
changing values of input variables and observing resulting values of output variables. In this process, they
experience rules of the simulated domain or discover
aspects of these rules. Models can also play a role in
the learning process when we ask students to construct
models. In learning by modeling, students are required
to construct an external model that can be simulated to
reproduce phenomena observed in a real system.
Finally, both ways of using models can be combined
in what we refer to as model-based inquiry learning.
Here, students encounter a computer model that they can explore by changing values of the input variables and observing values of the output variables; they then reconstruct the model, including its internal functioning, so that the two models behave similarly.
KEYWORDS
Inquiry learning: “An approach to learning that involves a process of exploring the natural or material world, and that leads to asking questions, making discoveries, and rigorously testing those discoveries in the search for new understanding” (NSF, 2000, p. 2).
Model: Structured representation of a system in terms
of variables or concepts and their (quantitative or
qualitative) relations that can be used for predicting
system behavior by means of simulations.
Modeling: The process of creating simulations as a
means for learning.
Simulation: Computer-based model of a natural process
or phenomenon that reacts to changes in values of
input variables by displaying the resulting values of
output variables.
INTRODUCTION
In many domains, especially in science, learning
involves the acquisition and construction of models
(Lehrer and Schauble, 2006). Models are defined as
“a set of representations, rules, and reasoning structures that allow one to generate predictions and explanations” (Schwarz and White, 2005, p. 166). Models
can be seen as structured representations of (parts of)
domains in terms of variables or concepts and their
interconnections. Each scientific domain has a set of
externally represented domain models that are generally agreed upon by researchers working in these
domains. Individuals have personal models that may
be externally represented or that may be mental models
(Gentner and Stevens, 1983). Scientific practice can
be seen as a process of constantly adapting, refining,
or changing models, under the influence of observations or of constraints set by the properties of the
models themselves. In a similar vein, learning science
consists of creating and adapting mental models with
the aim of moving the mental model toward an expert
or theoretical domain model (Clement, 2000; Snyder,
2000). Such adaptation of mental models may evolve
with gradual modifications or involve more radical
changes in the nature of the mental model (Chi, 1992).
In its influential report, the American Association
for the Advancement of Science (AAAS, 1989) stated
that students need time to explore, make observations,
take wrong turns, test ideas, rework tasks, build things,
calibrate instruments, collect things, construct physical
and mathematical models, learn required mathematics
and other relevant concepts, read materials, discuss and
debate ideas, wrestle with unfamiliar and counterintuitive ideas, and explore alternative perspectives.
According to this description, learning resembles creating and adapting mental models by using scientific
inquiry. In 2000, the National Science Foundation
defined inquiry learning as “an approach to learning
that involves a process of exploring the natural or
material world, and that leads to asking questions,
making discoveries, and rigorously testing those discoveries in the search for new understanding” (NSF,
2000, p. 2).
In model-facilitated learning, the natural or material world in the above definition is replaced by a
model. These models can take many forms (e.g., a
simplified sketch or a concept map; see Gobert, 2000);
however, in this chapter, we speak of learning from
models only when students can interact with the model,
which means that they can manipulate input to the
model with a reaction of the model as a result. As a
further specification, we focus on computer models
(simulations) (de Jong, 1991) in this chapter. This
restriction means that the models we discuss are executable; that is, they use some computational algorithm
to generate output (i.e., a change in the values describing the model’s state) on the basis of students’ input
(Hestenes, 1987). This process is called simulation. In
learning from models, students’ learning processes
center around the exploration of a model by changing
values of input variables and observing resulting values of output variables. In the process, they experience
rules of the simulated domain or discover aspects of
these rules (de Jong, 2006a). Models can also play a
role in the learning process when we ask students to
construct models. In this learning by modeling, students are required to construct an external model, with
the objective of making the model behave as much like
the real system as possible (Penner, 2001). Finally,
both ways of using models can be combined in what
we refer to as model-based inquiry learning. Here,
students receive a model that they can explore by
changing values of the input variables and observing
values of the output variables. They then have to reconstruct this model, including its internal functioning, in
such a way that both models will behave similarly
(Löhner et al., 2005; van Joolingen et al., 2005).
In this chapter, we discuss three approaches to
using models in education: one in which students try
to grasp the properties of an existing model (learning
from models), one in which students learn from creating models (learning by modeling), and a way of learning in which these two forms are combined. In doing
so, we concentrate on learning in the science domains.
LEARNING FROM COMPUTER MODELS
In learning from computer (simulation) models, students try to build a mental model based on the behavior
of a given model with which they can experiment.
There is a large variety of models and of possible ways
to interact with them (van Joolingen and de Jong,
1991), but students basically interact with a computer
model through a model interface that allows them to
change values of variables in the model and that displays the computed results of their manipulations in
one way or another.
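To make the notion of an executable model concrete, the sketch below shows in Python the minimal structure such simulations share: the learner sets values of input variables, a computational rule updates the model state, and the resulting output values are read back. The model itself (a single quantity decaying at a constant relative rate) is our own illustrative example, not code from any of the systems discussed in this chapter.

```python
# Minimal sketch of an executable model (illustrative example; not code from any
# of the systems cited in this chapter).

def run_model(initial_amount: float, decay_rate: float,
              steps: int = 100, dt: float = 0.1) -> list[float]:
    """One-variable model: the amount decays at a constant relative rate."""
    amount = initial_amount                      # state / output variable
    history = [amount]
    for _ in range(steps):
        amount += -decay_rate * amount * dt      # the computational algorithm (Euler step)
        history.append(amount)
    return history

# A learner "experiments" by manipulating input variables and observing outputs:
fast = run_model(initial_amount=10.0, decay_rate=0.8)
slow = run_model(initial_amount=10.0, decay_rate=0.2)
print(fast[-1], slow[-1])                        # compare the outcomes of two runs
```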
Computer technology supporting learning from
computer models began to be developed in the late
1970s and 1980s. Of course, many simulations existed
that were used more or less directly in an educational
context, but only a few systems were specifically
geared toward education. Many of these systems primarily concerned either operational models or a combination of operational and conceptual models.
SOPHIE, for example, was an environment for teaching electronic troubleshooting skills, but it was also
designed to give students insight into electronic laws,
circuit causality, and the functional organization of
particular devices (Brown et al., 1982). QUEST also
focused on electronic circuit troubleshooting (White
and Frederiksen, 1989). QUEST used model progression; circuits became increasingly more complex as
students progressed through QUEST, and they could
view circuits from different perspectives (e.g., a functional or a behavioral perspective). Another system that
combined the learning of operational and conceptual
knowledge was STEAMER. This system simulated a
complex steam propulsion system for large ships (Hollan et al., 1984). Systems such as MACH-III for complex radar devices (Kurland and Tenney, 1988) and
IMTS (Towne et al., 1990) also focused on troubleshooting. Smithtown was one of the first educational
simulations that targeted a conceptual domain (economic laws) and that included several support mechanisms for students (Shute and Glaser, 1990). In
Smithtown, students could explore simulated markets.
They could change such variables as labor costs and
population income and observe the effects on, for
example, prices. A further example of early conceptual
simulations for education was ARK (Alternate Reality
Kit), a set of simulations on different physics topics
(e.g., collisions) that provided students with direct
manipulation interfaces (Scanlon and Smith, 1988;
Smith, 1986).
Although scaffolds were already present to some degree in the systems cited here, subsequent research has made clear that learning from models can be successful only if the student is sufficiently scaffolded. Unscaffolded inquiry is generally seen as not fruitful (Mayer, 2004). Cognitive scaffolds can be integrated with the simulation software and aim at one or more of the inquiry processes described below. Overviews
of systems that contain cognitive scaffolds have been
presented by de Jong and van Joolingen (1998), Quintana et al. (2004), Linn et al. (2004), and recently de
Jong (2006b).
Identifying the basis of adequate scaffolding
requires a detailed insight into the learning processes
associated with learning from models (de Jong and van
Joolingen, 1998). The overall learning process that is
associated with learning from models is a process of
scientific discovery or inquiry. The National Research
Council in 1996 defined inquiry as a multifaceted
activity involving making observations, posing questions, examining various sources of information, planning investigations, reviewing what is known, using
tools to gather and interpret data, proposing explanations and predictions, and communicating findings;
inquiry requires the identification of explicit assumptions, the use of critical and logical thinking, and the
creation and consideration of alternative explanations
(NRC, 1996). This description lists a large set of processes that constitute inquiry learning. De Jong
(2006b) presented a number of processes that encompass the processes mentioned in the NRC definition:
orientation, hypothesis generation, experimentation
(i.e., experiment design, prediction, data interpretation), drawing a conclusion, and making an evaluation.
In orientation, the general research issue is determined
and the student makes a broad analysis of the domain;
in hypothesis generation, a specific statement (or a set
of statements, for example, in the form of a model)
about the domain is chosen for consideration; in experimentation, a test to investigate the validity of this
hypothesis or model is designed and performed, predictions are made, and outcomes of the experiments
are interpreted; in conclusion, a conclusion about the
validity of the hypothesis is drawn or new ideas are
formed; and, finally, in evaluation, a reflection on the
learning process and the domain knowledge acquired
is made. A central and developing product in the
inquiry learning process is the student’s mental model
of the domain (White and Frederiksen, 1998).
Figure 36.1 presents a diagrammatic attempt to
depict the development of a student’s mental model
throughout the inquiry process. In this figure, the mental model in orientation has loose ends, relations are
not yet defined, and variables are missing. When a
student generates a hypothesis, a relation between variables is selected, and an idea (still uncertain) about
this relation is formed. Of course, the ideas that are
formed in the hypothesis phase are not necessarily
constrained to single hypotheses but may refer to
broader parts of a model (see the next section). In
experimentation, a move to more manipulable variables is made. When designing an experiment, the
conceptual variables are operationalized in variables
that can be manipulated. In prediction, the hypothesis
that was stated is translated into observable variables.
In data interpretation, the outcomes of the experiment
are known, and an understanding of the data must be
reached. Stating a conclusion involves returning to a
more theoretical level, in which the data that were
interpreted are related to the hypothesis or mental
model under consideration and decisions on the validity of the original ideas are made.
In Figure 36.1, the process of experimentation is
at the level of manipulable (operationalized) variables,
whereas the domain view in the processes of orientation, hypotheses, and conclusion is at the level of theory. Ideally, a student’s view of the domain should go
from orientation through hypotheses to conclusion,
resulting in a correct and complete mental model of the domain. In practice, however, after going through these learning processes a student’s mental model will often still have some open ends (an orientation character), unresolved issues (a hypothesis aspect), and some firm ideas (conclusions, but still some of these may be faulty). This emphasizes the iterative character of the inquiry learning process.

Figure 36.1 An overview of the development of the student’s mental model across the inquiry processes of Orientation, Hypotheses, Experimentation, and Conclusion (ovals are variables, lines represent relations). (From de Jong, T., in Dealing with Complexity in Learning Environments, Elen, J. and Clark, R.E., Eds., Elsevier, London, 2006, pp. 107–128. With permission.)
The processes mentioned above directly yield
knowledge (as is reflected in the developing view of
the domain). De Jong and Njoo (1992) refer to these
processes as transformative inquiry processes, reflecting the transformation of information into knowledge.
Because inquiry learning is a complex endeavor with
a number of activities and iterations, de Jong and Njoo
(1992) added the concept of regulation of learning,
comprised of processes aimed at planning and monitoring the learning process. Together, transformative
and regulative processes form the main inquiry learning processes (de Jong and van Joolingen, 1998).
Evaluation takes a special place, located somewhere
between transformative and regulative processes. In
evaluation (or reflection), students examine the inquiry
process and its results and try to take a step back to
learn from their experiences. This reflection may concern the inquiry process itself (successful and less successful actions) as well as the domain under investigation (e.g., general domain characteristics). As is the case
with all inquiry processes, evaluation activities can
occur at any point in the cycle, not just during evaluation. Evaluation activities can influence the inquiry process itself and thus have a regulative character.
Smaller scale evaluations of inquiry learning often
concentrate on assessing the effects of different types
of scaffolding. This work shows that the effectiveness
of inquiry learning can be greatly improved by offering
students adequate scaffolds (de Jong, 2006a,b). Large-scale evaluations of technology-based inquiry environments that compare them to more traditional modes of
instruction are not very frequent, but a few of these
large-scale evaluations do exist. Smithtown, a supportive simulation environment in the area of economics,
was evaluated in a pilot study with 30 students and in
a large-scale evaluation with a total of 530 students.
Results showed that after 5 hours of working with
Smithtown, students reached a degree of micro-economics understanding that would have required
approximately 11 hours of traditional teaching (Shute
and Glaser, 1990). The Jasper project offers another
classic example of a large-scale evaluation. The
domain in this project is mathematics, and students
learn in real contexts in an inquiry type of setting.
Although Jasper is not a pure inquiry environment, the
learning has many characteristics of inquiry, as students collect and try to interpret data. Evaluation data
involving over 700 students showed that students who
followed the Jasper series outperformed a control
group that received traditional training on a series of
assessments (Cognition and Technology Group at
Vanderbilt, 1992).
White and Frederiksen (1998) described the ThinkerTools Inquiry Curriculum, a simulation-based learning environment on the physics topic of force and
motion. The ThinkerTools software guides students
through a number of inquiry stages that include experimenting with the simulation, constructing physics
laws, critiquing each other’s laws, and reflecting on
the inquiry process. ThinkerTools was implemented in
12 classes with approximately 30 students each. Students worked daily with ThinkerTools over a period
of a little more than 10 weeks. A comparison of the
ThinkerTools students with students in a traditional
curriculum showed that the ThinkerTools students performed significantly better on a (short) conceptual test
(68% vs. 50% correct). Even the ThinkerTools students who scored low on a test of general basic skills had a higher average conceptual physics score (58%) than the students who followed the traditional curriculum.
Hickey et al. (2003) assessed the effects of the
introduction of a simulation-based inquiry environment (GenScope) on the biology topic of genetics. In
GenScope students can manipulate genetic information at different levels: DNA, chromosomes, cells,
organisms, pedigrees, and populations. Students, for
example, can change the chromosomes (e.g., for presence or absence of wings or horns) of virtual dragons,
breed these dragons, and observe the effects on the
genotype and phenotype of the offspring. A large-scale
evaluation was conducted involving 31 classes (23
experimental, 8 comparison) taught by 13 teachers and
a few hundred students in total. Overall, the evaluation
results showed better performance by the GenScope
classes compared to the traditional classes on tests
measuring genetic reasoning. A follow-up study with
two experimental classes and one comparison class
also showed significantly higher gains for the two
experimental classes on a reasoning test, with the higher gain occurring in the experimental class that was offered more investigation exercises.
Another recent example is the River City project.
The River City project software is intended to teach
biology topics and inquiry skills. It is a virtual environment in which students move around with avatars.
River City contains simulations, databases, and multimedia information. Students have to perform a full
investigation following all of the inquiry processes
listed above and end their investigation by writing a
letter to the mayor of the city. Preliminary results of a
large evaluation (involving around 2000 students) of
the River City project showed that, compared to a control group that followed a paper-based inquiry
curriculum, the technology-based approach led to a
higher increase in biology knowledge (32 to 34% vs.
17%) and better achievement on tests for inquiry skills
(Ketelhut et al., 2006). Linn et al. (2006) evaluated
modules created in the Technology-Enhanced Learning
in Science (TELS) center. These modules are inquiry
based and contain simulations (e.g., on the functioning
of airbags). Over a sample of 4328 students and 6
different TELS modules, an overall effect size of 0.32
in favor of the TELS subjects over students following
a traditional course was observed on items that measured how well students’ knowledge was integrated.
LEARNING BY CREATING COMPUTER MODELS
Apart from observing simulations based on formal
models, students can also learn from constructing
these models themselves (Alessi, 2000). This approach
is in line with the basic ideas behind constructionism
(Harel and Papert, 1991; Kafai, 2006; Kafai and
Resnick, 1996), of which the main focus is “knowledge
construction that takes place when students are
engaged in building objects” (Kafai and Resnick,
1996, p. 2). Objects that are constructed can be physical objects and artifacts (Crismond, 2001), drawings
(Hmelo et al., 2000), concept maps (Novak, 1990),
computer programs (Mayer and Fay, 1987), instruction
(Vreman-de Olde and de Jong, 2006), and more. In
this section, we focus on constructing executable models, the same kind of models that are explored in the
situations described in the previous section; instead of
exploring these models, the students’ task becomes
one of constructing them.
Science has always used models to understand a
domain. Simulation as a tool to predict a model’s
behavior was one of the first applications of computers
as they became available shortly after World War II.
The use of constructing models in the process of learning science goes back to the early 1980s, when Jon
Ogborn created the Dynamical Modelling System
(DMS) (Ogborn and Wong, 1984). In this system, students could create a model of a dynamical system by
entering equations that described an initial state and
the change of that state over time. Even before these
attempts, Jay Forrester had developed his ideas on
system dynamics, a way of representing processes in
business organizations, which soon acquired a wider
use as a versatile tool to model any kind of system
(Forrester, 1961). An example of a system dynamics
model is provided in Figure 36.2. This model uses the
system dynamics notation introduced by Forrester
(1961). The water level is represented by a stock (rectangle) and the outflow by a flow (the thick arrow pointing to the cloud). The thin arrows indicate relations between the variables.
At first, system dynamics models were created as
drawings that were used as a tool for reasoning. Later
these models were used as a guideline to create computer
Figure 36.2 System dynamics model of a leaking water bucket (variables: Water_Level, Leak_Size, Outflow_Rate).
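The stock-and-flow structure of Figure 36.2 can be executed directly. The sketch below is our own reading of the figure; in particular, the assumption that the outflow rate is proportional to both the leak size and the current water level is ours, and the rate equation used in any particular curriculum may differ.

```python
# Executable sketch of the leaking bucket in Figure 36.2 (our reading of the figure;
# the rate equation outflow_rate = leak_size * water_level is an assumption).

def simulate_bucket(water_level: float, leak_size: float,
                    dt: float = 0.1, t_end: float = 20.0) -> list[float]:
    levels = []
    t = 0.0
    while t <= t_end:
        levels.append(water_level)
        outflow_rate = leak_size * water_level                    # flow depends on the stock and on Leak_Size
        water_level = max(water_level - outflow_rate * dt, 0.0)   # the stock integrates the flow
        t += dt
    return levels

levels = simulate_bucket(water_level=1.0, leak_size=0.3)
print(f"water level after 20 time units: {levels[-1]:.3f}")
```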
programs, and eventually systems such as STELLA
(Steed, 1992) were introduced that allowed direct simulation of system dynamics models. The educational
value of these systems was immediately recognized,
and other systems following the same basic system
dynamics ideas such as Model-It (Jackson et al., 1996)
and Co-Lab (van Joolingen et al., 2005) were created.
These newer systems improved on user-friendliness by
offering alternative ways of specifying the model, but
they adhere to the same basic principle: The student
specifies a model drawn as a graphical structure that
can be executed (simulated), yielding outcomes that
are the consequences of the ideas expressed in the
model. Through all of these developments we see an
evolution toward tools that make it easier for students
to create formal models.
A modeling activity starts from a scientific problem. Students generally find it very difficult to generate
an adequate research question, and they often need
help in arriving at a good research question (White and
Frederiksen, 1998); therefore, students are often provided with an assignment that asks them to model a
certain phenomenon (van Joolingen et al., 2005;
White, 1993). The overall goal of a student is to create
a model in such a way that the behavior of the model
mimics the behavior of a theoretical model or the
behavior of a real phenomenon. Hestenes (1987)
described a (formal) model as a mathematical entity
that consists of named objects and agents, variables
to define the properties of these objects, equations that
describe the development of variable values over time,
and an interpretation that links the modeling concepts
to objects in the real world. This characterizes the
model as a computational (runnable, executable) entity
that can be used for simulation. More recently, the
modeling literature has also included qualitative models in which the development of variable values is
defined in terms of (qualitative) relations rather than
equations (Dimitracopoulou et al., 1999; Jackson et
al., 1996; Papaevripidou et al., 2007; Schwarz and
White, 2005; van Joolingen et al., 2005), but this does
not essentially change Hestenes’ conceptualization.
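Hestenes’ characterization can also be read as a small data structure. The sketch below is our own illustration (the field names and the falling-ball example are ours, not Hestenes’ notation): a model bundles named objects, variables, rules for how the variable values develop over time, and an interpretation linking them to the real world; a qualitative model would replace the rules by signed relations.

```python
# Sketch of the components of a model in the sense of Hestenes (1987); our own
# illustration, with field names and the falling-ball example chosen by us.

DT = 0.1   # time step in seconds

falling_ball_model = {
    "objects": ["ball", "earth"],                        # named objects and agents
    "variables": {"height": 10.0, "velocity": 0.0},      # properties of the objects
    "equations": {                                       # development of values over time
        "velocity": lambda v, h: v - 9.81 * DT,          # dv = -g * dt
        "height":   lambda v, h: h + v * DT,             # dh = v * dt
    },
    "interpretation": "a ball dropped from 10 m near the earth's surface",
}

def step(model):
    """Apply the update rules once, advancing the state by one time step DT."""
    v = model["variables"]["velocity"]
    h = model["variables"]["height"]
    model["variables"]["velocity"] = model["equations"]["velocity"](v, h)
    model["variables"]["height"] = model["equations"]["height"](v, h)

for _ in range(10):
    step(falling_ball_model)
print(falling_ball_model["variables"])

# A qualitative model would replace the equations by signed relations such as
# ("leak_size", "outflow_rate", "+"), read as "if leak size increases, outflow increases".
```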
Hestenes’ (1987) conceptualization suggests that
to construct a model students need to iterate through
three types of processes: orientation, in which the
objects and variables are identified and defined; specification, in which the relations and equations between
variables are specified; and evaluation, in which the
outcomes of the model are interpreted in terms of the
real world and matched to expectations. In orientation,
the student identifies objects and variables and makes
an initial sketch of the model; in specification, the
relations between the variables are specified in a qualitative or quantitative form that allows computation,
and additional variables may be introduced. In evaluation, the model structure is assessed, model output is
evaluated against outcome expectations, and the model
output is compared with observations.
Initial evidence suggests that learning by modeling
has positive effects on the understanding of dynamic
systems. Kurtz dos Santos et al. (1997) reported transfer from a modeled domain to a new one. Schecker
(1998) found that after a mechanics course using
STELLA, five out of ten pairs of students were able
to construct a qualitative causal reasoning chain on a
new subject. Mandinach (1988) found that modeling
led to better conceptual understanding of the content
and the solution and an increase in problem-solving
abilities. Mandinach and Cline (1996) noted a marked
improvement in students’ inquiry skills as an effect of
modeling. Schwarz and White (2005) found that students who had received a modeling facility as part of
a ThinkerTools (White and Frederiksen, 1998) environment improved on an inquiry post-test and on far
transfer problems. Papaevripidou et al. (2007) found
that students who used a modeling approach with a
modeling tool acquired better modeling skills than students who used a more traditional worksheet and were
also able to model the domain in an increasingly
sophisticated way.
Apart from these first results, evidence to support the claimed benefits of learning by modeling is still scarce,
especially when it comes to experimental studies (Löhner, 2005). Research is limited to qualitative studies
that provide mainly anecdotal evidence, often with
only two (Resnick, 1994; Wilensky and Reisman,
2006) or even one (Buckley, 2000; Ploger and Lay,
1992) subject. Spector (2001) attributes this lack of
focus on quantitative evidence to the fact that most
researchers in this field believe that the standard measures of learning outcomes are not adequate for a serious evaluation of learning in these environments.
Although this may be true, it indicates a mission for
the field to try to implement instruments that actually
assess the knowledge that is acquired through learning
by modeling.
An instrument to measure system dynamics thinking, operationalized as the ability to interpret data in
terms of a model and to distinguish a value and its
rate of change, has been developed by Booth
Sweeney and Sterman (2000). The focus of this
instrument is limited to some basic skills in system-dynamics-based modeling. Van Borkulo and van Joolingen (2006) have developed an instrument that aims to cover
the complete range of knowledge types addressed by
learning based on the creation of models. In their
overview, the different learning outcomes are operationalized into four categories of test items, related
to the kind of reasoning process for which the knowledge is used. Reproducing factual domain knowledge
is the first category, relating to the idea that in modeling one acquires knowledge about the domain.
Model-based reasoning appears as applying a model
to given situations, more specifically as predicting
and explaining model behavior, by performing a mental simulation of the model. Learning about modeling
is reflected in two categories: evaluating a model—
that is to say, determining its correctness or suitability
for a given goal—and creating a model or parts of it.
These four categories can be evaluated at two levels:
the node level of individual relations in a model and
the structure level, in which the effects of multiple
interacting relations are at stake. Moreover, the categories of apply, evaluate, and create can be considered at both a domain-general and a domain-specific
level. Initial tests with this instrument show that it
can detect various aspects of model-based reasoning.
Such instruments should eventually lead to systematically collected evidence of the benefits of learning
by modeling as well as more detailed knowledge on
supporting modeling processes.
MODEL-BASED INQUIRY LEARNING
Much of the modeling literature sees modeling as a
stand-alone activity. In most of the activities described,
the modeling process takes place in the absence of data
that are to be modeled. As such, modeling remains a
purely theoretical activity. Löhner et al. (2003) as well
as Schwarz and White (2005) presented work in which
models are used to describe data generated from a
given simulation. Modeling thus becomes an integrated part of the inquiry process. In this section, a
short description of a specific learning environment,
Co-Lab (van Joolingen et al., 2005), is presented. Co-Lab offers an environment in which students can work
on scientific inquiry tasks collaboratively in small
groups and in which they are offered a modeling tool.
In Co-Lab, students have the opportunity to explore
existing models, to create formal models with a dedicated modeling language based on system dynamics,
and to compare their own model outcomes with the
data generated by a given simulation or collected from
an experiment.
A typical Co-Lab task is to construct a model of
a phenomenon that is found within the environment,
either as a simulation or as a remote laboratory that
can be controlled from a distance. In one Co-Lab environment, for example, students can connect to a small
greenhouse that contains a plant, along with sensors
that measure the levels of CO2, O2, and H2O, as well
as the temperature and the intensity of the light. The
goal for this environment is to construct a model that
describes the rate of photosynthesis as a function of
the amount of available light. To accomplish this, students can use the data obtained from the sensors and
manipulate the intensity of light by repositioning a
lamp (specially made for use in greenhouses) and
determining when it should be on or off. They can thus
create graphs that yield the photosynthesis rate for
each level of lighting. Combining these results allows
them to model the photosynthesis rate as a function of
the light level.
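As an illustration of this last step, the sketch below fits a saturating curve to hypothetical (light level, photosynthesis rate) pairs of the kind students could derive from the greenhouse sensors. The data values and the particular curve form (a rectangular hyperbola) are our assumptions for illustration and are not part of the Co-Lab materials.

```python
# Hedged sketch: modeling photosynthesis rate as a function of light level.
# Data points and the saturating-curve form are illustrative assumptions only.
import numpy as np
from scipy.optimize import curve_fit

light = np.array([50.0, 100.0, 200.0, 400.0, 800.0])   # hypothetical light intensities
rate = np.array([0.8, 1.4, 2.1, 2.6, 2.9])             # hypothetical photosynthesis rates

def saturating(i, p_max, k):
    """Rectangular hyperbola: rate rises with light and levels off at p_max."""
    return p_max * i / (k + i)

(p_max, k), _ = curve_fit(saturating, light, rate, p0=[3.0, 100.0])
print(f"fitted p_max = {p_max:.2f}, half-saturation constant k = {k:.1f}")
```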
A Co-Lab environment is divided into different
buildings, with each building consisting of a number
of floors. A building represents a domain (in this case, the greenhouse effect), and a floor represents a subdomain (e.g., photosynthesis) or a specific level of difficulty, similar to the idea of model progression also
found in SimQuest (van Joolingen and de Jong, 2003)
and in earlier work by White and Frederiksen (1990).
Each floor is composed of four rooms: the hall, a lab
room, a theory room, and a meeting room. The photosynthesis scenario in this Co-Lab environment
starts in the hall, the default entry room for all Co-Lab environments. In the hall, students meet each
other and find a mission statement that explains the
goal of the floor in the form of a research problem
(e.g., creating a model that explains the photosynthesis rate); they also receive some background information they need to get started. After having read this
mission statement, they can move to the lab, in which
they find a remote connection to the greenhouse. They
can see the greenhouse through a webcam, and they
can control it and inspect the greenhouse parameters
using a dedicated interface. They can start a measurement series and plot the development of the data in
a graph. Data obtained this way can be stored as
datasets in an object repository. In the theory room,
students find a system dynamics modeling tool that
allows for both qualitative (relations such as “if A
increases then B increases”) and quantitative (equations) modeling. In the theory room, students can
inspect the datasets they have stored in the repository
(which is shared across rooms) and use these as reference for their model. This can be done by plotting
model output and observed data in one graph and
comparing the two, or by using the observed data as
an element in the model. Finally, students can plan
and monitor their work in the meeting room. They
can review important steps in the inquiry and modeling processes, such as planning experiments and
evaluating models, using a process coordinator (Manlove et al., 2006). They can make notes that record
the history of their learning process and can eventually be used as the basic ingredients for a report that
they write to conclude the activity.
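One straightforward way to make such a comparison is numerical: evaluate the model at the same time points as a stored dataset and summarize the discrepancy, for example as a root-mean-square error. The sketch below is a generic illustration of this idea; the placeholder model and data are ours and do not reflect Co-Lab’s internal implementation.

```python
# Generic sketch of comparing model output with an observed dataset
# (placeholder model and data; not Co-Lab's actual implementation).
import numpy as np

observed_t = np.array([0.0, 1.0, 2.0, 3.0, 4.0, 5.0])        # time points of a stored dataset
observed_y = np.array([1.00, 0.74, 0.55, 0.41, 0.30, 0.22])  # hypothetical measurements

def model_output(t, decay_rate=0.3):
    """Placeholder student model: exponential decay with an adjustable rate."""
    return np.exp(-decay_rate * t)

predicted = model_output(observed_t)
rmse = np.sqrt(np.mean((predicted - observed_y) ** 2))
print(f"root-mean-square error between model output and data: {rmse:.3f}")
```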
Co-Lab’s main characteristic is that it combines
learning from models and learning by modeling in one
environment. These activities take place in the lab and
theory room, respectively. In doing so, the environment
offers opportunities to make the learning process more
transparent. Hypotheses become visible as models or
parts of models, their predictions can be made visible
as model output, and the validity of models can be
assessed with reference to the data collected from the
domain model present in the lab. It is also possible to
assess students’ models based on a structural comparison of the model with a reference model (Bravo et al.,
2006), indicating that the domain model may operate
not only as a source of data but also as a resource for
tutoring.
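Such a structural comparison can be sketched, in much simplified form, as comparing the sets of signed relations that make up the two models. The representation below is our simplification for illustration, not the algorithm of Bravo et al. (2006).

```python
# Simplified sketch of a structural model comparison (our illustration, not the
# algorithm of Bravo et al., 2006). Each model is reduced to a set of signed relations.

reference_model = {("light", "photosynthesis", "+"),
                   ("photosynthesis", "O2", "+"),
                   ("photosynthesis", "CO2", "-")}

student_model = {("light", "photosynthesis", "+"),
                 ("light", "O2", "+")}          # a shortcut relation absent from the reference

missing = reference_model - student_model       # relations the student has not yet modeled
extra = student_model - reference_model         # relations not present in the reference model
print("missing relations:", sorted(missing))
print("extra relations:", sorted(extra))
```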
Figure 36.3 Example of a modeling tool, showing a phenomenon to explore, background information, a workspace for modeling the phenomenon, and a graph for comparing model output with data. (Courtesy of Co-Lab.)

Figure 36.3 shows an example from the Co-Lab
learning environment. The editor displays a model in
the system dynamics formalism that is also used by
STELLA and PowerSim. The graph shows the result
of running this model (of warming of the Earth under
the influence of solar radiation). In this example, the
student has created a model of a physics topic (a black
sphere problem) and has run the model to inspect its
behavior. The model is expressed in terms of a graphic
representation linking different kinds of variables, as
well as equations or relations that detail the behavior
of the model. The results can be expressed as graphs
(as in Figure 36.3), tables, and animations.
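The kind of model shown in Figure 36.3 can be sketched as an energy-balance calculation: a black sphere absorbs a fixed solar flux and re-radiates according to the Stefan-Boltzmann law, so its temperature drifts toward an equilibrium value. The equations and the parameter values below are our reconstruction of what such a black-sphere model typically contains, not the actual Co-Lab model.

```python
# Energy-balance sketch of a warming black sphere (our reconstruction; parameter
# values are illustrative and the equations are not taken from the Co-Lab model).

SIGMA = 5.67e-8      # Stefan-Boltzmann constant, W m^-2 K^-4

def simulate_black_sphere(absorbed_flux=240.0,      # absorbed solar radiation, W m^-2
                          heat_capacity=1.0e7,      # J m^-2 K^-1, illustrative value
                          temperature=200.0,        # starting temperature, K
                          dt=3600.0, steps=24 * 365):
    """C * dT/dt = absorbed - sigma * T^4, integrated with simple Euler steps."""
    for _ in range(steps):
        emitted = SIGMA * temperature ** 4
        temperature += (absorbed_flux - emitted) * dt / heat_capacity
    return temperature

print(f"temperature after one simulated year: {simulate_black_sphere():.1f} K")
```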
Co-Lab has been evaluated in a number of experimental studies focusing on specific aspects of the
environment. Sins et al. (2007) found that learners
with task-oriented motivation engaged in more deep learning processes, such as changing and running the model with reference to prior knowledge, which in
turn led to better models. They believe that the mode
of communication between collaborators (online chat
vs. face to face) influences the modeling process.
Chatting learners used the modeling tool not only as
a place to construct the model but also as a means of
communication, resulting in many more small
changes to the model they were constructing. Manlove et al. (2006) found that providing learners with
regulative support in the form of a so-called process
coordinator led to better performance on the modeling task. This finding highlights the need for instructional support in this kind of complex learning environment.
CONCLUSIONS
In this chapter, we have discussed three modes of
learning in which (computer) models play a pivotal
role. One is learning from models in which students
gather knowledge about a model underlying a simulation through inquiry learning. The second one is learning by modeling, in which students learn by creating
models. Finally, we presented an example of a system
in which both ways of learning are combined, yielding
an integrated process of inquiry.
Learning from models and learning by modeling
share a number of characteristics, but there are also
differences. Both ways of learning generate knowledge
of the domain that is involved in the model (e.g., a
physics topic such as motion). Penner (2001) asserts
that the main difference between learning from models
in simulation-based environments and learning by modeling is that in the first case the underlying model stays
hidden to the students (they have no direct access to this
model), whereas in learning by modeling the exact characteristics of the model are central. As a consequence,
a more intuitive type of knowledge is more likely to
evolve in learning from models (Swaak et al., 1998),
whereas in learning by modeling more explicit conceptual knowledge is likely to be acquired (White and Frederiksen, 2005). In both approaches, more general, process-directed knowledge is supposed to be acquired.
Löhner (2005), for example, identified learning about
modeling and the modeling process as an important
learning outcome of learning by modeling. Learning
about modeling is seen as important because science
and technology have become increasingly important in
society; reasoning with models, including model construction as well as awareness of the limitations of scientific models, is therefore seen as an important part of
the science curriculum (Halloun, 1996). From modeling-based curricula, students should improve their modeling skills—that is, show effective modeling processes
and also obtain a better understanding of the epistemology of modeling (Hogan, 1999; Hogan and Thomas,
2001). Model-based reasoning skills reflect the ability
to use a model instrumentally to predict or explain
behavior that can be observed in a modeled system. This
means, for example, being able to predict the development of the temperature of the atmosphere under the
influence of an increasing CO2 concentration from a
model of climate change (given or self-constructed).
This requires mental simulation of the model—that is,
reasoning from the relations given to projected values
of variables or in the reverse direction, from observed
values of variables to relations that explain these observations. Knowledge about performing sound scientific
investigations is acquired in inquiry learning. This
includes more general skills such as knowing how to
follow an inquiry cycle (White and Frederiksen, 2005)
as well as more specific knowledge of experimentation
heuristics (Veermans et al., 2006) or strategies on how
to cope with anomalous data (Lin, 2007). For a more
complete overview, see Zachos et al. (2000). These
inquiry skills are seen as important for students to
become self-directed researchers.
The learning processes involved also show similarities. The processes of scientific inquiry as we have
identified them (orientation, hypothesis generation,
experimentation, and conclusion) strongly resemble
the modeling processes (orientation, specification, and
evaluation). There are, however, two basic differences.
First, a hypothesis (or set of hypotheses) as present in
learning from models does not have to form a runnable
model. Second, in learning by modeling experimentation is not necessary to gather data for creating a
model; instead, students can gather their information
from many sources and use this as input for creating
their model. One of the assumptions underlying the
combination of learning from models and learning by
modeling (as in the Co-Lab environment) is that both
approaches can reinforce each other. Evidence for this
claim can be found in Schwarz and White (2005). They
found that students who received a modeling facility
in ThinkerTools also improved on a test of inquiry
skills. A comparison of the modeling-enhanced
ThinkerTools curriculum with a traditional ThinkerTools curriculum showed no overall differences in an
inquiry skills test except for a subscale measuring the
students’ ability to formulate conclusions. A correlational analysis of students who followed the ThinkerTools curriculum with a modeling facility showed that
at the pretest there were no significant correlations
among knowledge of modeling, inquiry, and physics.
At the post-test, however, these three tests correlated
significantly, indicating that development in each of
these three knowledge areas is mutually reinforcing.
Whatever approach is chosen, it is clear that students cannot perform inquiry, modeling, or a combination of the two without scaffolding (Klahr and
Nigam, 2004; Mayer, 2004). In inquiry research, a
large set of cognitive tools for inquiry has now been
developed (for recent overviews, see de Jong, 2006a,b;
Linn et al., 2004; Quintana et al., 2004). Comparable
scaffolds for modeling are only now about to emerge
(Bravo et al., 2006). Integrated environments such as
Co-Lab, in which different approaches are combined
and in which learning processes are scaffolded by collaboration and by an extensive set of cognitive tools,
may provide students with learning opportunities that
help them gain both (intuitive and formal) domain and
general process knowledge.
REFERENCES
Alessi, S. M. (2000). Building versus using simulations. In Integrated and Holistic Perspectives on Learning, Instruction,
and Technology, edited by J. M. Spector and T. M. Anderson,
pp. 175–196. Dordrecht: Kluwer.*
American Association for the Advancement of Science
(AAAS). (1989). Science for All Americans. New York:
Oxford University Press.*
Booth Sweeney, L. and Sterman, J. D. (2000). Bathtub dynamics: initial results of a systems thinking inventory. Syst.
Dynam. Rev., 16, 249–286.
Bravo, C., van Joolingen, W. R., and de Jong, T. (2006). Modeling and simulation in inquiry learning: checking solutions
and giving intelligent advice. Simul. Trans. Soc. Modeling
Simul. Int., 82(11), 769–784.
Brown, J. S., Burton, R. R., and de Kleer, J. (1982). Pedagogical,
natural language and knowledge engineering techniques in
Sophie I, II, and III. In Intelligent Tutoring Systems, edited
by D. Sleeman and J. S. Brown, pp. 227–282. London:
Academic Press.*
Buckley, B. C. (2000). Interactive multimedia and model-based
learning in biology. Int. J. Sci. Educ., 22, 895–935.
Chi, M. T. H. (1992). Conceptual change within and across
ontological categories: examples from learning and discovery in science. In Cognitive Models of Science, Vol. 15,
edited by R. N. Giere, pp. 129–186. Minneapolis, MN: University of Minnesota Press.*
Clement, J. (2000). Model based learning as a key research area
for science education. Int. J. Sci. Educ., 22, 1041–1053.
Cognition and Technology Group at Vanderbilt (CTGV).
(1992). The Jasper series as an example of anchored instruction: theory, program, description, and assessment data.
Educ. Psychol., 27, 291–315.*
Crismond, D. (2001). Learning and using science ideas when
doing investigate-and-redesign tasks: a study of naive, novice, and expert designers doing constrained and scaffolded
design work. J. Res. Sci. Teaching, 38(7), 791–820.
de Jong, T. (1991). Learning and instruction with computer
simulations. Educ. Comput., 6, 217–229.*
de Jong, T. (2006a). Computer simulations: technological
advances in inquiry learning. Science, 312, 532–533.*
de Jong, T. (2006b). Scaffolds for computer simulation based
scientific discovery learning. In Dealing with Complexity in
Learning Environments, edited by J. Elen and R. E. Clark,
pp. 107–128. London: Elsevier.
de Jong, T. and Njoo, M. (1992). Learning and instruction with
computer simulations: learning processes involved. In Computer-Based Learning Environments and Problem Solving,
edited by E. de Corte, M. Linn, H. Mandl, and L. Verschaffel,
pp. 411–429. Heidelberg: Springer-Verlag.
de Jong, T. and van Joolingen, W. R. (1998). Scientific discovery
learning with computer simulations of conceptual domains.
Rev. Educ. Res., 68, 179–202.
Dimitracopoulou, A., Komis, V., Apostolopoulos, P., and Pollitis, P. (1999). Design Principles of a New Modelling Environment for Young Students Supporting Various Types of
Reasoning and Interdisciplinary Approaches. Paper presented at the 9th International Conference on Artificial Intelligence in Education: Open Learning Environments—New
Computational Technologies to Support Learning, Exploration and Collaboration, July 19–23, Le Mans, France.
Forrester, J. W. (1961). Industrial Dynamics. Waltham, MA:
Pegasus Communications.*
Gentner, D. and Stevens, A. L., Eds. (1983). Mental Models.
Hillsdale, NJ: Lawrence Erlbaum Associates.
Gobert, J. D. (2000). A typology of causal models for plate
tectonics: inferential power and barriers to understanding.
Int. J. Sci. Educ., 22, 937–977.
Halloun, I. (1996). Schematic modeling for meaningful learning
of physics. J. Res. Sci. Teaching, 33, 1019–1041.
Harel, I. and Papert, S. (1991). Constructionism. Norwood, NJ:
Ablex.
Hestenes, D. (1987). Towards a modeling theory of physics
instruction. Am. J. Phys., 55, 440–454.
Hickey, D. T., Kindfield, A. C. H., Horwitz, P., and Christie, M.
A. (2003). Integrating curriculum, instruction, assessment,
and evaluation in a technology-supported genetics environment. Am. Educ. Res. J., 40, 495–538.
Hmelo, C. E., Holton, D. L., and Kolodner, J. L. (2000). Designing to learn about complex systems. J. Learn. Sci., 9(3),
247–298.
Hogan, K. (1999). Relating students’ personal frameworks for
science learning to their cognition in collaborative contexts.
Sci. Educ., 83, 1–32.
Hogan, K. and Thomas, D. (2001). Cognitive comparisons of
students’ systems modeling in ecology. J. Sci. Educ. Technol., 10, 319–344.
Hollan, J. D., Hutchins, E. L., and Weitzman, L. (1984).
STEAMER: an interactive inspectable simulation-based
training system. AI Mag., 5, 15–27.
Jackson, S., Stratford, S. J., Krajcik, J., and Soloway, E. (1996).
Making dynamic modeling accessible to pre-college science
students. Interact. Learn. Environ., 4, 233–257.
Kafai, Y. B. (2006). Constructionism. In The Cambridge Handbook of the Learning Sciences, edited by R. K. Sawyer, pp.
35–47. Cambridge, U.K.: Cambridge University Press.
Kafai, Y. B. and Resnick, M., Eds. (1996). Constructionism in
Practice: Designing, Thinking, and Learning in a Digital
World. Mahwah, NJ: Lawrence Erlbaum Associates.
Ketelhut, D. J., Dede, C., Clarke, J., and Nelson, B. (2006). A
Multi-User Virtual Environment for Building Higher Order
Inquiry Skills in Science. Paper presented at the Annual
Meeting of the American Educational Research Association,
April 8–12, San Francisco.
Klahr, D. and Nigam, M. (2004). The equivalence of learning
paths in early science instruction: effects of direct instruction
and discovery learning. Psychol. Sci., 15, 661–668.*
Kurland, L. and Tenney, Y. (1988). Issues in developing an
intelligent tutor for a real-world domain: training in radar
mechanics. In Intelligent Tutoring Systems: Lessons
Learned, edited by J. Psotka, L. D. Massey, and S. Mutter,
pp. 59–85. Hillsdale, NJ: Lawrence Erlbaum Associates.*
Kurtz dos Santos, A., Thielo, M. R., and Kleer, A. A. (1997).
Students modelling environmental issues. J. Comput. Assist.
Learn., 13, 35–47.
Lehrer, R. and Schauble, L. (2006). Cultivating model-based
reasoning in science education. In The Cambridge Handbook of the Learning Sciences, edited by R. K. Sawyer, pp.
371–389. Cambridge, U.K.: Cambridge University Press.
Lin, J.-Y. (2007). Responses to anomalous data obtained from
repeatable experiments in the laboratory. J. Res. Sci. Teaching, 44(3), 506–528.
Linn, M. C., Bell, P., and Davis, E. A. (2004). Specific design
principles: elaborating the scaffolded knowledge integration
framework. In Internet Environments for Science Education,
edited by M. Linn, E. A. Davis, and P. Bell, pp. 315–341.
Mahwah, NJ: Lawrence Erlbaum Associates.
Linn, M. C., Lee, H.-S., Tinker, R., Husic, F., and Chiu, J. L.
(2006). Teaching and assessing knowledge integration in
science. Science, 313, 1049–1050.
Löhner, S. (2005). Computer Based Modelling Tasks: The Role
of External Representation. Amsterdam: University of
Amsterdam.
Löhner, S., van Joolingen, W. R., and Savelsbergh, E. R. (2003).
The effect of external representation on constructing computer
models of complex phenomena. Instruct. Sci., 31, 395–418.
Löhner, S., van Joolingen, W. R., Savelsbergh, E. R., and van
Hout-Wolters, B. H. A. M. (2005). Students’ reasoning during modeling in an inquiry learning environment. Comput.
Hum. Behav., 21, 441–461.
Mandinach, E. B. (1988). The Cognitive Effects of Simulation-Modeling Software and Systems Thinking on Learning and
Achievement. Paper presented at the Annual Meeting of the
American Educational Research Association, April, New
Orleans.
Mandinach, E. B. and Cline, H. F. (1996). Classroom dynamics:
the impact of a technology-based curriculum innovation on
teaching and learning. J. Educ. Comput. Res., 14, 83–102.*
Manlove, S., Lazonder, A. W., and de Jong, T. (2006). Regulative support for collaborative scientific inquiry learning. J.
Comput. Assist. Learn., 22, 87–98.
Mayer, R. E. (2004). Should there be a three-strikes rule against
pure discovery learning? Am. Psychol., 59, 14–19.*
Mayer, R. E. and Fay, A. L. (1987). A chain of cognitive changes
with learning to program in Logo. J. Educ. Psychol., 79(3),
269–279.
National Research Council (NRC). (1996). National Science
Education Standards. Washington, D.C.: National Academies Press.
National Science Foundation (NSF). (2000). An introduction to
inquiry. In Inquiry: Thoughts, Views and Strategies for the
K–5 Classroom, Vol. 2, pp. 1–5. Washington, D.C.: National
Science Foundation.*
Novak, J. D. (1990). Concept mapping: a useful tool for science
education. J. Res. Sci. Teaching, 27, 937–949.*
Ogborn, J. and Wong, D. (1984). A microcomputer dynamical
modelling system. Phys. Educ., 19, 138–142.
Papaevripidou, M., Constantinou, C. P., and Zacharia, Z. C.
(2007). Modelling complex marine ecosystems: an investigation of two teaching approaches with fifth graders. J.
Comput. Assist. Learn., 23(2), 145–157.
Penner, D. E. (2001). Cognition, computers, and synthetic science: Building knowledge and meaning through modelling.
Rev. Res. Educ., 25, 1–37.*
Ploger, D. and Lay, E. (1992). The structure of programs and
molecules. J. Educ. Comput. Res., 8, 347–364.
Quintana, C., Reiser, B. J., Davis, E. A., Krajcik, J., Fretz, E.,
Duncan, R. G. et al. (2004). A scaffolding design framework
for software to support science inquiry. J. Learn. Sci., 13,
337–387.
Resnick, M. (1994). Turtles, Termites, and Traffic Jams. Cambridge, MA: MIT Press.*
Scanlon, E. and Smith, R. B. (1988). A rational reconstruction
of a bubble-chamber simulation using the alternate reality
kit. Comput. Educ., 12, 199–207.
Schecker, H. P. (1998). Physik—Modellieren, Grafikorientierte
Modelbildungssysteme im Physikunterricht. Stuttgart, Germany: Ernst Klett Verlag GmbH.
Schwarz, C. V. and White, B. Y. (2005). Metamodeling knowledge: developing students’ understanding of scientific modeling. Cogn. Instruct., 23, 165–205.
Shute, V. J. and Glaser, R. (1990). A large-scale evaluation of
an intelligent discovery world: Smithtown. Interact. Learn.
Environ., 1, 51–77.
Sins, P. H. M., van Joolingen, W. R., Savelsbergh, E., and van
Hout-Wolters, B. H. A. M. (2007). Motivation and performance within a collaborative computer-based modeling task:
relations between students’ achievement goal orientation,
self-efficacy, cognitive processing and achievement. Contemp. Educ. Psychol. (doi:10.1016/j.cedpsych.2006.12.004).
Smith, R. B. (1986). The Alternate Reality Kit: An Animated
Environment for Creating Interactive Simulations. Paper
presented at the IEEE Computer Society Workshop on Visual
Languages, June 25–27, Dallas, TX.
Snyder, J. L. (2000). An investigation of the knowledge structures of experts, intermediates and novices in physics. Int.
J. Sci. Educ., 22, 979–992.
Spector, J. M. (2001). Tools and principles for the design of
collaborative learning environments for complex domains.
J. Struct. Learn. Intell. Syst., 14, 484–510.*
Steed, M. (1992). STELLA, a simulation construction kit: cognitive process and educational implications. J. Comput.
Math. Sci. Teaching, 11(1), 39–52.
Swaak, J., van Joolingen, W. R., and de Jong, T. (1998). Supporting simulation-based learning: the effects of model progression and assignments on definitional and intuitive
knowledge. Learn. Instruct., 8, 235–253.*
Towne, D. M., Munro, A., Pizzini, Q., Surmon, D., Coller, L.,
and Wogulis, J. (1990). Model-building tools for simulation-based training. Interact. Learn. Environ., 1, 33–50.*
Van Borkulo, S. P. and van Joolingen, W. R. (2006). A Framework
for the Assessment of Modeling Knowledge. Poster presented
at the GIREP Conference, August 20–25, Amsterdam.
van Joolingen, W. R. and de Jong, T. (1991). Characteristics of
simulations for instructional settings. Educ. Comput., 6, 241–262.
van Joolingen, W. R. and de Jong, T. (2003). SimQuest: authoring educational simulations. In Authoring Tools for
Advanced Technology Educational Software: Toward Cost-Effective Production of Adaptive, Interactive, and Intelligent
Educational Software, edited by T. Murray, S. Blessing and
S. Ainsworth, pp. 1–31. Dordrecht: Kluwer.*
van Joolingen, W. R., de Jong, T., Lazonder, A. W., Savelsbergh,
E., and Manlove, S. (2005). Co-Lab: research and development of an on-line learning environment for collaborative
scientific discovery learning. Comput. Hum. Behav., 21,
671–688.
Veermans, K. H., van Joolingen, W. R., and de Jong, T. (2006).
Using heuristics to facilitate scientific discovery learning in
a simulation learning environment in a physics domain. Int.
J. Sci. Educ., 28, 341–361.*
Vreman-de Olde, C. and de Jong, T. (2006). Scaffolding the
design of assignments for a computer simulation. J. Comput.
Assist. Learn., 22, 63–74.
White, B. Y. (1993). ThinkerTools: causal models, conceptual
change, and science education. Cogn. Instruct., 10, 1–100.*
White, B. Y. and Frederiksen, J. R. (1989). Causal models as
intelligent learning environments for science and engineering education. Appl. Artif. Intell., 3, 83–106.
White, B. Y. and Frederiksen, J. R. (1990). Causal model progressions as a foundation for intelligent learning environments. Artif. Intell., 42, 99–157.
White, B. Y. and Frederiksen, J. R. (1998). Inquiry, modelling,
and metacognition: making science accessible to all students. Cogn. Instruct., 16, 3–118.*
White, B. Y. and Frederiksen, J. R. (2005). A theoretical framework and approach for fostering metacognitive development.
Educ. Psychol., 40, 211–223.
Wilensky, U. and Reisman, K. (2006). Thinking like a wolf, a
sheep, or a firefly: learning biology through constructing and
testing computational theories—an embodied modeling
approach. Cogn. Instruct., 24, 171–209.
Zachos, P., Hick, T. L., Doane, W. E. J., and Sargent, C. (2000).
Setting theoretical and empirical foundations for assessing
scientific inquiry and discovery in educational programs. J.
Res. Sci. Teaching, 37, 938–962.
_________________________________________________________
* Indicates a core reference.