
How Do Instructional Designers Use Theory? A Qualitative-Developmental
Study of the Integration of Theory and Technology
Stephen C. Yanchar, Joseph B. South, David D. Williams
Brigham Young University
Brent G. Wilson
University of Colorado—Denver
Paper presented at AECT, October 2007, Anaheim
Published in Mike Simonson (Ed.), AECT Proceedings
An array of theories and models regarding learning, instruction, and design has been developed within
the academic setting to inform instructional design practices (e.g., Driscoll, 2000; Gagné, 1977; Gibbons, 2003;
Glaser, 2000; Reigeluth, 1983, 1999; Snelbecker, 1974). Many have observed that such theories are centrally
important to instructional scholarship and practice because they provide a means of helping scholars envision new
possibilities for research and practice, generating researchable questions and testable hypotheses, organizing
diverse concepts and observations into coherent accounts, developing taxonomies, describing and/or explaining
phenomena, offering guidelines for effective practice, guiding the innovation and design of technology, and
facilitating communication within and across discourse communities (Gibbons, 2003; Reigeluth, 1997; Seels,
1997; Snelbecker, 1974; Wilson, 1997).
It might be said more generally that there is no escaping theory in the process of disciplined inquiry and
in the development of increasingly effective forms of practice. As a number of authors have observed, methods of
scientific and technological research, as well as the everyday work of practitioners in a variety of fields, are given
shape, meaning, and purpose by the theories and assumptions that undergird them. Although much of this work
has focused on the theory-laden nature of scientific methods and research practices (e.g., Burgess-Limerick,
Abernathy, & Limerick, 1994; Hesse, 1980; Yanchar & Williams, 2006), it has also been observed that
technological research rests on assumptions and values about the nature of knowledge, human functioning, and
progress (Vincenti, 1990), as do various practical applications (Bucciarelli, 1994; Schön, 1983; Slife, 1987).
Given the importance of theory to research and practice, it is not surprising that many authors within the
instructional design literature have called for a more developed theoretical base within the field (e.g., Seels, 1997).
More thought-provoking and more troubling, however, is the view that practicing designers often find formal
theory to be irrelevant, too difficult and abstract to apply, or only occasionally useful. Even leading theorists in the
field have observed that theory in general—notwithstanding its inescapability—is often not recognized as
important by practitioners. For example, Wilson observed that there is a “generalized contempt for theories and
scholarship” (1997, p. 24). And Reigeluth wondered why “many people avoid and denigrate theories” (1997, p.
42) when theories, in some form or another (e.g., formal theories, personal theories, a background of theoretical
assumptions), are used ubiquitously.
Some have commented that academic researchers have not historically produced theories that are helpful
to practicing designers. Rowland (1992) suggested that scholars in the field of instructional design “may be
holding on to traditional views that no longer represent the requirements of practice” (p. 66) and that theories are
often “impractical and unrealistic” (p. 67). Perez and Emery (1995, p. 62) concluded that much of what
instructional designers need to know is “not currently reflected in theories of instruction.” And Wedman and
Tessmer (1993), who decried what they considered to be the inflexible nature of many instructional design models,
argued that most of those models are based on a set of assumptions “which appear to be incompatible with
practice” (p. 53).
To be sure, a few theoretical models or theory-based sets of principles have been put to use by
practitioners (e.g., ISD, ADDIE, Gagné’s instructional events model) with varying degrees of success (Zemke &
Rossett, 2002), but as some have noted, practitioners tend to use these models inconsistently and selectively
include certain principles or activities while omitting others viewed by the originating theorists as important
(McDonald, 2006; Wedman & Tessmer, 1993; Zemke, 1985). In this sense, it appears that even instructional
design models with some amount of utility must often be modified or adapted (even radically) by designers to
render them applicable in context. As Wilson stated, “It's no wonder ID theories aren't more used, because they
tend to be static and abstract, not fitting the situations very well” (personal communication, November 2, 2006).
While there is clearly a sense of unease about theorizing per se and the use of theories in the field of
instructional design, there is little published research on how practicing instructional designers actually view and
use formal instructional design theories in their everyday work. Neither are there detailed analyses of the impact
of implicit theoretical assumptions on design practices. Examinations of the actual practice of instructional design
have typically focused on the ways that instructional designers spend their time (Cox & Osguthorpe, 2003; Kenny,
Zhang, Schwier, & Campbell, 2005; Roberts, Jackson, Osborne, & Somers Vine, 1994), make decisions in the
design process (Wedman & Tessmer, 1993; Zemke, 1985), and solve instructional design problems (Nelson,
Magliaro, & Sherman, 1988)—often studying novice problem solving (Kerr, 1983) or comparing novices and
expert problem solving strategies (Rowland, 1992). None of these studies provided an in-depth investigation of
practitioner perceptions and use of formal design theories.
One study that comes closer to providing an examination of designer views and uses of theory was
conducted by Christensen and Osguthorpe (2004), who, in a survey of practicing designers, found that certain
well-known theories or models (e.g., Gagné’s instructional events model, Merrill’s component display theory, and
Keller’s ARCS model) were more widely endorsed than others (e.g., layers of necessity, cognitive apprenticeship,
constructivism). But the Christensen and Osguthorpe study did not investigate issues such as the experience of
designers attempting to use theory, the disposition of designers toward formal theories of learning and instruction,
the manner in which designers use various theories for certain tasks, the basis for using certain theories or models
in certain ways, and assumptions about academic theorizing. Thus, a detailed and contextualized understanding of
practitioner views and uses of theories—formal or implicit—is still not available in the literature.
Research into instructional designers’ experiences with, and uses of, theory can aid the discipline not
only by describing how designers apply theories or struggle with those applications, but also by identifying
potential problems with formalized theories of learning, instruction, and evaluation—that is, problems with the
very manner in which formal theories are constructed and disseminated. More specifically, the understanding that
would ensue from such research can lead to better understandings of professional practice, facilitate the
application of formal theories of learning and instruction, provide a deeper understanding of the limits and
shortcomings of formal theories, foster the development of more applicable theory, clarify the role of implicit
theoretical assumptions, and inform the training of instructional designers.
We attempted to help fill this gap in the research literature by exploring how instructional designers
actually view and use theories of all sorts in their work. Our theoretical frame of reference on this project is most
closely related to that of practice theory (e.g., Schatzki, 1996, 2000) and the general hermeneutic-phenomenological tradition as developed in the human sciences (e.g., Colaizzi, 1978; Giorgi, 1975; Giorgi &
Giorgi, 2003; van Manen, 1990; Packer & Addison, 1989; Westerman, 2006). Research in this tradition aims to
provide thick descriptions of human activity in context and generate rich understandings of peoples’ experiences
in a variety of real-world practices. From this theoretical standpoint, our study had the following four aims:
1. To provide general insight into the lived experience of instructional designers.
2. To provide insight into practitioner views of, and assumptions about, formal theory (its nature, purposes,
value, limitations, weaknesses, need for improvement, etc.).
3. To provide insight into practitioner applications of formal theory (how it tends to be used, how central or
peripheral it is to instructional design in general, how helpful or unhelpful it is for specific applications,
etc.).
4. To provide insight into the relationship between theory and technology (how they might limit or facilitate
the use of one another in various ways).
Although our study is still in progress, this report reflects our methodological framework, our specific inquiry
strategy, and our tentative results based on most of the data that we planned to collect. Future steps involve
interviewing two (or possibly three) more designers, ensuring trustworthiness (e.g., member checking, peer
debriefing, etc.), and further data analysis.
Method
General Inquiry Overview
We employed a qualitative inquiry strategy—informed by phenomenological (Colaizzi, 1978; Giorgi, 1975; Giorgi & Giorgi, 2003) and ethnographic (Spradley, 1979) research traditions—that emphasized semi-structured interviews and examinations of design artifacts (actual online courses, learning modules, etc., created
by the designers we interviewed). Given our methodological framework, we did not assume that our results would
be objective reflections or mappings of participants’ inner experiences or overt behaviors, but rather interpretive
and negotiated accounts, based partly on participants’ involvement in the study (their answers to questions, their
work artifacts) and partly on our involvement (our assumptions, framing of the study, actual questions, ways of
engaging participants in dialogue, data analyses, etc.). In this regard, our results and conclusions constitute a type
of shared understanding between researchers and participants. Accordingly, we have sought to identify evocative themes and generate insights regarding our topic of inquiry, rather than map an independent reality (fact gathering) or produce lawful (or statistical) generalizations as in positivist-oriented research. Inquiry of the sort we conducted strives toward “perspicacity” (Stewart, 1998, p. 47) or “transferability” (Lincoln & Guba, 1985, p. 124); that is, the formulation of contextual interpretations that may be insightful to others working in other contexts, without assuming that those interpretations are generalizable in a lawful sense or proven as such by the originating researcher.
Participants
We attempted to represent some of the diversity of the field, including designers involved in high-volume production, custom design work, and university design work, as well as in highly technical training and corporate (“soft-skills”) training. The study to this point has included four participants, two females and two males. One
participant worked in a high-volume design organization, two worked in a custom design organization, and one
worked in a university instructional design shop. All participants had advanced degrees in instructional design.
Interview Procedure
For each participant, we conducted three semi-structured interviews. The interviews were always
conducted on separate days. The duration of the interviews varied considerably; some lasted about a half hour,
others lasted over an hour. Generally speaking, interview length varied according to participants’ willingness to
speak in-depth about their work and the manner in which they use various conceptual tools such as theories,
models, and design principles to facilitate their problem solving and decision making.
In the first interview (for each participant) we queried into participants’ training, everyday work
experiences, and practical involvement in the design process. We asked questions such as: “Why did you enter the
field?” “How did you become trained as a designer?” “How long have you been doing instructional design and in what settings?” “Tell me about a recent project you worked on. Describe the experience from beginning to end. Please help me understand why you did it that way.” The purpose of this interview was to gain a general understanding of each designer’s lived work experience and a sense of the context in which they work.
In the second interview we asked specific questions about the participants’ uses and views of conceptual
tools such as theories, models, and principles. We invited participants to discuss a specific project in this regard,
by making the following request: “Tell me about this course, including why it was made, audience, other
stakeholders, situational constraints, etc. How indicative is this of your work as a designer?” We then tried to
connect this project with the participant’s practical involvement with theory, by asking questions such as: “Tell
me why you designed the course this way. What procedures, processes, strategies, theories, principles, etc. did
you use? Why this combination of features? What guided your decision making?” Where possible, we tried to see how participants applied a given theory or principle by asking them to point out evidence of its use as we examined the artifact(s) together. To further investigate participants’ views and uses of theory, we asked questions
such as, “What do you think of theory in general?” “What guides the selection of theories for certain tasks?”
“Could formal theory be improved in some way?” “Does technology ever constrain how you design or use
theory?” “Could your training with theory have been more helpful?”
In the third interview we followed up on unresolved issues and queried into interesting themes that
emerged in the prior two interviews. We also gave participants an opportunity to make comments about the prior
interviews, add anything else they wanted to, and respond to some of the themes that we had tentatively identified
in the prior interviews (across all participants). Our aim in the third interview was to ensure that we had obtained a
rich description of participants’ work experience and views.
Data Analysis Procedure
As in many qualitative studies, our data analysis procedures (drawn from hermeneutic-phenomenological
and ethnographic research methods; Giorgi, 1975; Giorgi & Giorgi, 2003; Packer & Addison, 1989; Spradley,
1979; van Manen, 1990) principally involved the identification of themes present in the responses provided by
participants. We primarily employed the ethnographic data analysis approach outlined by Spradley (1979),
supplemented where necessary by techniques drawn from Giorgi (1975) and van Manen (1990). Spradley’s
approach entails the iterative thematization of participant experiences through “domain,” “taxonomic,”
“componential,” and “theme” analyses. These analyses produce a progressively refined view of the participants’
experiences, as (a) general concerns and issues are identified and analyzed (domain); (b) meaningful patterns and
hierarchies of concepts associated with those concerns and issues are formulated (taxonomic); (c) patterns and
concepts are compared and contrasted (componential); and (d) general themes are identified, based on the other
three levels of analysis (theme). Throughout the data collection and data analysis procedures we sought to produce
trustworthy (i.e., “valid”) results by utilizing well-accepted qualitative procedures designed to ensure that data are
treated as fairly as possible. These procedures include techniques such as prolonged engagement, persistent
observation, triangulation, peer debriefing, negative case analysis, and maintaining an audit trail (see Lincoln &
Guba, 1985, for more on trustworthiness in qualitative research). As we finalize our data, we also plan to engage
in additional member checks and progressive subjectivity checks as well as a transferability and dependability
audit.
Results
Preliminary findings tend to cluster around four themes: practitioners’ reflections on their training in
formal theory, the role of formal theory in practice, practical considerations that impact the extent of formal theory
use, and some common aspects of formal theory that tend to limit its application. Each theme is briefly discussed
below. Participants had little to say about the interplay between technology and theory in their work, so we do not
discuss that issue in this report.
Theory Training
All participants agreed that their training in formal theory was valuable. At a fundamental level, it
provided them entry into the field of instructional design as professionals and, now that they are in the field,
credibility when working with clients. All agreed that their training offered them useful frameworks with which to
think about their work and helpful approaches to use when designing instruction. However, there was a general
sense among participants that their theoretical training had lacked sufficient practical application of theoretical
knowledge. They felt that the theories as written lacked sufficient practical examples of their application, that the professors teaching the theories did not fill in these gaps sufficiently, and, most of all, that they, as students, did not have enough experience applying theory with real-world clients on real-world projects while still under the guidance of their professors. These deficiencies in their training, they felt, made it difficult to transition to professional work and, one participant suggested, are among the reasons that theory is not applied as much as it might otherwise be. She said that some of her classmates had failed to thrive as practitioners because they simply
couldn’t understand how to apply what they had learned effectively to their day-to-day work.
Role of Theory in Practice
These practitioners reported that, to produce good instructional products, an instructional designer needs a combination of innate talent, practical experience in the field, the ability to apply formal theory, and other key skills (such as writing, editing, communication, message design, multimedia design, and project management). While listing the application of theory among the factors for success, most practitioners also told us that they reference formal theory only when defending, to someone questioning their approach, a design decision they have already made. Most of the time, they said, they rely on “informed intuition” to make design decisions; that is, when considering a design decision, they tend to do what they feel would be best for the situation, trusting that their instinct appropriately reflects their theoretical training and experience. When faced with a particularly perplexing problem, these practitioners primarily turned to peers or mentors for help. Two of them also consulted non-scholarly articles from trade magazines and websites, or book chapters summarizing research findings. One referenced books or articles written by theorists, but did so only rarely.
Practical Considerations
The extent to which theory is used is shaped by the practical values within the culture of the workplace. The main pressure these instructional design practitioners experience is to make the client happy. This usually
requires delivering a reasonably competent product on time and on budget. Generally, clients fund little or no
formal evaluation of learning outcomes and may or may not share any internal assessment data they collect
afterwards with the designers. Few clients will fund the development of learner assessment beyond fairly simple
practice and recall activities embedded in the instruction. And not every client even measures whether the end
user is satisfied with the product. Clients tend to want measures confirming that a learner has been through the content, not measures of whether the learner has mastered it.
The practitioners we interviewed, however, felt a strong personal and professional obligation to advocate
for features and approaches that, in the designers’ view, would most benefit the learners’ ability to understand and
apply the content; and the instructional designers would actively try to influence business decisions to accomplish
this goal, with varying degrees of success. Unfortunately, their direct contact with the learners and their ability to conduct relevant needs analyses were almost always limited or non-existent (with a few notable exceptions). Generally, the client represented the learners’ needs, abilities, and dispositions to the instructional designers. This lack of contact limited the instructional designers’ ability to know whether their design approach was going to be successful until after the fact, and only then if data were collected on satisfaction and/or outcomes. In a few exceptional cases, a pilot of the materials was used early on to assess learner reactions; this always led to what the designers considered to be significant design improvements.
Under these kinds of working conditions, instructional designers have little incentive to consider the
theoretical soundness of their work and none of them listed theoretical soundness as an indicator of success. Most
designers felt that if they had managed to go beyond the minimal requirement of making the client happy to actually
making the client’s learners happy with their learning experience, then they had achieved all that could really be
expected of them. The actual measurement of high-fidelity, higher order learning outcomes and whether these
translated into performance improvement was usually beyond the scope of their work and influence.
Limitations of Theory
From these practitioners’ perspective, formal theories present several limitations even when designers would like to apply them. First, theories tend to be too abstract to apply easily to everyday work; practitioners feel that they personally bear the full burden of translating theory from the academy to the workplace, and they are not always able to make the necessary connections. Second, most felt
that a given theory tends to have a narrower range of application than the theorist might suggest, and consequently
were reluctant to fully buy in to the theorists’ claims. All of the interviewed practitioners saw themselves as
eclectic in their practice, drawing upon useful ideas from theoretical works they had encountered in their training
as needed, with no particular allegiance to a given theorist or approach. If something seemed like it would work in
a given situation, it was used; if the situation changed, so might their theoretical approach. Additionally,
practitioners felt that theories lacked awareness of, flexibility toward, and strategies to address common
workplace constraints such as limited time, limited budget, limited access to learners, limited opportunity
to iterate designs before delivery or revise after delivery, lack of control over the kinds of assessments
administered to learners, etc.
Discussion
Our preliminary analysis does not allow us to draw final conclusions; however, trends in the data point to
some potential insights for understanding practitioners’ daily practice and their use of theory. While expecting
some constraints in everyday practice, we were surprised at the very sparse attention given to theory by our
respondents. They associated theory strongly with their initial training and gave little indication of attempting to keep up with theoretical changes in the field or of following theoretical discussion in the literature. Rather, theory was a fairly static entity: something taught in graduate school that only partly fit the realities of everyday practice.
Participants might reference theory when challenged on a decision or practice. We also were intrigued to see how
theory was sometimes personified in the form of a professor or mentor seen as embodying a particular theoretical
stance. Such a mentor might be consulted on an intractable problem, and in this way theory would continue to
inform practice.
We offer a few suggestions for responding to these early findings. First, it may be helpful to practitioners if theoretical training incorporated more practical examples of theory use and provided more opportunities for students to work in real work settings, applying theory with the help of professors and mentors. Second, we need to better understand the “informed intuition” that appears to guide instructional decision-making in practice, in order to identify the key influences behind the processes that govern real-world design and to learn how theoretical training and expertise shape those processes. Third, theories may be more useful to practitioners if theorists provided more practical examples of their use and addressed how best to adjust a theory’s application to typical real-world constraints. Finally, current practical values in the work environments of many instructional
designers limit their ability to fully understand the needs of learners at the beginning of a project, and similar
factors prevent the collection and use of rich learning outcome data at the end of the project. Advocates of
professional practice may want to determine how best to influence the work culture of instructional design to
encourage better grounding in learner needs and outcomes. This grounding seems to be a core value of the
instructional-design profession, and could more clearly differentiate a professional standard from practices
commonly observed in the field.
References
Bucciarelli, L. L. (1994). Designing engineers. Cambridge, MA: MIT Press.
Burgess-Limerick, R., Abernathy, B., & Limerick, B. (1994). Identification of underlying assumptions is an integral
part of research: An example from motor control. Theory and Psychology, 4, 139-146.
Christensen, T. K., & Osguthorpe, R. T. (2004). How do instructional design practitioners make instructional-strategy decisions? Performance Improvement Quarterly, 17 (3), 45-65.
Colaizzi, P. F. (1978). Psychological research as the phenomenologist views it. In R. S. Valle & M. King (Eds.),
Existential-phenomenological alternatives for psychology (pp. 48-71). New York: Oxford University Press.
Cox, S., & Osguthorpe, R. T. (2003). How do instructional design professionals spend their time? TechTrends, 47
(3), 29, 45-47.
Driscoll, M. P. (2000). Psychology of learning for instruction (2nd ed.). Boston: Allyn and Bacon.
Gagné, R. M. (1977). The conditions of learning (3rd ed.). New York: Holt, Rinehart, and Winston.
Gibbons, A. S. (2003). What and how do designers design? A theory of design structure. TechTrends, 47 (5), 22-27.
Giorgi, A. (1975). An application of the phenomenological method in psychology. In A.
Giorgi, C. T. Fischer, & E. L. Murray (Eds.), Duquesne studies in phenomenological psychology: Volume 2 (pp. 82-103). Pittsburgh: Duquesne University Press.
Giorgi, A. P., & Giorgi, B. M. (2003). The descriptive phenomenological psychological method. In P. M. Camic, J.
E. Rhodes, & L. Yardley (Eds.), Qualitative research in psychology: Expanding perspectives in
methodology and design (pp. 243-273). Washington, DC: American Psychological Association.
Glaser, R. (Ed.) (2000). Advances in instructional psychology (vol. 5): Educational design and cognitive science.
Mahwah, NJ: Lawrence Erlbaum Associates.
Hesse, M. (1980). Revolutions and reconstructions in the philosophy of science. Bloomington: Indiana University
Press.
Kenny, R. F., Zhang, Z., Schwier, R. A., & Campbell, K. (2005). A review of what instructional designers do:
Questions answered and questions not asked. Canadian Journal of Learning and Technology, 31, 70-93.
Kerr, S. T. (1983). Inside the black box: Making design decisions for instruction. British Journal of Educational
Technology, 14, 45-58.
Lincoln, Y. S. & Guba, E. G. (1985). Naturalistic inquiry. Beverly Hills, CA: Sage.
McDonald, J. (2006). Technology I, II, and III: Criteria for understanding and improving
the practice of instructional technology. Unpublished doctoral dissertation, Brigham Young University,
Provo, UT.
Nelson, W. A., Magliaro, S., & Sherman, T. M. (1988). The intellectual content of
instructional design. Journal of Instructional Development, 11 (1), 29-35.
Packer, M., & Addison, R. B. (Eds.) (1989). Entering the circle: Hermeneutic
investigation in psychology. Albany: SUNY Press.
Perez, R. D., & Emery, C. D. (1995). Designer thinking: How novices and experts think
about instructional design. Performance Improvement Quarterly, 8 (3), 80-95.
Reigeluth, C. M. (Ed.) (1983). Instructional-design theories and models: An overview of
their current status. Hillsdale, NJ: Lawrence Erlbaum Associates.
Reigeluth, C. M. (1997). Instructional theory, practitioner needs, and new directions:
Some reflections. Educational Technology, 37 (1), 42-47.
Reigeluth, C. M. (Ed.) (1999). Instructional-design theories and models (vol. 2): A new
paradigm of instructional theory. Mahwah, NJ: Lawrence Erlbaum Associates.
Roberts, D. W., Jackson, J., Osborne, J., & Somers Vine, A. (1994). Attitudes and
perceptions of academic authors to the preparation of distance education materials at the University of
Tasmania. Distance Education, 15 (1), 70-93.
Rowland, G. (1992). What do designers actually do? An initial investigation of expert
practice. Performance Improvement Quarterly, 5 (2), 65-86.
Schatzki, T. R. (1996). Social practices: A Wittgensteinian approach to human activity
and the social. New York: Cambridge University Press.
Schatzki, T. R. (2000). Practice turn in contemporary theory. Florence, KY: Routledge.
Schön, D. A. (1983). The reflective practitioner: How professionals think in action. New
York: Basic Books.
Seels, B. (1997). Taxonomic issues and the development of theory in instructional
technology. Educational Technology, 37 (1), 12-21.
Slife, B. D. (1987). The perils of eclecticism as a therapeutic orientation. Theoretical and
Philosophical Psychology, 7, 94-103.
Snelbecker, G. E. (1974). Learning theory, instructional theory, and psychoeducational
design. New York: McGraw-Hill.
Spradley, J. P. (1979). The ethnographic interview. New York: Holt, Rinehart, &
Winston.
Stewart, A. (1998). The ethnographer’s method: Qualitative research methods series no. 46. Thousand
Oaks, CA: Sage.
van Manen, M. (1990). Researching lived experience: Human science for an action
sensitive pedagogy. Albany, NY: SUNY Press.
Vincenti, W. (1990). What engineers know and how they know it: Analytical studies
from aeronautical history. Baltimore: Johns Hopkins University Press.
Wedman, J., & Tessmer, M. (1993). Instructional designers’ decisions and priorities: A survey of design practice.
Performance Improvement Quarterly, 6 (2), 43-57.
Westerman, M. A. (2006). Quantitative research as an interpretive enterprise: The mostly
unacknowledged role of interpretation in research efforts and suggestions for explicitly interpretive
quantitative investigations. New Ideas in Psychology, 24(3), 189-211.
Williams, D. D. (2006). Educators as inquirers: Using qualitative inquiry. Web-based book retrieved Nov. 6, 2006,
from http://webpub.byu.net/ddw/qualitativebook/.
Wilson, B. G. (1997). Thoughts on theory in educational technology. Educational Technology, 37 (1), 22-27.
Yanchar, S. C., & Slife, B. D. (2004). Teaching critical thinking by examining assumptions. Teaching of Psychology,
31, 85-90.
Yanchar, S. C., & Williams, D. D. (2006). Reconsidering the compatibility thesis and eclecticism: Five proposed
guidelines for method use. Educational Researcher, 35(9), 3-12.
Zemke, R. (1985). The system’s approach: A nice theory but…. Training, October, 103-108.
Zemke, R., & Rossett, A. (2002). A hard look at ISD. Training, 39 (2), 26-34.