Theory as a Case of Design:
Lessons for Design from the Philosophy of Science
Allen S. Lee
Virginia Commonwealth University
[email protected]

Jeffrey V. Nickerson
Stevens Institute of Technology
[email protected]
Abstract
Theories and designs are similar. Because they
are similar, design researchers in information systems
can learn much from the philosophy of science. Neither
large data sets nor papers that generate and
statistically test theory are necessary for the
publication of contributions to the field. Moreover, the
philosophy of pragmatism provides a more solid base
than logical positivism from which to launch research
about design. Future researchers may focus on two
potentially fruitful issues: how designers evaluate a
design prior to implementation, and how the
automation of design in closed-loop systems affects our
understanding of both creativity and technology.
1. Introduction
The academic discipline of information systems
(IS) is enjoying a renaissance of activity in the area of
design. There are IS scholars who do design research
and those who reflect on it philosophically. A novel
and revealing perspective, especially for the latter
group, would be one that regards the natural sciences
themselves as forms of design, where what they design
is theory. In this light, lessons about theory from the
philosophy, history, and sociology of science
(henceforth, simply the philosophy of science) can be
read as lessons about design. In the perspective in
which theory is seen as a case of design, a principle
that holds for design in general also holds for theory,
and any principle that does not hold for theory also
does not hold for design in general.
Of the many approaches to science, the
predominant scientific approach taken by the IS
discipline in the United States and Canada has been
one that follows a model based on the natural sciences.
The natural sciences have been quite successful, but
not all models of them have been successful. It is
important to distinguish the natural sciences from any
model offered to explain them – in the same way that a
territory is different from any map of it [1]. The
particular natural-science model to which the IS
discipline has subscribed comes from logical
positivism [2]. Significantly, logical positivism is a
school of thought that the philosophy of science
created, but subsequently abandoned [3, 4]. Schön [5]
quotes the following from Bernstein [6]:
There is not a single major thesis advanced by
either nineteenth-century Positivists or the
Vienna Circle that has not been devastatingly
criticized when measured by the Positivists’
own standards for philosophical argument.
The original formulations of the analytic-synthetic dichotomy and the verifiability
criterion of meaning have been abandoned. It
has been effectively shown that the Positivists’
understanding of the natural sciences and the
formal disciplines is grossly oversimplified.
Whatever one’s final judgment about the
current disputes in the post-empiricist
philosophy and history of science … there is
rational agreement about the inadequacy of
the original Positivist understanding of
science, knowledge and meaning.
Despite this, researchers, editors, and reviewers in
North American-dominated journals and conferences
in the IS discipline have adhered to major aspects of
positivism, including its oversimplified natural-science
model of social science, and have never embraced
alternative approaches as they have embraced
positivism. All in all, this has not been beneficial to the
IS discipline and there is no reason for design research
to follow the same path. What are some lessons
offered by the philosophy of science that can benefit
the burgeoning of design research in the IS discipline?
We select three lessons that are particularly
relevant to design research. We present them as
actionable guidelines not only for design researchers,
but also for editors and reviewers who assess design
research.
2. The data lesson
Perhaps the most fundamental but also the most
significant lesson from the philosophy of science is
about the amount of data that needs to be collected.
This lesson has two parts. First, a larger sample size
may very well be helpful in the task of statistical
hypothesis testing, but for the very different task of
theory testing, a larger number of observations
supportive of a theory do not provide any better
assurance that the theory is true [7, 8]. Second, in the
task of theory testing, a single observation
contradicting a theory is logically sufficient to reject it;
this is a straightforward application of the logic of
modus tollens (examined in greater detail below, in the
discussion on pragmatism), whose origin is in ancient
Greek philosophy (not Karl Popper) and whose validity
has never been disputed. Followers of the positivist
model of science have erred by ignoring this lesson.
However, if design researchers were to heed this
lesson, they would be freed from the supposition that
large quantities of data are always necessary and
beneficial and they could focus instead on collecting
just a sufficient amount of data for a specific purpose.
The rationale for this lesson follows.
A theory, in the natural sciences, is instantiated in
an experiment.
Based on observations of the
experiment’s results, an assessment of the theory is
made. The theory is either “rejected” or “not
rejected.” However, a theory that is “not
rejected” is not the same as a theory that is
accepted as true. Furthermore, no matter how many
instances are observed in which a theory holds, they
may never serve as justification for accepting the
theory as true, lest the fallacy of affirming the
consequent be committed. And while observations
may never establish a theory to be true, the logic of
modus tollens tells us that a single observation
contradicting a theory is logically sufficient to reject it.
By analogy to natural-science research, we apply
this rationale to design research as follows. A design,
in design research, is instantiated in an artifact. Based
on observations of the artifact’s performance, an
assessment of the design is made. The design is
assessed either “to work” or “not to work.” However,
a design that is assessed “to work” is not the same as a
design that is proven. Furthermore, no matter how
many instances are observed in which a design works,
they may never serve as justification for accepting the
design as proven, lest the fallacy of affirming the
consequent be committed. And while observations
may never establish a design as proven, the logic of
modus tollens tells us that a single observation
showing that a design does not work is logically
sufficient to reject it.
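To make the asymmetry concrete, consider the following sketch in Python. It is our own illustration, not drawn from any cited source; the sorting artifact, the test inputs, and the pass criterion are all hypothetical.

def artifact_sort(items):
    """A hypothetical design instantiated as an artifact: a sorting routine."""
    return sorted(items)

def passes(test_input):
    """One observation of the artifact's performance under one circumstance."""
    output = artifact_sort(test_input)
    return output == sorted(test_input) and len(output) == len(test_input)

observations = [[3, 1, 2], [], [5, 5, 5], list(range(1000, 0, -1))]

# However many observations pass, they never prove the design; concluding
# otherwise would commit the fallacy of affirming the consequent.
all_pass = all(passes(obs) for obs in observations)

# Conversely, a single failing observation would be logically sufficient to
# reject the design (modus tollens): if the design works, every test passes;
# some test did not pass; therefore the design, as specified, does not work.
rejected = any(not passes(obs) for obs in observations)
print("all observed tests passed:", all_pass, "| design rejected:", rejected)

In this sketch every observed test happens to pass, yet a True value of all_pass warrants no proof; a single False from passes() would warrant rejection.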
Related to this rationale are two major tenets in the
philosophy of science – Hume’s problem of induction
[9] and Goodman’s new riddle of induction [10]. Both
show that statements of particulars can never verify a
general statement. (A “general statement” can be a
theory or a design. “Statements of particulars” can be
statements that describe observations made in a
controlled experiment or in what Hevner, March, Park,
and Ram call “functional testing” and “structural
testing” [11].) Hume pointed out that induction for
deriving a general statement from statements of
particulars is a procedure that has never been justified
and that, furthermore, any attempt to justify it would
only lead to an infinite regress of attempts. Hume’s
problem of induction has not been solved:
“Philosophers have responded to the problem of
induction in many different ways… None of the many
suggestions is widely accepted as correct” [12].
Goodman demonstrated that different, contradictory
general statements can always be derived from the
same statements of particulars. Furthermore, if one
reasons inductively, any increase in the number of
statements of particulars would have the dubious effect
of seemingly further strengthening each one of the
contradictory general statements.
An efficient strategy for assessing a theory or
design follows from the lesson that a single unfavorable
observation is logically sufficient to reject it and, at the
same time, favorable observations, no matter how
numerous, may never verify it as true or proven. The
strategy is for a researcher to eschew the random
collection of large quantities of data and instead assess
the performance of the design or theory under a single,
well specified set of circumstances deliberately sought
out to be challenging to it. The more challenging the
empirical assessment that a theory or design survives,
the more credible the theory or design can be. Thus,
instead of testing in favorable circumstances, the
investigator ideally seeks situations unfavorable to the
occurrence of the anticipated or predicted results, for
these situations can be more informative.
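As a purely illustrative sketch of this strategy, written by us and not by the sources cited (the candidate circumstances, their difficulty ratings, and the observed outcome are all invented), a researcher would assess the design under the single most challenging circumstance rather than under many favorable ones:

# Hypothetical circumstances, ranked by how challenging the researcher
# judges them to be for the design under assessment.
candidate_circumstances = {
    "light load, modern client": 1,
    "heavy load, modern client": 3,
    "heavy load, legacy client, high network latency": 5,
}

def design_survives(circumstance):
    """Stand-in for observing the artifact under one set of circumstances."""
    return "legacy client" not in circumstance   # invented outcome

hardest = max(candidate_circumstances, key=candidate_circumstances.get)
print("assessed under:", hardest, "| survived:", design_survives(hardest))
# Surviving the hardest circumstance lends more credibility than surviving
# many favorable ones; failing it is sufficient grounds to reject or revise.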
Also worth mentioning is a caveat contained in
what we have termed the “logical sufficiency” of a
single observation to reject a theory or design. Logical
sufficiency is not the same as operational sufficiency.
If a researcher uses a single observation unfavorable to
a theory or design to reject it, the observation must first
be accepted as accurate. If there is any serious doubt
about its accuracy, a second or even third experiment
or artifact also yielding an unfavorable observation
would be helpful. However, even if this were the case,
this would still be a far cry from the requirement in
statistical hypothesis testing that there be, say, a
minimum of 30 observations (actually, data points) in a
sample. Furthermore, the additional observations
would not (and, from the perspective of formal logic,
may not) serve the purpose of somehow providing
greater assurance that the theory or design is true, but
would serve the very different purpose of establishing
the accuracy of the observation with which the theory
or design is rejected. Statistical hypothesis testing
(which can address the latter purpose) is not theory
testing (which pertains to the former purpose).
3. The publishing lesson
Another basic lesson of great significance from the
philosophy of science is that a theory, to be considered
worthy of publication, need not be presented with data
testing it and need not be presented in the form of a
single refereed journal article. A case of this is
Einstein’s general theory of relativity.
In 1915, Einstein submitted and presented his
general theory of relativity, in the form of four papers,
to the Prussian Academy of Science [13]. According
to the theory, a massive object bends a ray of light
passing close to it. Thus light that is emitted by a star
and passes close to the sun would have its path
deflected. Such a deflection, however, would hardly
be visible from the earth because of how bright the sun
is. Eddington, an astronomer, solved this problem by
making observations during a solar eclipse.
“Eddington took a team to the island of Principe off
Africa,” where it was known that a solar eclipse would
be visible in 1919. They “were able to make
measurements sufficient to support Einstein's theory.
In particular they were able to rule out Newton's theory
of gravity, which did predict a bending, but only of
half the magnitude” [14]. Thus the data testing
Einstein’s theory were not offered until four years after its
publication.
This case also provides two related lessons. The
first one is that both the formulation and testing of a
theory need not be completed by the same
researcher(s). The work of an additional researcher
may very well be required. The second one is that
statistical inference and statistical sampling need not be
present in theory-building/theory-testing research, even
if the research is quantitative. Einstein developed no
statistical model and Eddington performed no
statistical inference. Both would have been puzzled if
asked, “what sample size did you use?”
We apply this lesson and the two related lessons to
design research as follows. The historical case,
concerning Einstein and Eddington and involving
theory as an instance of design, is sufficient to render
invalid, for design research overall, any general
requirements that 1) a design, to be considered worthy
of publication, must be presented with data assessing it
and be presented in the form of a single refereed
journal article, 2) both the formulation and assessing of
a design must be presented in a single, refereed journal
article and be completed by the same researcher(s), and
3) statistical inference and statistical sampling must be
employed. Because the publishing lesson and two
related lessons stem from the natural sciences, any
researcher who considers the natural sciences to
provide the model for how research should be done
must accept this conclusion. These lessons deserve the
attention of editors and reviewers of design-research
submissions to journals and conferences.
4. The pragmatism lesson
The last lesson that we mention is that design
research in information systems should consider
subscribing to the philosophy of pragmatism as an
alternative to the philosophy of logical positivism. As
already mentioned, logical positivism has been
abandoned by the very school of thought that created it
– the philosophy of science. Thus the philosophy of
logical positivism may not serve as a philosophy of
design. The philosophy of pragmatism, however, has
of late become popular in the history of science [15-17]
and has features that make it particularly relevant to
design research.
The Oxford English Dictionary defines
pragmatism as “the doctrine that an idea can be
understood in terms of its practical consequences;
hence, the assessment of the truth or validity of a
concept or hypothesis according to the rightness or
usefulness of its practical consequences” [18].
Compared to logical positivism, it provides a fresh
perspective in at least five ways.
First, the knowledge examined by pragmatism
includes not only theories crafted by scientists, but also
knowledge held by people in general, including the
managers, executives, and other practitioners
constituting the audience to whom IS researchers want
their research to be relevant. Such knowledge can be a
belief, idea, concept, plan, decision, policy, design, etc.
Second, as its name implies, pragmatism
emphasizes the practical and the consequential, rather
than the theoretical. This is altogether compatible with
design research.
Third, pragmatism does not presume that the
knowledge developed and approved by university
researchers is primary and, in comparison, the
knowledge used by practitioners is an application of
the former and therefore secondary. Pragmatism
leaves room for what IS researchers and others call
“applied research,” but pragmatism does not confine
the knowledge that practitioners may suitably and
successfully use to “applied research” alone.
Pragmatism harbors no preconception that the
knowledge developed and approved by university
researchers is necessarily better than other forms of
knowledge such as the tacit and even subjective
knowledge acquired by experts over years of
experience.
Fourth, pragmatism does not regard the natural
sciences as the model for the social sciences, or the
sciences of the natural [19] as the model for the
sciences of the artificial. Thus, pragmatism frees
researchers from being bound to the natural and social
sciences as the starting point, baseline, or model for
how design research should be performed and
assessed.
And fifth and perhaps most significant, the
practical consequences of interest to pragmatism
include not only truthfulness (e.g., whether the
predictions or other observational consequences of a
scientific theory are upheld by actual observations), but
also the usefulness and moral rightness of a belief,
idea, concept, plan, decision, policy, design, etc.
Pragmatism elevates or restores usefulness and moral
rightness to the same level of importance as
truthfulness. In other words, in IS research informed
by pragmatism, truthfulness (of a theory or any other
concept or belief) does not eclipse usefulness or moral
rightness in importance as a criterion for assessing
whether research is good.
For an example that illustrates pragmatism, we
turn to something with which IS researchers are already
familiar: research as performed in the natural
sciences and in those other disciplines that model
themselves on the natural sciences. Such research can be
viewed as an instantiation of pragmatism, albeit in a
“limiting case” sort of way. The natural and social
sciences are an instantiation of pragmatism in which
design takes the form of theory and the knower is the
scientific researcher working on the theory. For this
person, pragmatism’s “practical consequences” would
be what happens upon the researcher’s application of a
theory in a laboratory, field, organization, population,
or other empirical setting. Then, observations of what
happens (the consequences) would provide justification
for rejecting or not rejecting the theory. And just as
the reasoning in the natural sciences (and also in those
other disciplines that model themselves on the natural
sciences) employs modus tollens, it happens that
reasoning in pragmatism in general also employs
modus tollens.
Modus tollens is the form of the syllogism in
which the major premise is “if p, then q,” the minor
premise is “not q,” and the conclusion is “therefore, not
p.” In the natural sciences and those disciplines that
model themselves on the natural sciences, p is the
theory to be tested and q is a prediction or other
“observational consequence” of the theory when
applied in a particular setting.
In discussing
pragmatism, Hilpinen has stated that, in the reasoning
of pragmatism in general (i.e., not just the limiting case
where it is the reasoning used in the task of testing a
theory in the natural and social sciences), p can be a
proposition describing an “intellectual conception,”
“action,” or “experimental condition,” and q can
describe one or more of p’s “practical consequences,”
such as “an observable phenomenon or a ‘sensible
effect’ ” [20]. And in subscribing to modus tollens and
being proscribed from committing the fallacy of
affirming the consequent, pragmatism also embraces
the logic, already mentioned, that observations
consistent with a theory (or now, design) may never
verify it as true or proven, but a single observation is
logically sufficient to reject it. Thus, researchers
already familiar with the reasoning used in the natural
sciences or in those disciplines that model themselves
on the natural sciences will find something familiar in
the reasoning used in pragmatism.
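A minimal sketch of this reasoning pattern, generalized in the pragmatist spirit, follows. It is our own illustration; the design, its expected consequences, and the assessment of what is observed are all hypothetical, and the three criteria come from the discussion above.

# Major premise: if p (the design is good), then q (its expected practical
# consequences, namely truthful predictions, usefulness, and moral rightness,
# obtain).
expected_consequences = {"truthful": True, "useful": True, "right": True}

def observe_consequences(design):
    """Stand-in for observing what actually happens when the design is used."""
    return {"truthful": True, "useful": False, "right": True}   # invented outcome

observed = observe_consequences("a hypothetical scheduling design")

# Minor premise: not q, since at least one expected consequence failed to obtain.
q_holds = all(observed[name] == value
              for name, value in expected_consequences.items())

# Conclusion by modus tollens: not p, so reject (or revise) the design.
print("reject or revise the design:", not q_holds)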
There are at least two major benefits for design
research that adopts the philosophy of pragmatism in
place of the philosophy of logical positivism.
One major benefit, already mentioned but worth
emphasizing, is that pragmatism does not restrict p to
be a scientific theory. The proposition p can be any
belief, idea, concept, plan, decision, policy, design, etc.
Likewise, pragmatism does not restrict q to be the
predictions or other observational consequences of a
theory; pragmatism expands the domain of q to include
consequences which follow from the given belief, idea,
concept, plan, decision, policy, or design.
Furthermore, the consequences can be assessed for not
only their truthfulness, but also their usefulness and
moral rightness. In not being as restrictive as logical
positivism, pragmatism enjoys a better fit with design
research and is better suited than logical positivism to
serve as a philosophy of design.
Another major benefit of subscribing to the
philosophy of pragmatism is that, along with recent
philosophy of science, it recognizes that the individual
researcher and the research community play
constructive and indispensable roles in the research
process. This stands in contrast to logical
positivism, which presumes that a researcher’s values
and social context can only contaminate the subject
matter and bias the research results.
Logical
positivism’s position is that, ideally, a researcher’s
values and social context should be eliminated from
the research process, or at least their influence should
be minimized and “controlled.”
In the philosophy of science, Thomas Kuhn
recognizes that reasoning and data, alone, are
insufficient to drive the engine of science and that the
influence of individuals and groups is also required:
Some of the principles deployed in my
explanation of science are irreducibly
sociological, at least at this time.
In
particular, confronted with the problem of
theory-choice, the structure of my response
runs roughly as follows: take a group of the
ablest available people with the most
appropriate motivation; train them in some
science and in the specialties relevant to the
choice at hand; imbue them with the value
system, the ideology, current in their
discipline (and to a great extent in other
scientific fields as well); and finally, let them
make the choice. If that technique does not
account for scientific development as we know
it, then no other will. ([21], emphasis added)
Charles S. Peirce, the founder of pragmatism,
takes a position that foreshadows Kuhn’s position.
This is evident in the following commentary on Peirce:
…in [Peirce’s] theory of truth, one means by
truth of belief that if a certain operation is the
subject of continuous scientific inquiry by the
community of investigators, assent to the
belief would increase and dissent decrease
“in the long run.” ... Peirce's concept of the
community of sign users and inquirers also
has social and moral relevance, for it is
nothing less than the ideal of rational
democracy. ([22], emphasis added)
A ramification for design research is that good
research requires not only reasoning and data, but also
the social infrastructure of a research community.
Doctoral education should not focus on procedures of
reasoning and data to the exclusion of the social
development that doctoral students also need in order
to become full members of the research community.
To speak the language and to practice the customs of
the natives who call themselves “design researchers”
would require the acquisition of much tacit knowledge,
available only through socialization.
A related ramification is that, insofar as pragmatic
knowledge emerges from what is referred to (above) as
a “community of investigators,” pragmatism does not
segregate practitioners and researchers from each
other. How might this be achieved? Professions other
than business provide a model. In law, it is considered
normal and healthy for there to be an interchange of
lawyers between the university setting and the field
setting, where professors become, for instance, full-
fledged judges and practicing lawyers become full-fledged professors.
Furthermore, the research
publications of law schools, “law reviews,” publish
articles written by professors as well as by practicing
lawyers. In medicine and architecture, there is the
same interchange. In each of these three professions,
the result is that the practicing professionals in the field
and the researchers in the university together constitute
an instance of Peirce’s “community of investigators,”
who speak the same language and practice the same
customs. These three professions engage in design,
and we note that the designs that emerge from the
legal, medical, or architectural community of inquirers
are considered both rigorous and relevant.
5. Discussion and final thoughts
We have shown that there is a strong parallel
between design and what positivist researchers have
called theory. Because of this strong parallel, those
engaged in design research can learn three lessons
from the philosophy of science.
First, large data sets, important for certain kinds of
statistical testing, are not necessary for many aspects of
the scientific process. This observation frees design
researchers from a fixation on sample size, and allows
them to look closely at a single phenomenon, just as
many natural scientists do.
Next, theory and testing do not have to appear
concurrently in an article. Physics and the other natural
sciences abound with instances of specialization, with
one set of researchers generating theories and another
set testing them.
This lesson will ring true to information systems
designers: it can be difficult to implement a design for
the purpose of statistically testing it, due to the expense
and time related to software engineering. For example,
a new design of a stock exchange calls for an
enormous development effort. Designers in practice do
not resort to statistical testing as a precondition to the
acceptance of a design, so it makes little sense for the
information systems field to insist that any design be
simultaneously tested statistically as a precondition of
publication.
In the third lesson, we pointed out that natural-science research is a pragmatic endeavor. Design is
also a pragmatic endeavor: designs are meant to be
used, and they acquire significance through their use.
This third lesson can be applied to this paper itself:
pragmatically speaking, the value of this paper will
stem from its consequences. Thus, we suggest two
specific topics for future design research, one focused
on design by humans, the second on design by
machines.
We pointed out that neither scientists nor
engineers are in a position to immediately test their
ideas. If a design cannot be tested statistically at the
time of its origin, what should designers use as criteria
for advancing a design to the implementation stage?
Many researchers think that both scientists and
designers work on a principle of simplicity: this idea
was well-articulated by Occam, and has more modern
instantiations in the philosophy of science [23].
Increasingly, scientists are using Bayesian model
comparison, which provides a computational method
for biasing toward simple versus complex theories [24-26]. In design, there is a similar sense of aesthetics
[27]. In software design in particular, it has been
shown that programmers make many judgments on the
basis of a minimalist aesthetic [28-31].
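As a minimal sketch of such model comparison, using hypothetical data of our own invention, the Bayesian Information Criterion (BIC) stands in here for a full marginal-likelihood computation; the comparison prefers the simpler model unless added complexity clearly earns its keep in fit.

import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 30)
y = 2.0 * x + 1.0 + rng.normal(0.0, 0.05, x.size)   # data from a simple linear law

def bic(degree):
    """BIC of a polynomial model of the given degree (Gaussian errors assumed)."""
    coefficients = np.polyfit(x, y, degree)
    residuals = y - np.polyval(coefficients, x)
    n, k = x.size, degree + 1
    rss = float(residuals @ residuals)
    return n * np.log(rss / n) + k * np.log(n)

for degree in (1, 5):
    print("degree", degree, "BIC:", round(bic(degree), 1))
# Lower BIC is better; the linear model typically wins here, because the
# degree-5 model's small gain in fit does not offset its complexity penalty.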
There may be situations in which this aesthetic
leads us to overlook better alternatives. That is, good
ideas can be missed when we make decisions based
purely on simplicity. In the field of biology, it may be a
mistake to assume that minimal theories are the best,
because evolution can easily produce suboptimal
mechanisms in nature [32]. DNA, for example, was
initially ignored because explanations based on
proteins seemed simpler at the time [33]. Recently, a
similar argument has been made in the field of
cognitive science. While Chomsky has claimed that the
brain is an optimized machine for processing language
[34], Marcus has argued that the brain is far from
optimal, a kluge formed by a series of historical
accidents [35]. Indeed, just as biological
mechanisms are sometimes suboptimal, social
mechanisms may also be suboptimal, and thus, in
modeling institutions, the simplest hypothesis may not
necessarily be the most accurate.
These ideas suggest a research program. Designers
of all stripes are trained to optimize for simplicity. Is
this always for the best? Is there an alternative way to
assess designs prior to implementation? These
questions are potentially productive ones for the design
researcher, and they relate to a foundational question of
the sciences of the artificial: should we design optimal
machines for the task, and let humans adapt to
machines, or should we design machines to adapt to
the sometimes suboptimal characteristics of humans
and institutions? While designers often claim they pay
attention to the user, the technology that permeates a
typical call center is an existence proof of design that
mechanizes human activity.
These questions might be answered both by
analysis of designers in the field, as well as with
experiments in which designers are cued to use
different evaluation heuristics. Techniques for eliciting
and understanding expertise, such as those practiced by
Simon and others [36, 37], might be applied to
information systems design activity.
While this first direction for research focuses on
the criteria used by the human designer, there is
another potential direction, focused on the possibility
of automatic design. In our discussion of pragmatism,
we pointed out that many philosophy of science
researchers agree that theories are judged with respect
to their consequences. Thus, the provenance of a
theory is much less important than the eventual effect
the theory has. In other words, it doesn't really matter
where the theory comes from as long as it has
important consequences.
This is in stark contrast to the opinions of many
academics who believe that theories need to be derived
from previous theories (the “literature”) in order to be
considered worthy of testing. There is a valid reason
for desiring theories to be justified based on
provenance: with a plethora of possible theories, the
community wishes to optimize the available time for
testing and peer reviewing the results. But, in the
modern age, new ways of generating and testing
theories become possible. Peirce argued that theories
are generated in a process called abduction that may
take the form of sudden insight [38]. More recently,
people have attempted to automate the abductive
process [39]. If so, theories can be generated
automatically.
In recent experiments, scientists have shown that
an algorithm can generate random theories, combine
them with other theories, test, and refine the theories,
in an entirely automated closed loop that produces
candidate theories for human evaluation [40, 41]. The
program has been able to find theories that had been
found before by humans, but has also found new
theories that humans had not yet discovered. Thus, the
provenance of the theory is of little import: in this case,
theories began in a random manner.
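The following toy loop, far simpler than the cited systems and entirely of our own invention (the hidden law, the expression family, and the recombination rule are hypothetical), conveys the generate-combine-test-refine structure of such a closed loop:

import random

random.seed(1)
data = [(x, 3 * x * x + 2) for x in range(-5, 6)]   # hidden law: y = 3x^2 + 2

def random_theory():
    """A candidate theory here is a pair (a, b) proposing y = a*x^2 + b."""
    return (random.randint(-5, 5), random.randint(-5, 5))

def combine(t1, t2):
    """Crude recombination of two candidate theories."""
    return (t1[0], t2[1])

def error(theory):
    """Test a theory against observations: total misfit of its consequences."""
    a, b = theory
    return sum(abs(y - (a * x * x + b)) for x, y in data)

population = [random_theory() for _ in range(20)]
for _ in range(50):                                  # automated closed loop
    population.sort(key=error)
    survivors = population[:5]
    recombined = [combine(random.choice(survivors), random.choice(survivors))
                  for _ in range(10)]
    population = survivors + recombined + [random_theory() for _ in range(5)]

print("best candidate theory (a, b):", min(population, key=error))
# The survivors' provenance is random generation and recombination, yet the
# loop frequently recovers the hidden law; a human evaluates the candidates.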
In the field of design, there have been parallel
investigations. Methods such as genetic algorithms
have been used to find new forms with functions much
improved over previous humanly generated designs
[42-45]. This work challenges our suppositions about
the roots of creativity: are randomness and combination
as important as expertise? Can we use crowds of
individuals to design through combination [46]? Can
we use crowds of computers? And what of the
aesthetics that humans use: do we discover better or
worse forms if we automate the application of
aesthetics?
Automation has always been a province of
information systems research. Much of this work,
however, has addressed issues of cost reduction and
productivity, avoiding the larger issue of automation's
societal consequences. Simon's vision for the sciences
of the artificial is still exciting to read today [19]. But
we have made progress since then in the social
sciences. For example, we understand much more
about identity and embeddedness [47, 48], about the
ways we function in social networks [49], and we are
beginning to understand the role of electronic
mediation in such networks. Design, as with science, is
a process that affects society. Design researchers in the
field of information systems can and should focus on
the possibilities and pitfalls of design automation.
The fruits from the study of human and automated
design processes will be of interest not just to the
information systems community, but to all in the
cognitive and computational communities that are
intent on modeling human cognition and understanding
the roots of individual and social creativity. Thus, by
focusing on design in a pragmatic fashion, information
systems researchers may find a vein of work that
renews the field by addressing timely issues with
societal consequences.
Acknowledgment
This material is based in part upon work supported by
the National Science Foundation under Grant No. IIS-0855995.
References
[1] Korzybski, A., Science and Sanity: An Introduction to
Non-Aristotelian Systems and General Semantics,
Institute of General Semantics, 1994.
[2] Ayer, A. J. (ed.), Logical Positivism, New York: Free
Press, 1959, pp. 60-81.
[3] Putnam, H., “Problems with the
Observational/Theoretical Distinction,” in Scientific
Inquiry, Robert Klee (ed.), New York: Oxford
University Press, 1999, pp. 25-29.
[4] Quine, W.V.O., “Two Dogmas of Empiricism,”
Philosophical Review, January 1951.
[5] Schön, D. A., The Reflective Practitioner: How
Professionals Think in Action, New York: Basic Books,
1983.
[6] Bernstein, R. J., The Restructuring of Social and
Political Theory, New York: Harcourt Brace
Jovanovich, 1976, p. 207.
[7] Lee, A.S. and Hubona, G.S., “A Scientific Basis for
Rigor and Relevance in Information Systems Research,”
MIS Quarterly, 33, 2009, pp. 237-262.
[8] Lee, A.S. and Baskerville, R.L., “Generalizing
Generalizability in Information Systems Research,”
Information Systems Research, 14, 2003, pp. 221-243.
[9] Hume, D., An Enquiry Concerning Human
Understanding, P.F. Collier & Son, 1910 [1748].
[10] Goodman, N., Fact, Fiction, and Forecast, Harvard
University Press, 1955.
[11] Hevner, A.R., March, S.T., Park, J., and Ram, S.,
“Design Science in Information Systems Research,”
MIS Quarterly, 28, 2004, pp. 75-105.
[12] Salmon, W. C., “Problem of Induction,” in The
Cambridge Dictionary of Philosophy, Robert Audi (ed.),
Second Edition, Cambridge University Press, 1999, pp.
745-756.
[13] Salisbury, D., Book review of “Jürgen Renn (ed.): The
Genesis of General Relativity. Sources and
Interpretations,” General Relativity and Gravitation, 41,
2009, pp. 661–668.
[14] Entry for “Eddington, Arthur,” in The Oxford
Companion to Cosmology, Andrew Liddle and Jon
Loveday, Oxford University Press, 2008, Oxford
Reference Online.
[15] Lewis, C. I., Mind and the World Order: Outline of a
Theory of Knowledge, 1929.
[16] Van Fraassen, B. C., “The Pragmatics of Explanation,”
American Philosophical Quarterly, 12, 1977, pp. 143-150.
[17] Godfrey-Smith, P., Theory and Reality, University of
Chicago Press, 2003.
[18] “Pragmatism,” Oxford English Dictionary Online.
[19] Simon, H., Sciences of the Artificial, MIT Press, 1969.
[20] Hilpinen, R., “Peirce, Charles Sanders,” in The
Cambridge Dictionary of Philosophy, Robert Audi (ed.),
Second Edition, Cambridge University Press, 1999, p.
652.
[21] Kuhn, T.S., “Reflections on My Critics,” in Criticism
and the Growth of Knowledge, Imre Lakatos and Alan
Musgrave (eds.), Cambridge University Press, 1970, pp.
231-278.
[22] Thayer, H. S., “Pragmatism,” Encyclopædia Britannica
Online, 2009.
[23] McAllister, J. W., “The Simplicity of Theories: Its
Degree and Form,” Journal for General Philosophy of
Science, 22, 1, 1991, pp. 1-14.
[24] MacKay, D. J. C., Information Theory, Inference, and
Learning Algorithms, Cambridge University Press,
2003.
[25] Salmon, W. C., “The Status of Prior Probabilities in
Statistical Explanation,” Philosophy of Science, 32, 2,
pp. 137-146.
[26] Salmon, W. C., “Rationality and Objectivity in Science,
or Tom Kuhn Meets Tom Bayes,” in Scientific Theories,
C. W. Savage (ed.), Minnesota Studies in the Philosophy
of Science, University of Minnesota Press, 14, 1990, pp.
175-204.
[27] Maeda, J., Laws of Simplicity, MIT Press, 2006.
[28] Case, P., and Pineiro, E., “Aesthetics, Performativity
and Resistance in the Narratives of a Computer
Programming Community,” Human Relations, 59, 2006,
pp. 753-782.
[29] MacLennan, B. J., “Aesthetic Values in Technology and
Engineering Design,” Philosophy of Technology &
Engineering Sciences, Vol. 9 part V.5 of Handbook of
the Philosophy of Science, D. Gabbay, P. Thagard, and
J. Woods (eds.), Elsevier Science, 2006.
[30] Chopra, S. and Dexter, S., “Free Software and the
Aesthetics of Code,” in S. Chopra and S. Dexter (eds.),
Decoding Liberation, Routledge, 2007.
[31] Cunningham, D. W., Meyer, G., and Neumann, L.,
Computational Aesthetics 2007, A K Peters LTD, 2008.
[32] Crick, F., What Mad Pursuit: A Personal View of
Scientific Discovery, New York: Basic Books, 1990.
[33] Gernert, D., “Ockham's Razor and Its Improper Use,”
Journal of Scientific Exploration, 21, 1, 2007, pp. 135-140.
[34] Chomsky, N., The Minimalist Program, MIT Press,
1995.
[35] Marcus, G., Kluge: The Haphazard Construction of the
Human Mind, New York: Houghton Mifflin, 2008.
[36] Simon, H. A., and Chase, W. G., “Skill in Chess,”
American Scientist, 61, 1973, pp. 394-403.
[37] Larkin, J. H., McDermott, J., Simon, D. P., and Simon,
H.A., “Expert and Novice Performance in Solving
Physics Problems,” Science, 208, 1980, pp. 1335-1342.
[38] Peirce, C. S., Collected Papers, Cambridge: Belknap
Press of Harvard University Press, 1931.
[39] Fox, R., and Josephson, J. R., “Peirce: A Tool for
Abductive Inference,” ICTAI 1994, pp. 571-577.
[40] Waltz, D., and Buchanan, B. G., “Automating Science,”
Science, 324, 2009, pp. 43-44.
[41] Schmidt, M., and Lipson, H., “Distilling Free-Form
Natural Laws from Experimental Data,” Science, 324,
2009, pp. 81-85.
[42] J. S. Gero and M. L. Maher (eds.), Modeling Creativity
and Knowledge-Based Creative Design, Hillsdale, New
Jersey: Lawrence Erlbaum, 1993.
[43] Gero, J. S. and Louis, S., “Improving Pareto Optimal
Designs Using Genetic Algorithms,” Microcomputers in
Civil Engineering, 10, 4, 1995, pp. 241-249.
[44] Bentley, P. J. (ed.), Evolutionary Design by Computers,
San Francisco: Morgan Kaufman, 1999.
[45] Saunders, R. and Gero, J.S., “How to Study Artificial
Creativity,” in T. Hewett and T. Kavanagh (eds.),
Creativity and Cognition 2002, New York: ACM Press,
pp. 80-87.
[46] Nickerson, J. V., Zahner, D., Corter, J. E., Tversky, B.,
Yu, L., and Rho, Y., “Matching Mechanisms to
Situations through the Wisdom of the Crowd,”
International Conference on Information Systems, 2009.
[47] White, H. C., Identity and Control: How Social
Formations Emerge, Princeton University Press, 2008.
[48] Granovetter, M., “Economic Action and Social
Structure: The Problem of Embeddedness,” American
Journal of Sociology, 91, 3, 1985, pp. 481-510.
[49] Watts, D.J., “The New Science of Networks,” Annual
Review of Sociology, 30, 2004, pp. 243–270.