
ADDIS ABABA UNIVERSITY
COLLEGE OF HUMANITIES, LANGUAGE STUDIES,
JOURNALISM AND COMMUNICATION, GRADUATE
PROGRAMME, DEPARTMENT OF FOREIGN LANGUAGES AND
LITERATURE
THE EFFECTS OF EXPLICIT INSTRUCTION IN CRITICAL
THINKING ON STUDENT ACHIEVEMENT IN WRITING
ACADEMIC PAPERS, GENERAL CRITICAL THINKING ABILITY,
AND CRITICAL THINKING DISPOSITIONS
BY
ADEGE ALEMU
A dissertation submitted in partial fulfillment of the requirements
for the degree of Doctor of Philosophy in Teaching English as a Foreign
Language (TEFL)
(Addis Ababa University)
April 2016
ADDIS ABABA UNIVERSITY
COLLEGE OF HUMANITIES, LANGUAGE STUDIES, JOURNALISM
AND COMMUNICATION, GRADUATE PROGRAMME, DEPARTMENT OF
FOREIGN LANGUAGES AND LITERATURE
THE EFFECTS OF EXPLICIT INSTRUCTION IN CRITICAL THINKING ON
STUDENT ACHIEVEMENT IN WRITING ACADEMIC PAPERS, GENERAL
CRITICAL THINKING ABILITY, AND CRITICAL THINKING DISPOSITIONS
BY
ADEGE ALEMU
Approved by:
____________________________
Advisor
____________________________
Examiner
____________________________
Examiner
________________________
Signature
_________________________
Signature
__________________________
Signature
DECLARATION
This PhD thesis incorporates original research conducted by the author
and includes no material accepted for any other academic award in any
university. To the best of my knowledge, it does not include any material
authored by another person, except where duly referenced.
CHAPTER I
Introduction
1.1. Background of the Study
The teaching of higher-order cognitive skills, such as critical thinking, is not an entirely new
phenomenon. It is rooted in Greek philosophy, was championed by Dewey in the post-World War I
United States, was modified by Bloom in the 1950s, and became popular in the 1990s. From the time
of Socrates to contemporary concerns about the need for an educated citizenry and a quality workforce, the ability to think critically and to reason well has been regarded as an important and
necessary outcome of education (Reed, 1998).
Today, more than ever before, teaching students to think critically and reason well is considered
central to liberal education (Giancarlo and Facione, 2001). Critical thinking is always a process
that involves actively thinking through a subject, problem, or content, and evaluating all the
steps in one's own thinking process or the thinking process of others (Sims, 2009:3). Critical
thinking, it is also argued, is the intellectually disciplined process of actively and skillfully
conceptualizing, applying, analyzing, synthesizing, and/or evaluating information gathered from,
or generated by, observation, experience, reflection, reasoning, or communication, as a guide to
belief and action (Paul and Scriven, 2004; Sezer, 2008). It is a process that governs practice and
enables the use of cognitive skills or strategies that increase the probability of a desirable outcome
(Halpern, 1996/1998/2007; Baker, Rudd and Pomeroy, 2000).
Dewey (1933) pointed out that learning to think is the central purpose of education. The National
Education Goals Panel in the USA, for example, identified the need for a substantial increase in
“the proportion of college graduates who demonstrate an advanced ability to think critically,
communicate effectively, and solve problems” (National Education Goals Panel, 1991). Osborne
(1932:402) also stated that “…it is assumed that the development of thought power is one of the
major aims of education.” Dressel and Mayhew (1954) believed that educational institutions
were responsible for teaching students to go beyond the simple mental activities of recall and
restatement of ideas and facts to the higher-level skills and habits involved in critical thinking. To
some scholars, including Michael Scriven, “training in critical thinking should be the primary task
of education” (1985, p.11).
Recent studies, for instance, Bok (2006) and Elder and Paul (2009), report that critical thinking
is of utmost importance in the higher education setting, with 90% of instructors polled agreeing
that it is the most important component of undergraduate education. A primary objective of
undergraduate education is the development of critical thinking skills (Abrams, 2005). Moreover,
Bok (2006) identifies and describes at length eight broadly stated purposes for undergraduate
education (i.e., ability to communicate, critical thinking, moral reasoning, preparing citizens,
living with diversity, living in a more global society, developing a breadth of interests, and
preparing for work). Bok (2006) asserts that critical thinking and moral reasoning skills are
foundational to the further development of students. According to Bok, critical thinking is a major
learner outcome for the 21st century.
Until very recently, however, the teaching of critical thinking has been teacher-centered, and
was simply viewed as an implicit goal of teaching the contents of a course. It was generally
assumed that students who attended college would develop critical thinking skills by attending
classes, by listening to lectures, and participating in class discussions, and by taking tests and
completing regular course assignments. Most teachers devise their instructional methods based on
the assumption that 1) lecture content can be absorbed with minimal intellectual engagement on
the part of students, 2) students can learn important content without much intellectual work, and
3) memorization is the key to learning, so that students need to store up lots of information (that
they can use later when they need it) (Paul & Elder, 2007).
While this traditional model of training students might have served the needs of university
education for many years, it has increasingly been criticized for not adequately preparing
university graduates to deal with today's complex situations. This is because it has socialized students to
receive rather than to search for knowledge: the knowledge transmitted is seen as authoritative,
to be passed on from teachers to students, and the assumption is that knowledge and procedures have
to be instilled in students, who are passive recipients in the learning process (Ramsden, 1992).
Though research findings on the most effective instructional methods for improving students'
critical thinking abilities have been inconclusive, several studies have indicated that the traditional
implicit modeling of critical thinking, with a few scattered lessons providing critical thinking
practice, is not likely to be effective for most students (Reed, 1998; Pescatore, 2007), and that
improving students' thinking requires more explicit teaching of critical thinking skills (Bangert-Drowns and Bankert, 1990; Halpern, 1998; Keeley, Browns, & Kreutzer, 1982; Perkins, 1989;
Quellmalz, 1989; Underbakke, Borg, and Peterson, 1993). Teachers can no longer be mere information
givers; students must learn thinking and reasoning skills to reach their fullest potential, and
this can be done explicitly and directly in an integrated manner (Meyers, 1986; Fisher, 2001).
Research on the most effective instructional methods for improving students' critical
thinking abilities has also noted that critical thinking skills and abilities are unlikely to develop
in the absence of explicit instruction (Abrami et al., 2008; Case, 2005; Facione, 1990; Halpern,
1998; Paul, 1992), and that this explicit instruction should also attend to the dispositional or affective
components of critical thinking (Facione, 1990).
Explicit instruction, unlike the implicit model, often involves 'some sort of rule being thought
about during the learning process' (Dekeyser, 1995). In other words, learners are encouraged to
develop metalinguistic awareness of the rule, which can be achieved deductively (i.e., by providing
the learners with a grammatical description of the rule) or inductively (i.e., by assisting learners to
discover the rule by themselves from the data provided). Explicit instruction, therefore,
necessarily constitutes direct intervention (Ellis, 2005). Instruction as direct intervention, according
to Ellis, involves the pre-emptive specification of what it is that the learners are supposed to learn
and, typically, draws on a structural syllabus (p. 173).
Learning academic writing skills at university in an EFL/ESL context is essential for students
pursuing their undergraduate education (Geremew, 1999), but it is challenging and takes a
great deal of hard work. Most professors require their students to critique books, term/research
papers, articles, academic essays, films, and formal reports related to the content of their courses.
These activities require students to think critically about how they approach problems, questions,
and issues, and to exercise affective dispositions such as open-mindedness and diligence in seeking relevant
information, being systematic in analyzing information, and drawing inferences that can reasonably be
supported by facts. In the students' previous education, “good writing” might have meant
mastering the basics of organization, grammar, and spelling. Although these are essential, as
college/university academic writers, students are expected to do more: to write with depth,
insight, and analytical understanding. In order to achieve this level of sophistication in writing,
students need to develop comparably advanced thinking abilities. We can't, after all, write better
than we think (Chaffee, McMahon & Stout, 2002). According to Chaffee et al. (2002), students'
writing abilities can be improved while they develop their critical thinking abilities (p.2). Price
and Hamington (2010:45) have also pointed out that “thinking time” is more important than
“writing time” and is worth investing in, especially if you jot down notes about your developing
ideas and evaluate them later for accuracy. Although the speculation about the link between
critical thinking and academic writing appears plausible, it should be tested empirically. Research
into the reciprocal relationship between the two concepts will address this gap in the existing
knowledge.
Thus, to examine whether instruction in critical thinking helps EFL undergraduate students develop and
better demonstrate this skill (the skill of critical thinking), develop and perform better in
writing academic papers, and develop positive dispositions toward critical thinking, explicit
(rather than the traditional implicit) instruction in critical thinking was employed in order to see
the effectiveness of this strategy for building critical thinking skills in classroom learning.
To this researcher's knowledge, critical thinking has not
been taught explicitly in the context of university (undergraduate) education in Ethiopia. Studies
of the impact of critical thinking on student academic outcomes (Meyers, 1986, pp. 175-6), for
example, indicate that discussion, or some other provision for ensuring that students do some sort
of thinking about what they will write, appears to result in more extensive and possibly better
writing than when students are asked to write without an explicit provision for thinking about
ideas. Providing explicit instruction in critical thinking, rather than simply viewing critical
thinking as an implicit goal of a course, is most crucial (Hove, 2011; Pescatore, 2007; Reed,
1998). Thus, based on this review of studies and published academic prose, the
researcher concluded that explicit instruction in critical thinking was by far the most important
form of instruction to test empirically within the content of academic writing, for building
critical thinking skills and helping students develop academic essay writing skills in the classroom.
Thinking critically, as an essential part of our higher-order cognitive skills and the most important
component of undergraduate education, includes many other sets of higher-order
mental processes. These include problem-solving activities, reasoning and reflective thinking,
analysis, synthesis, inference, evaluation and decision making, creative thinking, (fluid)
intelligence, and self-regulation. Thus, every classroom activity that aims to develop students'
critical thinking abilities can lead to an environment that enhances these basic skills in students'
learning (Norris and Ennis, 1989; Sumner, 1906; Dewey, 1909; Glaser, 1941; Skinner, 1959; Paul
and Elder, 2004/2006/2008/2009; Lau and Chan, 2004/2012). Though several studies have
recognized that improving students' abilities to think critically is crucial for their professional
and personal success, such recognition is largely confined to North America, the UK, and
some parts of the Asia-Pacific region (Ab Kadir, 2007). Challenging students to think critically
about academic subjects, and to develop the reasoning abilities they need to deal successfully with
real-world reasoning tasks in life, is rarely practiced in the EFL context in Ethiopia. This observation
has, therefore, further motivated this researcher to examine empirically whether university
undergraduate students in Ethiopia would be better able to demonstrate critical thinking abilities,
and use these same abilities to perform better academically in their learning and in everyday
reasoning tasks, after having received a semester-long specific critical thinking strategy
instruction.
Today, ever-increasing knowledge and advancing technologies dictate that graduating students
should have the skills and dispositions to keep up to date with professional knowledge and
development (Facione, 1990). Some leading critical thinking theorists (Paul, 1993; Kennedy,
Fisher & Ennis, 1991; Byrne & Johnston, 1987) believe that the disposition, or habit of mind, to
think critically is crucial for critical thinking. Despite this recognition, the dispositional aspects of
critical thinking have been under-discussed and under-researched. To some extent, this may be due
to the late development of a suitable instrument for assessing critical thinking dispositions
(Spicer & Hanks, 1995). While there have been a number of instruments for assessing critical
thinking skills, measurement of critical thinking dispositions was not possible until 1992, when the
California Critical Thinking Disposition Inventory (CCTDI) was first introduced. The importance
of measuring critical thinking dispositions has been brought to light by Facione and
Facione (1997), the team that first used the CCTDI to measure nursing students' critical thinking
dispositions in the United States. In their extensive study, it was revealed that one group of
students (the RN-to-BSN part-time students) scored significantly lower on the CCTDI at exit
from the nursing programme, compared with their entry scores. In other words, as these students
progressed through the programme, their enthusiasm for higher-order thinking actually
diminished. This was an unexpected finding, as it would be reasonable to assume that exposure to
new frameworks of knowledge and clinical judgement would excite rather than dampen their
enthusiasm. One suggestion is that the difficulties of combining work and study might have
caused them to become disillusioned about what the educational process has to offer. Tiwari
(1998), a researcher at the University of Hong Kong, conducted a study with Hong Kong students
who also pursued their academic nursing programmes on a part-time basis, to investigate whether similar
results would be found with these students. In her study, Tiwari found that her students scored
significantly higher at the end (posttest) of the nursing programme on most of the dispositional
aspects. This result is not consistent with the earlier one (in the United States). If this is indeed
the case, one wonders whether results similar to those of the US students, or different, as with the Hong Kong
students, would be found with other students who pursue their academic EFL writing programmes
in regular classes. At this point in time, as far as the present researcher knows, no research data are
available that would allow an analysis of the critical thinking dispositions of university
undergraduate students in Ethiopia. So, in order to understand the pattern of their critical thinking
dispositions before and after a semester-long, explicit, and intensive training in critical thinking
strategies, an empirical test of local undergraduate students' critical thinking dispositions was
therefore crucial.
1.2. Statement of the Problem
Today, the world needs people with qualities of critical thinking to meet the growing challenges
in life. The demands of employment in a global economy, the survival of a democratic way of
life, and personal decision making in a complex and rapidly changing society all require people
who can reason well and make good judgments. As a country moves toward a technology-based
economy, it needs trained people who can face world-wide competition, meet employers'
demands, think flexibly and analytically, integrate information from a variety of sources and
perspectives, and make profitable decisions effectively and efficiently. Psychologists,
philosophers and educators (for example, Goodlad and McMannon, 1997; Halpern, 1998/2007;
Hunt, 1995; King, 1994) argue that making sound personal and civic decisions requires the ability
to evaluate, analyze, interpret and synthesize accurately information from different sources, and
for students, workers, and citizens, critical thinking is an essential tool for performing
successfully in a complex and rapidly changing world. In each of these roles, as David Perkins
(1989) points out, we must
examine the factors impinging on a situation, forecast the outcomes of possible
courses of action, evaluate those outcomes and weigh them relative to one another,
and try to choose so as to maximize positive outcomes and minimize negative ones.
Further, the beliefs we hold, and consequently the inferences we later make and
attitudes we later assume, depend in part on our reasoning about the grounds for those
beliefs. Accepting beliefs wisely serves the ultimate end of later sound conduct as well as
the more immediate end of sound belief itself (in Reed, 1998, p. 2).
Developing the ability to think critically among undergraduates is an essential life skill and has
gained extraordinary attention over the past two decades. Critical thinking can be developed
among undergraduates (Halpern, 1998; Pascarella, 1999; Tsui, 2002), especially if critical
thinking instruction and student practice are embedded across the curriculum (Kelly-Riley et al.,
2007; Mesher, 2007). School authorities and researchers in Ethiopia (Atkins, Gebremedhin and
HaileMichael, 1996; Geremew, 1999), for instance, also acknowledge the importance of
promoting students' critical thinking abilities and skills at universities.
Despite the widespread expressions of concern about developing critical thinkers, observations
and existing empirical studies have shown that most schools, colleges and universities are neither
challenging students to think critically about academic subjects nor helping them develop the
reasoning abilities needed to deal successfully with the complexities of modern life. Though
active learning methods and student-centered teaching (which cause students to think about what they
do) were intended to govern educational practices in schools and universities (Ministry of Education, 1994),
our educational system continues to provide students with the traditional model of instruction
(Dawit, 2008), and to graduate students who do not reason well. The faulty everyday reasoning
and poor argumentation skills used by most students (both orally and in writing) indicate that
even a college/university education appears to have a limited effect on graduates' critical thinking
abilities, including making reasonable interpretations of texts and formulating unbiased and well-reasoned arguments.
In a preliminary (unpublished) study conducted by this researcher on “Critical Thinking
Pedagogy in EFL Classrooms at Jimma University” (2009), although a large majority of
instructors (71%) stated that critical thinking is an important goal of their instructional objectives
and/or practices, only 2% of the EFL teachers of the university (N=33) brought
explicit modeling of critical thinking into their classroom instruction, and 5% brought critical thinking
assessment into their assignments and examinations. The main reasons for this shortcoming,
according to the teacher respondents, are: (a) the teachers were not trained in critical thinking
during their own university education and do not even know what is meant by critical
thinking; (b) there are few or no standard textbooks and/or reference books available on critical
thinking; (c) the teachers also have difficulty using critical thinking in their instruction; (d)
the teachers have no time and no instructional resources to integrate critical thinking skills into their
classroom pedagogy; and (e) the few teachers who do address it teach implicitly (indirectly, in the form of
pre- and post-instruction questions) rather than explicitly (directly, in focused instruction). These shortcomings
matter greatly because critical thinking is highly correlated with students' achievement (see the
results of the present study). Unfortunately, this finding indicates that while concern about critical
thinking is widespread (71%, see above), explicit instruction in critical thinking is still not
occurring on a broad scale. “Everyone agrees that students learn in college, but whether they learn
to think is more controversial” (McKeachie, cited in Joscelyn, 1988).
As is the case in many other countries, the education policy introduced in Ethiopia clearly
stipulated that the pedagogical implications of constructivism, namely active learning methods and
student-centered teaching, would govern instructional practices in schools (Ministry of Education,
1994). Existing empirical evidence, however, shows that education in Ethiopia is still
characterized by traditional teaching methods (Dawit, 2008, p. 59). Unfortunately, the traditional
model of education has socialized students to receive rather than to search for knowledge. In this
mode of teaching, the knowledge transmitted is seen as authoritative, to be passed on from
teachers to students. The assumption is that knowledge and procedures have to be instilled in
students, who are passive recipients in the learning process (Ramsden, 1992). This method also
reinforces the traditional implicit model of critical thinking instruction, in which critical thinking
is simply viewed as an implicit goal of a course (Reed, 1998; Pescatore, 2007).
Moreover, although graduating students who can think critically and compete in a modern, global
economy was determined to be a goal for Ethiopian education by the 'National Educational
Goals' (Ministry of Education, 1994), the college education system appears to have a limited
effect on undergraduate students' critical thinking abilities, including making reasonable
interpretations of texts and formulating well-reasoned arguments and decisions in and outside the
classroom. Most educators have become overly focused on teaching content through lecture
(the traditional form of instruction). According to Limbach & Duron (2006), however, it is very
difficult to increase a student's critical thinking skills with the lecture format. Topics are
discussed sequentially rather than critically, and students tend to memorize the material, since the
lecture method facilitates the delivery of large amounts of information (p.160). The implicit
modeling of critical thinking, with a few scattered lessons providing critical thinking practice, is
not likely to be effective for most students (Reed, 1998; Pescatore, 2007). According to Hove
(2011), in order to better prepare our students for the challenges they will face, universities need
to explicitly teach critical thinking techniques, equipping young people with twenty-first-century
skills. The most essential implication of these and other studies may be the importance of
recognizing the need for explicit, scaffolded, and intense training in critical thinking.
Thus, taking into account the situation discussed above, this study was an attempt to review the
related literature to identify effective teaching and learning strategies, and to investigate the extent to
which training in these critical thinking strategies improves students' critical thinking skills, and
whether such skills, once developed, would lead students toward the desired performance in writing
academic essays and dispositions toward using critical thinking. Mendelman (2007) asserted that
“critical thinking should be taught in virtually every course in the humanities” (p.300). In this
study, however, attention was given to an academic EFL writing skills course. This was
because writing for academic purposes is a skill needed by undergraduate and
even graduate students of all disciplines across the universities in the country, and 'academic
writing' is an area that EFL/ESL undergraduate and even graduate students find among the most
difficult of courses; non-native-speaking students experience a great deal of difficulty in their
studies at the college and university level (Hinkel, 2002; Johns, 1997; Johnson, 1989a; Jordan,
1997; Leki & Carson, 1997; Prior, 1998; Santos, 1988 in Hinkel, 2004), even in English-speaking
countries. Moreover, studies carried out on Addis Ababa University students' academic writing
skills (Geremew, 1999; Haregewein, 2008), for instance, have shown that many students fail to
meet the standards of writing proficiency expected of them by their instructors at this level of higher
education.
So far, to this researcher's knowledge, no research of this sort has been conducted in the context
of Ethiopia. Some studies related to the teaching of EFL academic writing in Ethiopia (for
example, Geremew, 1999; Italo, 1999; Dawit, 2003; Tesfaye, 1991; Mesfin, 2004; Haregewoin,
2008; Alamirew, 2005) have been undertaken. However, these researchers were not specifically
concerned with the effects of explicit instruction in critical thinking techniques on
undergraduate students' abilities to think critically (about everyday issues), on learning to write
academic essays thoughtfully, and on dispositions toward thinking critically.
This study, therefore, was an attempt to add to the knowledge of how explicit instruction in
critical thinking results in improved performance in students' critical thinking ability, and how
this ability to think critically would, in turn, help students achieve better academic results in essay
writing and develop stronger dispositions in an EFL environment, by assessing the effectiveness of
integrating explicit instruction in critical thinking techniques into the content of an undergraduate
academic writing skills course. The critical thinking teaching techniques suggested by the Delphi Study
Report (Facione, 1990) were chosen as the instructional strategies to encourage the development
of critical thinking skills and abilities in this study. The researcher selected these instructional
techniques because they constituted the most comprehensive set of critical thinking strategies
available to date. They included the fundamental steps of understanding information and identifying
assumptions, and highlighted the importance of identifying implications for action and determining
the credibility of logic by judging the quality of information cited. In addition, the Delphi Study
clearly identified measurable skills (interpretation, analysis, evaluation, inference, explanation, and
self-regulation). It also includes affective dispositions (see Chapters II & III for more details).
Moreover, these instructional techniques were used in this study because they had not previously been
tested empirically in EFL pedagogy in general, or in teaching writing for academic purposes in
particular, in the context of Ethiopia.
1.3 Objectives of the Study
According to the current literature (Hove, 2011), for example, 'students will be better able to
demonstrate critical thinking and perform better academically after having received specific
critical thinking strategy instruction'. The purpose of this study was, thus, to examine empirically
the effectiveness of explicit critical thinking strategy instruction on university undergraduate
students' abilities to think critically about their writing in the disciplines and about everyday
issues that require them to reason well, and on their dispositions toward critical thinking in general. The
technique was used to instruct students in analyzing academically written documents or source
readings so that students might (1) develop the abilities needed to think critically about their
academic writing, for example, evaluating, interpreting, and integrating information from different
sources and constructing and arguing a case to explain the evidence, and (2) use those same
abilities for everyday reasoning tasks. Specifically, the study set out to answer the following
questions:
1. Will a group of undergraduate EFL academic writing students who receive explicit instruction
in critical thinking techniques perform better on a test that requires them to analyze, interpret, and
write an academic essay/paper on a set of different topics than students not receiving explicit
instruction in critical thinking techniques?
2. Will a group of undergraduate EFL academic writing students who receive explicit instruction
in critical thinking techniques perform better on a test that requires an evaluation of arguments on
an everyday issue than students not receiving explicit instruction in critical thinking techniques?
3. Will a group of undergraduate EFL academic writing students who receive explicit instruction
in critical thinking techniques differ in their performance in different components of critical
thinking in the Ennis-Weir (general) Critical Thinking Essay Test from students who did not
receive such training in critical thinking techniques?
4. Will a group of undergraduate EFL academic writing students who receive explicit instruction
in critical thinking techniques differ in their dispositions (attitudes) toward the use of critical
thinking skills from students who did not receive explicit instruction in critical thinking
techniques?
5. Will there be a positive correlation between achievements in Academic Writing Skills, Critical
Thinking Ability, and Dispositions toward Critical Thinking?
Statement of Hypotheses
Based on the above research questions, the following null hypotheses were formulated and tested:
H01: There will be no significant difference in the mean academic writing skills scores, as
measured by an instructor/researcher-developed academic writing essay test, between students who
received explicit instruction in critical thinking techniques and students who did not receive
explicit training in critical thinking techniques (CTT).
H02: There will be no significant difference in the mean critical thinking abilities scores as
measured by the Ennis-Weir Critical Thinking Essay Test between students who received explicit
instruction in critical thinking techniques and students who did not receive explicit instruction in
critical thinking techniques (CTT).
H03: There will be no significant difference in the mean critical thinking performance scores in
different components of critical thinking in the Ennis-Weir Critical Thinking Essay Test between
students who received explicit instruction in critical thinking techniques and students who did not
receive explicit instruction in critical thinking techniques (CTT).
H04: There will be no significant difference in the mean critical thinking disposition scores as
measured by the California Critical Thinking Disposition Inventory (CCTDI) between students
who received explicit instruction in critical thinking techniques and students who did not receive
training in critical thinking techniques (CTT).
H05: There will be no positive correlations between achievements in Academic Writing Skills,
Critical Thinking Ability, and Dispositions toward Critical Thinking.
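To make the statistical logic of these hypotheses concrete, the following is a minimal illustrative sketch in Python (using SciPy), not the analysis actually carried out in this study: H01 to H04 imply independent-samples comparisons of group means, and H05 implies Pearson correlations among the three measures. All variable names and score values below are hypothetical placeholders.

# Illustrative sketch only: hypothetical posttest scores; not data from this study.
from scipy import stats

treatment_writing = [72, 68, 75, 80, 66, 71, 77, 69]  # explicit CT instruction group
control_writing = [65, 63, 70, 68, 60, 62, 66, 64]    # comparison group

# H01-H04: compare the two group means with an independent-samples t-test
t_stat, p_value = stats.ttest_ind(treatment_writing, control_writing)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")  # reject the null hypothesis if p < .05

# H05: correlations among writing achievement, CT ability (Ennis-Weir), and CT disposition (CCTDI)
ennis_weir = [21, 18, 24, 26, 17, 20, 23, 19]
cctdi = [290, 275, 305, 315, 270, 285, 300, 280]
r_ability, p_ability = stats.pearsonr(treatment_writing, ennis_weir)
r_disposition, p_disposition = stats.pearsonr(treatment_writing, cctdi)
print(f"writing vs CT ability:     r = {r_ability:.2f}, p = {p_ability:.3f}")
print(f"writing vs CT disposition: r = {r_disposition:.2f}, p = {p_disposition:.3f}")

Whether a parametric t-test or a non-parametric alternative is appropriate would, of course, depend on the distributional properties of the actual data; the procedures actually used are described later in the thesis.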
1.4. Significance of the Study
The absence of local, empirical data related to students' critical thinking and its application to
language education indicates the need for this study. To this effect, it can be assumed that this
study might provide some useful information about the teaching and learning of critical thinking and
EFL in general, and of academic EFL writing skills in particular. The following are some of the ways
in which this study is significant:
First, while research on students' critical thinking has been well underway in other parts of the
world such as the United States and Europe, studies into undergraduate students' critical
thinking in Ethiopia are virtually non-existent. This study could, therefore, provide impetus for
the teaching profession to pursue a systematic approach to promote a critical thinking pedagogy
in language education.
Secondly, although studies into the critical thinking abilities and dispositions of various cultural
groups have been conducted widely in the US and other countries, for example, Hong Kong
(China), none of these involves our country. Given the speculation that our thinking may be
distinct because our culture and education system differ from those of these countries, this is a
significant gap in the understanding of critical thinking. This study seeks to contribute to the body
of knowledge about how our culture and education system might lead to similar or different
development of critical thinking skills and dispositions.
Finally, even though much research (e.g., on EFL speaking and reading skills, history, etc., in other
countries such as the USA and Iran) has been conducted on the critical thinking approach to learning,
very little is known about its effect on students' academic writing abilities. In addition, even in
areas where learning critical thinking has proved to be effective in general education, it is
uncertain whether such effectiveness will be repeated when critical thinking strategy instruction
is applied specifically to different subject areas. This study provides language educators and
curriculum designers with an understanding of how critical thinking could be the basis for designing
programs and course materials that promote students' learning and their use of critical thinking
abilities for transfer across domains.
1.5. Operational Definitions of Terms
As critical thinking, critical thinking disposition, argument, etc. can be interpreted in a number of
ways depending on one's purpose, there is a need to clarify their operational definitions as used in
this study. The following terms are, therefore, defined for use in this study:
Critical thinking: for the purpose of this study, the definition of critical thinking is operationally
derived from the Delphi Report on Critical Thinking (American Philosophical Association, 1990).
Critical thinking is operationalized as a process of purposeful judgment based on reasoned
consideration of evidence in the context of making a decision about a problem, goal, or desired
outcome. Further, this process of purposeful judgment is subjected to ongoing self-appraisal (Tiwari, 1998).
Argument: an argument is a reason or reasons offered for or against a proposal or proposition.
The term refers to a discussion in which there is disagreement, and suggests the use of reasoning
and evidence to support or refute a point. In the critical sense, argument is conducted in a spirit of
good will, openness to alternative perspectives, and truth-seeking (Paul, 1993). Students will be
asked at varying times to generate, support, and evaluate arguments.
Critical thinking: the consensus definition developed by 46 experts from various disciplines who
participated in the research project resulting in Critical thinking: a statement of expert consensus for
purposes of educational assessment and instruction. Research findings and recommendations
(Facione, 1990) was accepted for use in this study. This report is often referred to simply as the
“Delphi Report.” The Delphi experts defined critical thinking as “purposeful, self-regulatory
judgment which results in interpretation, analysis, evaluation, and inference, as well as
explanation of the evidential, conceptual, methodological, criteriological, or contextual
considerations upon which that judgment is based.” Critical thinking is thus a complex of skills and
dispositions.
Critical thinking dispositions: the potential, natural tendencies, or personal inclinations to
demonstrate critical thinking skills. Richard Paul's model (Foundation for Critical Thinking,
1996), which was used as the treatment in this study, includes the following traits of a critical
thinker: independent thinking, intellectual empathy, intellectual humility, intellectual courage,
intellectual integrity, intellectual perseverance, intellectual curiosity, intellectual civility,
intellectual responsibility, and faith in reason. The seven critical thinking dispositions tested on
the California Critical Thinking Dispositions Inventory (CCTDI), one of the instruments used
in this study, are truth-seeking, open-mindedness, analyticity, systematicity, self-confidence,
inquisitiveness, and cognitive maturity (Facione & Facione, 1992). Considerable overlap exists
between these two lists despite the difference in terminology. The CCTDI, however, makes no claim
to test for all critical thinking dispositions.
Critical thinking standards: Paul, whose model for critical thinking was used in this study,
insists that there are universal standards or criteria for critical thinking by which all attempts to
think critically should be measured. These include Clarity, Accuracy, Precision, Relevance,
Consistency, Depth, and Breadth.
Explicit instruction: unlike the implicit model, explicit instruction often involves 'some sort of rule
being thought about during the learning process' (Dekeyser, 1995). In other words, learners are
encouraged to develop metalinguistic awareness of the rule, which can be achieved deductively
(i.e., by providing the learners with a grammatical description of the rule) or inductively (i.e., by
assisting learners to discover the rule by themselves from the data provided). Explicit instruction,
therefore, necessarily constitutes direct intervention (Ellis, 2005). Instruction as direct intervention,
according to Ellis, involves the pre-emptive specification of what it is that the learners are supposed
to learn and, typically, draws on a structural syllabus (p. 173).
1.6. Scope of the Study
This study was delimited to first-year undergraduate academic writing students of Addis Ababa
University. The reason for delimiting this study to first-year undergraduate students was mainly
that these students take the course 'Writing for Academic Purposes' as soon as
they join the university, with the assumption that it will help them accomplish the academic
writing tasks in their major-area courses and minimize the writing problems they may
face in the university years ahead (2nd, 3rd, and so forth). In many ways this stage of college
is the beginning of a whole new world, in that not only are students expected to do more in their
courses and writing, but they are also expected to work at a higher level (i.e., to write more analytically, to think
more conceptually, and to read more critically than ever before). In this context, writing has also
been used as a strategy to improve conceptual learning, and it may provide opportunities for students
to think through arguments and use higher-order thinking skills to respond to complex problems
(Marzano, 1991).
1.7. Limitations of the Study
Limitations to this study exist as well. Some were associated with sampling:
the sample in this study was selected purposively from one university, which may not represent
students in other universities in Ethiopia; however, considering that the students
in this university may come from anywhere in Ethiopia, the representativeness of the
sample may be judged more favourably. Another potential limitation of this study was the knowledge and
skill of this researcher in comprehensively locating all possible research on this topic. While every
effort has been made to explore the topic as thoroughly as possible, it is probable that the
researcher was unable to examine every single piece of research on it. The different kinds of
validity normally applied in research are all important; which ones to emphasize in a
thesis, however, depends on the characteristics of the thesis. For example, a thesis with
strong involvement with theory needs construct validity more than the others, whereas a thesis
with strong involvement with people may need content validity more than the others (Tiwari,
1998). I placed emphasis on face validity (a component of content validity) and construct validity
in my study because of this context: I wanted to ensure that the study was judged suitable by
experts, reflected the attitudes of the participants (trainees), and was based on verification of the stated theory.
However, this is clearly a limitation, because there is no guarantee that other experts would
necessarily judge the tests in the same way, or that my tests (questions) fully represent the
domain of attitudes toward the training programme.
CHAPTER II
Review of Related Literature
2.0 Introduction
This particular study embraced several important areas of educational inquiry. Many citations and
research reports have been reviewed in this document. This chapter reviews three main areas of
literature that are relevant to this study. The first section presents critical thinking concepts and all
aspects of critical thinking. Literature related to critical thinking assessments and approaches
comprises the second section of this chapter. The final section examines academic
writing skills and their relationship to critical thinking.
2.1. Critical Thinking
2.1.1. Defining Critical Thinking
The concept of critical thinking can be expressed in a variety of definitions, depending on one's
purpose (though, as with every concept, its essence is always the same) (Paul & Elder, 2007).
According to Reed (1998), a review of literature in the field of critical thinking revealed a general
lack of consensus on how critical thinking is best defined, on what critical thinking skills can and
should be taught, and on determining the most appropriate framework for this teaching. As a
whole, educational reformers have not even agreed on the terminology. While some scholars use
'critical thinking' and 'higher-order thinking' interchangeably (for example, Halpern, 1993),
others make a sharp distinction (Facione, 1990). The relationships among “critical thinking,”
“higher-order thinking,” “thinking skills,” and other terms such as “informal logic,” “informal
reasoning,” “problem-solving,” “argumentation,” “critical reflection,” “reflective thinking,”
“reflective judgment,” and “metacognition” have further complicated the issue. Other areas of
disagreement and concern include (a) the extent to which critical thinking is subject-specific,
(b) differences between expert and novice thinking in a discipline and the extent to which novices
can learn to think more like experts, (c) difficulties in separating higher-order and lower-order
thinking skills for instructional purposes, and (d) whether critical thinking should be considered a
process or a set of skills (Beyer, 1985; Facione, 1984; R. H. Johnson, 1996; Perkins, Farady, and
Bushey, 1991; Resnick, 1987). While a number of scholars have attempted to impose order on
this “conceptual swamp” (Cuban, 1984, p.686), no one has yet come up with a definition or
theory that is accepted as definitive (Beyer, 1985; Ennis, 1987; Facione, 1990; Lewis and Smith,
1993; Marzano et al., 1988; Quellmalz, 1987).
One of the major stumbling blocks to this consensus, according to Reed (1998), has rested in the
grounding of various theories and models in two distinct disciplines: philosophy and psychology.
Philosophers have tended to focus on the nature and quality of the products of critical thinking,
for example, analysis of arguments. Psychologists, on the other hand, have concentrated on the
process of cognition, the components and operations used to address academic and practical
problems. Further, cognitive and developmental psychology have been based in empirical
research, while philosophy has relied on logical reasoning to reach conclusions. While most
theorists have continued to base their theories and definitions of critical thinking or higher order
reasoning in one discipline or the other, some educators have noted the importance of drawing on
both philosophy and psychology to develop a rigorous and encompassing theory of critical
thinking and how to teach it (Kuhn, 1992; Kurfiss, 1988; Marzano et al., 1988; Quellmalz, 1987;
Weinstein, 1995).
The literature on critical thinking has its roots in two primary academic disciplines,
philosophy and psychology (Lewis & Smith, 1993), but Sternberg (1986) has also noted a third
critical thinking strand within the field of education. These separate academic strands have
developed different approaches to defining critical thinking that reflect their respective concerns
(Lai, 2005). Thus, each of these three approaches is explored more fully below, and the definition(s)
most useful for the purpose of this study have been adopted.
2.1.1.1 The Philosophical Approach
As was mentioned in chapter one of this study, critical thinking has been associated with
philosophy since the time of Socrates. Its centrality in the current educational reform movement
has been closely connected with the use of informal logic as a separate specialization within the
discipline of philosophy since the early 1970s. Informal logic is a branch of logic that concerns
itself with interpretation, evaluation, and construction of arguments and argumentation used in
natural language; informal logicians have tended to view critical thinking as a broader term that
includes and draws upon the findings of informal logic but also benefits from other forms of logic
as well as from competencies outside of the field (R.H. Johnson, 1996). Informal logic has
contributed a rigorous theoretical foundation for critical thinking but one that is somewhat
narrowly focused on reasoning and argumentation.
The writings of Socrates, Plato, Aristotle, and, more recently, Matthew Lipman and Richard Paul
exemplify the philosophical approach. This approach focuses on the hypothetical critical thinker,
enumerating the qualities and characteristics of this person rather than the behaviors or actions the
critical thinker can perform (Lewis and Smith, 1993; Thayer-Bacon, 2000). Sternberg (1986) has
noted that this school of thought approaches the critical thinker as an ideal type, focusing on what
people are capable of doing under the best of circumstances. Accordingly, Richard Paul (1992)
discusses critical thinking in the context of “perfections of thought” (p.9). This preoccupation
with the ideal critical thinker is evident in the American Philosophical Association's consensus
portrait of the ideal critical thinker as someone who is inquisitive in nature, open-minded,
flexible, fair-minded, has a desire to be well-informed, understands diverse viewpoints, and is
willing both to suspend judgment and to consider other perspectives (Facione, 1990).
Those working within the philosophical tradition also emphasize qualities or standards of thought.
For example, Bailin (2002) defines critical thinking as thinking of a particular quality: essentially
good thinking that meets specified criteria or standards of adequacy and accuracy. Further, the
philosophical approach has traditionally focused on the application of formal rules of logic (Lewis
and Smith, 1993; Sternberg, 1986). One limitation of this approach to defining critical thinking is
that it does not always correspond to reality (Sternberg, 1986). By emphasizing the ideal critical
thinker and what people have the capacity to do, this approach may have less to contribute to
discussions about how people actually think.
Definitions of critical thinking emerging from the philosophical tradition include the following; critical thinking is:
• “the propensity and skill to engage in an activity with reflective skepticism” (McPeck, 1981, p. 8);
• “reflective and reasonable thinking that is focused on deciding what to believe or do” (Ennis, 1985, p. 45);
• “skillful, responsible thinking that facilitates good judgment because it 1) relies upon criteria, 2) is self-correcting, and 3) is sensitive to context” (Lipman, 1988, p. 39);
• “purposeful, self-regulatory judgment which results in interpretation, analysis, evaluation and inference, as well as explanation of the evidential, conceptual, methodological, criteriological, or contextual considerations upon which that judgment is based” (Facione, 1990, p. 3);
• “disciplined, self-directed thinking that exemplifies the perfections of thinking appropriate to a particular mode or domain of thought” (Paul, 1992, p. 9);
• thinking that is goal-directed and purposive, “thinking aimed at forming a judgment,” where the thinking itself meets the standards of adequacy and accuracy (Bailin et al., 1999b, p. 287); and
• “judging in a reflective way what to do or what to believe” (Facione, 2000, p. 61).
Richard Paul (1993), a philosopher whose work has been widely cited by scholars using both
philosophical and cognitive approaches to critical thinking, has insisted that critical thinking can
be defined in a number of different ways that should not be seen as mutually exclusive. Among
his various definitions of critical thinking are “thinking about your thinking while you are
thinking to make your thinking better” (p.91), and a “unique kind of purposeful thinking in which
the thinker systematically and habitually imposes criteria and intellectual standards upon the
thinking, taking charge of the construction of thinking, guiding the construction of the thinking
according to the standards, assessing the effectiveness of the thinking according to the purpose, the
criteria, and the standards” (p.21).
Like many other philosophers, Paul has argued that critical thinking requires an integration of the
cognitive and affective domains. Content in any discipline should be viewed and taught as a mode
of thinking (i.e., history as historical thinking, biology as biological thinking, etc.), and his model
for critical thinking about a domain or a problem includes cognitive elements of reasoning,
normative standards, and affective dispositions (see Figure 1, Foundation for Critical Thinking,
1996). It consists of reasoning about a field of study, issue, document, problem, etc. according to
eight “elements”: purpose, question, information, concepts, assumptions, points of view,
inferences, and implications. Further, Paul contends that the thinker must be guided by universal
intellectual standards (e.g., clarity, precision, accuracy, relevance) regardless of the domain or
issue under consideration, and by appropriate dispositions or intellectual virtues (e.g., empathy,
humility, integrity, perseverance, fairness) that aid in overcoming the biases and unfounded
assumptions people bring to a problem (Paul, 1993).
Paul's model also advocates teaching students to assess their own thinking, whether expressed in
reading, writing, listening, or speaking, for someone incapable of assessing his own thinking
cannot be considered a critical thinker. Socratic discussions provide an important component in
encouraging students to examine their own background logic, allowing the intellectual give and
take, and supporting interdisciplinary thinking.
Resnick (1987) has summarized the nature of the philosophical contribution to thinking skills as
promoting disciplined thinking: a means of guarding humans against their natural tendencies
toward ego- or ethnocentric thinking, toward accepting fallacies, and toward drawing
inappropriate conclusions because doing so is less troublesome than the work involved in thinking
through alternatives.
2.1.1.2 The Cognitive Psychological Approach
The cognitive psychological approach contrasts with the philosophical perspective in two ways.
First, cognitive psychologists, particularly those immersed in the behaviorist tradition and the
experimental research paradigm, tend to focus on how people actually think versus how they
could or should think under ideal conditions (Sternberg, 1986). Second, rather than defining
critical thinking by pointing to characteristics of the ideal critical thinker or enumerating criteria
or standards of “good” thought, those working in cognitive psychology tend to define critical
thinking by the types of actions or behaviors critical thinkers can do. Typically, this approach to
defining critical thinking includes a list of skills or procedures performed by critical thinkers
(Lewis & Smith, 1993).
In contrast to philosophers, Reed (1998) argued, psychologists have also drawn their ideas about
critical thinking largely from research in cognitive and developmental psychology and theories of
intelligence (Bransford, Sherwood, and Sturdevant, 1987; Halpern, 1996; Sternberg, 1987).
Cognitive and developmental psychologists have been more likely to connect critical thinking
with problem solving than philosophers have been, considering critical thinking and problem
solving as equivalent terms or one as a subset of the other. Halpern (1996), for example, has
defined critical thinking as “thinking that is purposeful, reasoned, and goal directed. It is the kind
of thinking involved in solving problems, formulating inferences, calculating likelihoods, and
making decisions” (p.5). While Halpern does use the term 'critical thinking', most cognitive-based
theorists have preferred to use “thinking skills” (or, more narrowly, higher-order thinking skills)
rather than critical thinking as a generic term for the movement (Lewis and Smith, 1993;
Sternberg, 1987). In general, psychologists have researched and emphasized the skills involved in
thinking critically, often ignoring dispositions (the inclinations, sensitivities, and values needed to be
a good critical thinker) and standards (criteria for evaluating thinking). In spite of that general
tendency, in recent years several noted psychologists have begun focusing on the importance of
students' dispositions and have emphasized them in their models of critical thinking (Halpern,
1998; Perkins, Jay, and Tishman, 1993).
Definitions of critical thinking that have emerged from the cognitive psychological approach include:
• “the mental processes, strategies, and representations people use to solve problems, make decisions, and learn new concepts” (Sternberg, 1986, p. 3);
• “the use of those cognitive skills or strategies that increase the probability of a desirable outcome” (Halpern, 1998, p. 450); and
• “seeing both sides of an issue, being open to new evidence that disconfirms your ideas, reasoning dispassionately, demanding that claims be backed by evidence, deducing and inferring conclusions from available facts, solving problems, and so forth” (Willingham, 2007, p. 8).
2.1.1.3 The Educational Approach
Finally, those working in the field of education have also participated in discussions about critical
thinking. Benjamin Bloom and his associates are included in this category. Their taxonomy for
information processing skills (1956) is one of the most widely cited sources for educational
practitioners when it comes to teaching and assessing higher-order thinking skills. Bloom's
Taxonomy is hierarchical, with “comprehension” at the bottom and “evaluation” at the top. The
three highest levels (analysis, synthesis, and evaluation) are frequently said to represent critical
thinking (Kennedy et al., 1991).
The benefit of the educational approach is that it is based on years of classroom experience and
observations of student learning, unlike both the philosophical and the psychological traditions
(Sternberg, 1986). However, some have noted that the educational approach is limited in its
vagueness. Concepts within the taxonomy lack the clarity necessary to guide instruction and
assessment in a useful way (Ennis, 1985; Sternberg, 1986). Furthermore, the frameworks
developed in education have not been tested as vigorously as those developed within either
philosophy or psychology (Sternberg, 1986).
2.1.1.4 Attempts at Consensus: the APA Delphi Study Definition
The review of related literature indicates that lists of skills and dispositions drawn up by various
philosophers and psychologists above have reflected considerable overlap (cf. Ennis, 1987;
Facione, 1990; Halpern, 1998; B.E. Johnson, 1994; Perkins, Jay and Tishman, 1993; Quellmalz,
1987), and several recent attempts to synthesize contributions of psychology and philosophy to
critical thinking have appeared in the published literature (Facione, 1984; Lewis and Smith, 1993;
B. E. Johnson, 1994). Paul (1993), for example, has called for integrating insights of philosophers
and psychologists, and other theorists and researchers in a comprehensive theory of critical
thinking. He and his colleague Linda Elder, an educational psychologist, have recently introduced
a stage theory of critical thinking development that draws on both developmental psychology and
philosophical approaches to critical thinking (Paul and Elder, 1997).
Probably the best known broad-based systematic inquiry into the state of critical thinking was set
in motion by the American Philosophical Association in an attempt to achieve a consensus of
opinions by a panel of experts in critical thinking for the purposes of educational instruction and
assessment (Facione, 1990). Thus, forty-six experts, drawn from various disciplines, participated
in the multi-year qualitative research project. About half (52%) of the participants were
philosophers, and the rest were affiliated with education (20%), the social sciences including
psychology (20%), and the physical sciences (6%). The report resulting from this investigation is
commonly known in the critical thinking literature as the Delphi Report.
The Delphi Report, according to Facione (1990:13), identified critical thinking as “one among a
family of closely related forms of higher-order thinking along with, for example, problem solving,
decision making, and creative thinking”. Facione, the organizing participant, has pointed out that these terms overlap conceptually in complex ways, and that the relationships among them have yet to be satisfactorily examined. The experts' consensus statement, which includes a range of definitions of critical thinking, its importance, and its contested nature, reads as follows:
“We understand critical thinking to be purposeful, self-regulatory judgment which results in interpretation, analysis, evaluation, and inference, as well as explanation of the evidential, conceptual, methodological, criteriological, or contextual considerations upon which that judgment is based. Critical thinking is essential as a tool of inquiry. As such, critical thinking is a liberating force in education and a powerful resource in one's personal and civic life. While not synonymous with good thinking, critical thinking is a pervasive and self-rectifying human phenomenon. The ideal critical thinker is habitually inquisitive, well-informed, trustful of reason, open-minded, flexible, fair-minded in evaluation, honest in facing personal biases, prudent in making judgments, willing to reconsider, clear about issues, orderly in complex matters, diligent in seeking relevant information, reasonable in the selection of criteria, focused in inquiry, and persistent in seeking results which are as precise as the subject and the circumstances of inquiry permit. Thus, educating good critical thinkers means working toward this ideal. It combines developing critical thinking skills with nurturing those dispositions which consistently yield useful insights and which are the basis of a rational and democratic society” (Facione, 1990, p. 14).
Moreover, this statement, according to Facione (1990), includes skills in both cognitive and affective domains. Core cognitive skills (not including sub-skills) are interpretation, analysis, evaluation, inference, explanation, and self-regulation. Affective dispositions are included in the statement above and are discussed extensively in the report. Thus, the Delphi experts were able to reach consensus on a broadly inclusive definition of critical thinking that included both cognitive skills and affective dispositions, but they remained deeply divided on the issue of whether or not critical thinking has a normative dimension, as Paul has insisted in his analysis (Paul, 1993).
A more recent statement by Michael Scriven and Richard Paul (2004), presented to the National Council for Excellence in Critical Thinking, an organization promoting critical thinking in the US, defines critical thinking as the:
“intellectually disciplined process of actively and skillfully conceptualizing, applying, analyzing, synthesizing, and/or evaluating information gathered from, or generated by, observation, experience, reflection, reasoning, or communication, as a guide to belief and action. In its exemplary form, it is based on universal intellectual values that transcend subject matter divisions: clarity, accuracy, precision, consistency, relevance, sound evidence, good reasons, depth, breadth, and fairness. It entails the examination of those structures or elements of thought implicit in all reasoning: purpose, problem, or question-at-issue, assumptions, concepts, empirical grounding; reasoning leading to conclusions; implications and consequences; objections from alternative viewpoints; and frame of reference”.
Like the Delphi experts, many other scholars have also viewed higher order thinking as an
umbrella term that includes critical thinking, problem solving, and decision making. While related
to and sharing overlapping skills with problem solving, critical thinking focuses on reasoning,
argumentation, and judgment about ill-structured problems. Critical thinking includes skills of
interpretation, analysis, evaluation, inference, explanation, and self-regulation. It also includes
affective dispositions (Facione and American Philosophical Association, 1990).
2.1.1.5 Definitions of Critical Thinking Adopted for This Study
Critical thinking as conceptualized in the Delphi Report (the Delphi Consensus) on critical
thinking (American Philosophical Association, 1990) with the addition of the intellectual
standards recognized by Paul (Foundation for Critical Thinking, 1996), and Paul and Scriven
(2004) has been chosen for this study. The adoption of the Delphi definition is for a number of
reasons:
First and foremost, the Delphi definition, which is referred to as the Delphi Report, was produced by a group of 46 leading theorists, philosophers, psychologists, educators, and critical thinking assessment specialists from a variety of academic and business disciplines for the purposes of educational instruction and assessment (Facione and American Philosophical Association, 1990). The Delphi panel's consensus describes critical thinking as a “process of purposeful, self-regulatory judgment that drives problem-solving and decision making”. This definition, according to Facione (1990), implies that critical thinking is an intentional, self-regulated process that provides a mechanism for solving problems and making decisions based on reasoning and logic, which is particularly useful when dealing with issues of national and global significance.
Another reason for adopting the Delphi definition for this study is that it is broad enough to encompass the key characteristics considered to be the essential features of critical thinking, namely, analysis (the ability to break a concept or idea into component pieces in order to understand its structure and inherent relationships), inference (the skills used to arrive at a conclusion by reconciling what is known with what is unknown), and evaluation (the ability to weigh and consider evidence and make reasoned judgments within a given context), as well as other critical thinking skills such as interpretation, explanation, synthesis, self-regulation, problem-solving, decision making, reflective skepticism, deductive/inductive reasoning, and dialectical thinking (Facione and American Philosophical Association, 1990).
A third reason for adopting the Delphi definition is that, unlike many other definitions of critical thinking, it can be tested (measured) directly using instruments developed from the Delphi Report: for example, commercially available standardized tests of general critical thinking; researcher- or instructor-designed assessments that attempt to capture aspects of critical thinking more directly related to the purposes of the research project or subject of instruction; and tasks in which students are taught to assess their own thinking.
Finally, in addition to the Delphi definition, the definition by Paul (Foundation for Critical Thinking, 1996), and Paul and Scriven (2004) has been adopted for use in this study for the reason that the thinker must be guided by Universal Intellectual Standards (e.g., clarity, precision, accuracy, relevance, depth, breadth, logic), which the learner uses to evaluate or assess his/her thinking (see the definition by Scriven and Paul, 2004, above).
2.2. Can Critical Thinking be Taught?
For those who reckon that critical thinking cannot be taught, Young (1980) suggests that they
may have a misconception that no attempts to foster critical thinking can compensate for the
negative effects of in-born ability and earlier schooling. They may also believe that thinking is simply the result of mastery of content and cannot be improved through learning how to think.
On the contrary, there are those who believe that critical thinking can be taught. The Informal
Logic and Critical Thinking Movement is such an example (Tiwari, 1998). With an objective to
improve reasoning and critical thinking skills through direct teaching, the Movement is
committed to the vision that critical thinking is teachable (Fisher, 1991). Recent research and
instructional development also suggest that it is possible to teach students to think critically
through purposeful curricular designs and specific teaching-learning strategies (Ennis, 1996; Paul,
Binker, Jensen & Kreklau, 1990; Halpern, 1989; Brookfield, 1987; Meyers, 1986; Arons, 1985).
There are reports of major gains in critical thinking abilities for those undertaking such programmes as the ADAPT program at the University of Nebraska and Project SOAR at Xavier University of Louisiana (Meyers, 1986). On the other hand, there are doubts about the effectiveness of education in fostering students' critical thinking (McMillan, 1987; Norris, 1985). In other studies, however, a significant increase in critical thinking scores between the entry and completion points of the programme was reported (Gross, Takazawa, & Rose, 1987; Berger, 1984).
Notwithstanding the studies cited above, research evidence of the impact of education on the
development of critical thinking is sparse as noted by Sander (1992), and Miller and Malcolm
(1990). Even in the studies where an apparent increase in critical thinking was reported, no details
were given as to what might have contributed to the improvement. Assuming that critical thinking
can be taught, there still remains the question: if so, how?
More recently, however, many critical thinking researchers maintain that critical thinking skills
and abilities can be taught. Halpern (1998) offers evidence of two instructional programs aimed at
improving the critical thinking skills and abilities of college students. In one study, students who
were taught general problem-solving skills improved on Piagetian-inspired measures of cognitive
development. In the other study, college students instructed in a specific type of problem-solving
strategy produced mental math representations that were more like those of experts than novices.
In their review of the literature, Kennedy et al. (1991) concluded that instructional interventions aimed at improving students' critical thinking skills have generally shown positive results. In a meta-analysis of 117 empirical studies examining the impact of instructional interventions on students' critical thinking skills and dispositions, Abrami et al. (2008) found that these interventions, in general, have a positive impact, with a mean effect size of 0.34. However, the distribution of effect sizes was highly heterogeneous, with effect sizes varying dramatically by type of intervention and sample characteristics. For example, effect sizes for students in K-12 settings were higher than those observed among undergraduates.
2.3 The Importance of Learning to Think Critically
At all ages of life, critical thinking skills and habits of mind (dispositions) are needed by each of us when solving problems and making decisions that affect ourselves, our families, our country and our world. Learning demands strength in critical thinking because learning requires the interpretation and integration of new knowledge and its practical and appropriate application when encountering novel situations, problem conditions and innovative opportunities (www.insightassessment.com).
In the Robert Wood Johnson Foundation's July 2009 Jobs to Careers publication, Randall Wilson wrote: “Early assessment of critical thinking maximizes workforce efficiency and increases the potential for learning and educational effectiveness at all levels.” The truth of this claim is even more apparent today. World culture and an information-intensive everyday life invite us to apply critical thinking to interpret, analyze, evaluate, explain, and draw warranted inferences about what to believe and what to do in a stream of novel and too often time-limited, high-stakes, uncertain situations. Studies have consistently shown that strength in critical thinking correlates with
workplace and academic success, certification and licensure in the most valued professions, and
survival of some of life‟s most difficult challenges.
According to Dewey (1956), the key to seeing the significance of critical thinking in academics is
in understanding the significance of critical thinking in learning. There are two meanings to the learning of content. The first occurs when learners (for the first time) construct in their minds
the basic ideas, principles, and theories that are inherent in content. This is a process of
internalization. The second occurs when learners effectively use those ideas, principles, and
theories as they become relevant in learners‟ lives. This is a process of application. Good teachers
cultivate critical thinking (intellectually engaged thinking) at every stage of learning, including
initial learning. The key is that the teacher who fosters critical thinking fosters reflectiveness in
students by asking questions that stimulate thinking essential to the construction of knowledge.
As emphasized above by Dewey, each discipline adapts its use of critical thinking concepts and principles. The core concepts are always there, but they are embedded in subject-specific content. For students to learn content, intellectual engagement is crucial. Good teachers recognize this and therefore focus on the questions, readings, and activities that stimulate the mind to take ownership of key concepts and principles underlying the subject.
According to the study by Nummela and Rosengren (1986), when the brain's natural tendency to construct meaning from patterns is exploited in teaching, learning in the classroom becomes more like learning in real life. Because the brain creates patterns, the task for teachers is to organize and present materials in a way that allows the brain to create meaningful and relevant connections and to extract the patterns. This type of learning is most easily recognized in the whole, and such approaches seek to connect meanings through the development of problem solving and critical thinking skills (p.5).
Critical thinking is important because it uses parts of the brain that we rarely use. Critical thinking makes us more alert and helps us to solve problems (http://answers.ask.com/society/philosophy/why-is-critical-thinking). By combining imagination and intuition with
reasoning and evaluation, learners achieve perspective, construct and discern relationships, and
improve understanding. They gain confidence in working with data, using the latest information
sources, analyzing arguments, and solving complex problems. This is true if and only if critical thinking is an integral part of students' classroom activity (Powell and Tassoni, 2009). They also
explain that thinking critically and understanding contexts for knowledge in an engaging learning
situation lead to reflection and informed action. Making thoughtful decisions and examining their
consequences enhance personal moral commitment, enrich ethical understanding, and strengthen
civic participation (p.2-3).
Maimon, Peritz, and Yancey (2008) explain that critical thinking is fundamental not only to
college work but also to life in a democratic society. Thinking critically means getting involved,
not necessarily finding fault. Critical thinkers never simply gather information and present it
without question. They inquire about what they see, hear and read, and they employ such skills as recognizing an argument, analyzing and evaluating an argument, and recognizing common logical fallacies (p.28).
There are many positive and useful applications of critical thinking, for example, formulating a workable solution to a complex personal problem, deliberating as a group about what course of action to take, or analyzing the assumptions and the quality of the methods used in scientifically arriving at a reasonable level of confidence about a given hypothesis (Sumner, 1906). According to Sumner (1906), using strong critical thinking, we might evaluate an argument, for example, as worthy of acceptance because it is valid and based on a true premise. Upon reflection, a speaker may be evaluated as a credible source of knowledge on a given topic.
Critical thinking, it is argued, can also occur whenever one judges, decides, or solves a problem; in
general, whenever one must figure out what to believe or what to do, and do so in a reasonable
and reflective way. Reading, writing, speaking, and listening can all be done critically or
uncritically. Critical thinking is crucial to becoming a close reader and substantive writer.
Expressed most generally, critical thinking is a way of taking up the problems of life (Norris and
Ennis, 1989; Sumner, 1906).
Thinking, as Raghunathan (2001) puts it, is the highest mental activity present in man. All human
achievements and progress are simply the products of human thought. The evolution of culture,
art, literature, science and technology are all the results of our thinking. According to
Raghunathan, thought and action are inseparable – they are actually the two sides of the same
coin. All our deliberate action starts from our deliberate thinking. For a man to do something, he should first see it in his mind's eye – he should imagine it and think about it before he can do it. All creations – whether artistic, literary or scientific – first occur in the creator's mind before they are actually given life in the real world (p.1). Thus, thinking is a tool for adapting ourselves to the
physical and social environment.
Raghunathan (2001, p.2) also argues that the benefits of developing thinking ability are manifold. By developing one's thinking skills one can make achievements, become successful, shine in social life, and attain emotional, social and economic maturity. By developing one's thinking abilities it is also possible to transform one's aggressive tendencies, bad temper and other negative tendencies creatively and constructively. He reports that Dr. Edward de Bono found that when school students were taught to think effectively, their ill-temper and aggressive tendencies were reduced significantly. Clinical psychologists have also found that those who have neuroses are poorer thinkers than normal individuals: neurotics scored significantly lower in decision making, problem solving and creative thinking. Interestingly, when neurotics were taught to think effectively, they showed a remarkable reduction in their neurosis.
It is also believed that critical thinking skills correlate directly with fluid intelligence, which enables a person to determine patterns, make connections and solve new problems. When you improve your critical thinking skills, you also improve your fluid intelligence, which in turn helps increase your problem solving skills and deep thinking abilities. All of these skills relate to one part of the brain, and the more you use them, the easier it will be to put your skills to the test (Lerner, 1990; Willis and Nesselroade, 1990). Lau and Chan (2004/2012) have further identified why we study critical thinking as follows:
1. Critical thinking is a domain-general thinking skill. The ability to think clearly and
rationally is important whatever we choose to do. If you work in education, research,
finance, management, or the legal profession, then critical thinking is obviously
important. But critical thinking skills are not restricted to a particular subject area. Being
able to think well and solve problems systematically is an asset for any career.
2. Critical thinking is very important in the new knowledge economy. The global
knowledge economy is driven by information and technology. One has to be able to deal
with changes quickly and effectively. The new economy places increasing demands on
flexible intellectual skills, and the ability to analyze information and integrate diverse
sources of knowledge in solving problems. Good critical thinking promotes such thinking
skills, and is very important in the fast-changing workplace.
3. Critical thinking enhances language and presentation skills. Thinking clearly and
systematically can improve the way we express our ideas. In learning how to analyze the
logical structure of texts, critical thinking also improves comprehension abilities.
4. Critical thinking promotes creativity. To come up with a creative solution to a problem
involves not just having new ideas. It must also be the case that the new ideas being
generated are useful and relevant to the task at hand. Critical thinking plays a crucial role
in evaluating new ideas, selecting the best ones and modifying them if necessary.
5. Critical thinking is crucial for self-reflection. In order to live a meaningful life and to
structure our lives accordingly, we need to justify and reflect on our values and decisions.
Critical thinking provides the tools for this process of self-evaluation.
Although most people would agree that critical thinking is an important thinking skill, most
people also do not know how to improve their thinking. This is because according to Lau and
Chan (2004), critical thinking is a meta-thinking skill. It requires careful reflection on the good
principles of reasoning and making a conscious effort to internalize them and apply them in daily
life. This is notoriously hard to do and often requires a long period of training.
2.4. Theoretical Perspective Underpinning the Teaching of Critical
Thinking
A constructivist approach to learning has been advocated as a key theoretical perspective
underpinning the teaching and learning of critical thinking in the present study. Constructivism is
a philosophy of education characterized by student ownership of the learning process. Learning to think critically is best implemented through constructivism (Leach, 2011). Brooks and Brooks
(1993) viewed constructivism as a philosophy that informs critical thinking. Constructivist
learning theory sees knowledge as constructed from the perceptions, experiences, and mental
representations of the learner. Meaning is created by the individual and is dependent on the
individual‟s previous and current knowledge structure (Wadsworth, 1971). Learning is a personal
experience built upon a scaffold of experience and changes as experience is acquired. Experience
enhances knowledge and deep understanding of content (Healy, 1990). Positive interaction and
personal relationships within the classroom create an environment conducive to higher order
thinking (Healy, 1990). Critical thinking requires students to be actively engaged with not only
the content presented but also with others who are also involved. Instead of acceptance of new
material at face value, critical thinking requires introspection, reflection, discussion, and
interaction.
In spite of the need to promote critical thinking skills in all realms of education, prevailing teaching methods tend to elicit responses at the lower levels of Bloom's Taxonomy (Elder and Paul, 2009). Rote memorization is common in most classrooms and is the primary mode of material acquisition. This passive activity is at a lower level of learning acquisition according to Brookfield (2006). Conversely, constructivist classrooms tend to be more stimulating, challenging, engaging, and
interesting. Marzano (2007) stated that constructivist teachers are not passive bystanders. They
provide discussion, illumination, and challenge and serve as facilitators who encourage learners to
question knowledge. Teachers must allow students to put together or construct knowledge
themselves (Brooks and Brooks, 1993).
The constructivist teacher is not seen as one who imparts knowledge but rather as one who
orchestrates an environment that is conducive to individual ownership of knowledge on a personal
level. Constructivist teachers look not for what students can repeat verbatim but what they can
generate, demonstrate, exhibit, and construct (Brooks and Brooks, 1993).
Content knowledge should be taught through the integration of critical thinking, or as Jenkins
(2009) stated, the process should teach students to think. Engaging the brain through critical
thinking and problem solving is much more beneficial than memorization of isolated facts
(Matheny, 2009). As Jensen (2005) related, the mature brain is wired for problem solving and
higher order thinking.
The need to teach content is a significant impediment to the teaching of critical thinking skills.
Additional barriers to the implementation of critical thinking include the size of classrooms, the
amount of time in class, and teacher attitude (Slavin, 2009). The traditional educational
philosophy of the teacher serving as the deliverer of information and the student as a passive
receiver of knowledge acutely impedes the development of critical thinking skills (Marzano,
2007). This philosophy of teaching is best identified as essentialism. Essentialism has replaced
progressivism, the philosophy of education espoused by John Dewey in the early part of the 20th
century.
Progressivism is identified as a philosophy of education that promotes critical thinking. In the
progressivist classroom students are encouraged to interact with each other and develop social
virtues such as cooperation and tolerance for different points of view (Sadker and Sadker, 2003).
Teachers in a progressivist classroom integrate the content of different subjects and plan lessons
that arouse curiosity and higher levels of knowledge.
Essentialist teachers and administrators decide what is important for students to learn and place
little emphasis on student interest (Sadker and Sadker, 2003). Essentialist teachers focus heavily
on achievement test scores as a means of evaluating progress (Sadker & Sadker, 2003).
2.5. The Need for Explicit Instruction in Critical Thinking
Until very recently, it was generally assumed that students who attended college would develop
critical thinking skills by attending classes, by listening to lectures and participating in class
discussions, and by taking tests and completing regular course assignments. Several studies,
however, have indicated that improving students‟ thinking requires more explicit teaching of
critical thinking skills (Bangert-Drowns and Bankert, 1990; Halpern, 1998; Keeley, Browns, &
Kreutzer, 1982; Perkins, 1989; Quellmalz, 1989; Underbakke, Borg, and Peterson, 1993). Yet
research findings on the most effective instructional methods for improving students‟ critical
thinking abilities have been inconclusive. McMillan (1987) reviewed 27 studies that investigated
the effect of various courses and programs on critical thinking abilities among college students,
and he found that while results have failed to support the use of specific instructional course
conditions to enhance critical thinking, they did support the conclusion that college attendance
improves critical thinking. McMillan has cautioned against generalizing these findings to all
methods or courses, citing weak research designs, a lack of good instrumentation appropriate to
the interventions being evaluated, and lack of a common definition and theory of critical thinking.
Halpern (1993) has suggested that available assessment instruments may contribute to the problem of determining the effect of various models of critical thinking instruction; she has argued that
assessment instruments must be made more sensitive in order to measure subtle increases in
critical thinking skills and dispositions. Clearly, more research is needed to determine which
educational experiences yield the greatest gains in critical thinking.
According to Mendelman (2007), however, in order to better prepare our students for the
challenges they will face, teachers need to explicitly teach critical thinking strategies,
equipping young people with twenty-first century skills. Users of cognitive strategies appeared to
be confident, positive, highly aroused or energized, strongly motivated, and yet comfortable with
language learning (Ehrman and Oxford, 1995). They further explain that the reason cognitive strategies seem to be so important is that they include two activity types: practice and rehearsal on the one hand, and mental techniques that involve hypothesis formation and personalizing on the other. The latter set involves a kind of 'deep processing' that forms mental links between the new and the old material and makes the new material a solid part of the learner's own personal repertoire. Unlike the other strategy categories, which do not involve this kind of intellectual depth, cognitive strategies go deep into the learner.
There is also much evidence that confirms the need for explicit instruction in critical thinking. Reed (1998), for example, asserted on the basis of her findings that if we want our students to think critically, we must explicitly teach them how to do so. Her study further explains that training in critical
thinking should be both direct and intense. Similarly, to improve as critical thinkers, students
must be taught the components of Paul‟s model explicitly and thoroughly, and they should be
provided with frequent practice in using the model. Paul‟s model, she added, needs to be deeply
integrated into course content, not just introduced or used a few times during a semester. Implicit modeling of critical thinking combined with a few scattered lessons providing critical thinking practice is not likely to be effective for most students (pp.161-62). The most essential
implication of this study may be the importance of recognizing the need for explicit and intense
training for critical thinking.
In their review of empirical studies on the effects of instructional interventions on students' critical thinking skills and dispositions, Abrami et al. (2008) found that educators should approach critical thinking instruction both by integrating critical thinking into regular academic content and by teaching
general critical thinking skills as a stand-alone component. This finding reinforces the importance
of providing explicit instruction in critical thinking rather than simply viewing critical thinking as
an implicit goal of a course. The authors also found that interventions in which educators received
special training in teaching critical thinking had the largest effect-sizes, compared to studies in
which course curricula were simply aligned to critical thinking standards or critical thinking was
simply included as an instructional objective. Thus, successful interventions may require
professional development for teachers specifically focused on teaching critical thinking (Abrami
et al., 2008).
If students are not exposed to, and do not master, the ability to think insightfully and critically, they will be unable to compete in a modern, global economy. According to Hove (2011), in order
to better prepare our students for the challenges they will face, universities need to explicitly
teach critical thinking strategies, equipping young people with twenty-first century skills. The
English classroom presents a natural setting to practice critical thinking, as it is customary for
English instructors to work with students on analyzing, synthesizing, and evaluating all types of
text for word choice, point of view, tone, and structure to develop the skills of critical thinking
“that can have clear relevance to students' lives” (Pescatore, 2007, pp. 336-337). According to
Pescatore, a rigorous English curriculum, focused on an explicit, scaffolded approach to teaching
critical thinking skills, will better prepare university students to easily adapt to a rapidly changing
world.
2.6. Approaches to Teaching Critical Thinking
A number of studies and scholars in strategy teaching and learning indicate that instruction in
thinking fits within the broader construct of learner strategy instruction (Brown and Palincsar,
1982; Baker and Brown, 1984; Wenden, 1991). According to Wenden (1997), there are three
approaches to instructing learners to use new strategies. One, a separate program apart from
language instruction, e.g. in a self-access center, can be set up. Two, learner strategy instruction
can take place in the language learning classroom but as a separate component. However,
Wenden suggested that the third approach (that this study has attempted to use in addition to the
second), the integration of strategy instruction into regular language instruction, may be the most
effective approach.
The debate about domain specificity has implications for critical thinking instruction. Ennis
(1989, 1992) identifies four instructional approaches that vary in terms of the extent to which
critical thinking skills are taught as a stand-alone course versus integrated into regular instruction.
These are: general, infusion, immersion, and mixed approaches.
2.6.1. The General Approach
The general approach, according to Ennis (1992), entails direct and explicit instruction in critical
thinking skills as a separate course, where critical thinking skills and abilities are emphasized
outside the context of specific subject matter. Typically, some content is involved to contextualize
examples and tasks. However, the content is not related to discipline-specific knowledge, but
tends to be drawn from problems that students are likely to encounter in their daily lives. Van
Gelder (2005) appears to advocate for the general approach to critical thinking instruction.
Drawing from the literature on expertise, Van Gelder argues that students need “deliberate
practice” in exercising critical thinking skills and abilities. This type of practice can only occur
when critical thinking is taught as a separate and explicit part of the curriculum. However,
students must be taught to transfer critical thinking to a variety of contexts by providing them
opportunities to practice applying critical thinking skills in diverse contexts. Halpern (2001, p. 278) argues that instruction in general thinking skills, taught as a “broad-based, cross-disciplinary” course, is the most effective way of teaching critical thinking. In this mode, critical
thinking may be taught in separate courses, separate instructional units, or as a separate thread in
an existing subject matter. For example, an informal logic course may be used as the means to
teach critical thinking abilities and dispositions (Ennis, 1992).
2.6.2. The Infusion Approach
Unlike the general approach, infusion of critical thinking instruction is rooted in subject matter.
Students are encouraged to think critically about the subject while developing a deep and
thoughtful understanding of the subject matter. The general principles of critical thinking are
made explicit to students during the instruction. The infusion approach entails in-depth instruction
in the subject matter plus explicit instruction on general critical thinking principles. This critical
thinking instruction is provided in the context of specific subject matter. Ennis (1989) indicates
that this approach is commonly seen in the “across the curriculum” movements. Proponents of the
infusion approach include Resnick (1987), Swartz (1987), and Glaser (1984).
2.6.3. The Immersion Approach
Somewhat related to the infusion approach is immersion. In immersion instruction, students are
engaged in deep subject-matter instruction. Although critical thinking skills and abilities are part
of the content to be learned, critical thinking instruction is not made explicit. In other words,
critical thinking skills and abilities are not the focus of direct and explicit instruction. Rather,
students are expected to acquire these skills as a natural consequence of engaging with the subject
matter (Ennis, 1989). Proponents of the infusion and immersion approaches appear to include both
Bailin et al. (1999), who vigorously defend the domain specificity of critical thinking, and
Lipman (1988), who views critical thinking skills as being somewhat general but who argues,
nonetheless, that instruction in critical thinking must go hand-in-hand with instruction in basic
skills, such as reading, writing, listening, and speaking. Silva (2008) echoes this viewpoint,
maintaining that knowledge and thinking have to be taught simultaneously. Likewise, Case
(2005) argues that critical thinking is a lens through which to teach the content and skills
embedded in the curriculum; and Pithers and Soden (2000) reject the view that critical thinking
could be taught as a separate subject. Rather, critical thinking should be viewed as a way of
teaching and learning in any domain.
2.6.4. The Mixed Approach
Finally, the mixed approach combines elements of both the general and subject-specific
approaches. Teachers pair stand-alone instruction in general critical thinking principles with
application of critical thinking skills in the context of specific subject matter. Explicit instruction
in critical thinking skills can be incorporated into both the general and the specific components
(Ennis, 1989). Facione (1990) appears to advocate for this approach when he notes that critical
thinking can be taught in the context of domain-specific content, or content drawn from “events in
everyday life” (p.10). Paul (1992) recommends basic critical thinking skills courses, as well as
including critical thinking within discipline-specific courses. Kennedy et al. (1991), after
reviewing extant research on the various approaches, conclude that the evidence does not support
the superiority of any particular approach. Accordingly, they recommend using the mixed
approach.
Subject matter knowledge is a necessary condition for thinking critically in the infusion and
immersion approaches but not required at all in the general approach. The involvement (or not) of
the subject matter is also the basis whereby Fisher (1991) distinguishes two different methods
in teaching critical thinking. One is to teach critical thinking using direct methods, that is,
methods designed specifically for the purpose of developing students‟ critical thinking. In this
method, learning to think critically is independent of any subject matter. The other way is to teach
critical thinking indirectly, that is, to develop students‟ critical thinking in the process of learning
a subject. The indirect method takes the view that all reasoning is subject-specific (McPeck,
1990), and the only way to learn to reason well is to master the subject matter. Thus, the content
of the subject determines the appropriateness of the reasoning.
As discussed earlier, McPeck (1981) denies that critical thinking is a universal skill and argues
vehemently against teaching critical thinking in isolation from specific subjects. While some
scholars have also observed that being an expert thinker in one field is no guarantee that he/she
demonstrates the same degree of critical thinking in another field (Carter, 1993; Fisher, 1991;
Meyers, 1986), others have reacted quite strongly to McPeck‟s claim (Quinn, 1994; Ennis, 1992;
Fisher, 1991; Siegel, 1990; Blair, 1988; Furedy, 1985). The common belief shared by these
writers is that the discipline-specific orientation to critical thinking, as advocated by McPeck, is
theoretically implausible: an ad hominem fallacy is a fallacy in any field and there are general
principles of reasoning that would apply to many disciplines. Ennis (1992) further argues that
subject knowledge could even be counter-productive to critical thinking because an expert is so
well-informed about the subject that he/she may stop considering alternatives.
In their meta-analysis of 117 empirical studies on the effects of instructional interventions on
students‟ critical thinking skills and dispositions, Abrami et al. (2008) found that a substantial
amount of the variation in effect sizes across studies was driven by pedagogical grounding and by
type of intervention. In other words, when instructional approach was categorized as general,
immersion, infusion, or mixed, the mixed approach had the largest effect-sizes and the immersion
approach had the smallest. This finding suggests that educators should approach critical thinking
instruction both by integrating critical thinking into regular academic content and by teaching
general critical thinking skills as a stand-alone component. This finding reinforces the
importance of providing explicit instruction in critical thinking rather than simply viewing critical
thinking as an implicit goal of a course. The authors also found that interventions in which
educators received special training in teaching critical thinking also had the largest effect sizes,
compared to studies in which course curricula were simply aligned to critical thinking standards
or critical thinking was simply included as an instructional objective. Thus, successful
interventions may require professional development for teachers specifically focused on teaching
critical thinking (Abrami et al. 2008).
Among studies on the teaching and development of critical thinking skills, Warren, Memory, and Bolinger (2004), for example, argue that an “infusion” approach (i.e., developing critical thinking skills in the context of specific content) is much better than an “isolation” approach (i.e., attempting to develop critical thinking skills in courses on critical thinking itself). Their descriptive study may provide a model for working with history content and history students in other contexts. The study by Warren et al. (2004), however, may have limited application to other courses in other instructional contexts.
The purpose of the present study, however, was to examine whether explicit instruction in critical thinking helps students improve their general critical thinking abilities and academic EFL writing skills. Thus, this study made use of the mixed approach to CT instruction, both by teaching general critical thinking skills and principles as a stand-alone component (explicitly or directly) and by integrating critical thinking into the regular academic writing skills course content. As discussed earlier in this section, the mixed approach to critical thinking instruction combines the elements of both the general and subject-specific approaches. This approach pairs stand-alone instruction in general critical thinking principles with application of critical thinking skills in the context (or content) of specific subject matter. In this approach, students are encouraged to think critically about the general principles of critical thinking skills and to use them explicitly while developing a deep and thoughtful understanding of the subject matter (Academic EFL Writing Skills, in this case).
2.7. Critical Thinking Teaching Techniques/Instructional Strategies
As reviewed by Lai (2011:33), a number of researchers have recommended using particular
strategies to encourage the development of critical thinking skills and abilities. These include
such teaching techniques as explicit instruction, collaborative or cooperative learning, modeling,
and constructivist techniques. For example, many researchers have noted that critical thinking
skills and abilities are unlikely to develop in the absence of explicit instruction (Abrami et al.,
2008; Case, 2005; Facione, 1990; Halpern, 1998; Paul, 1992). Facione points out that this explicit
instruction should also attend to the dispositional or affective component of critical thinking.
Another method recommended by several critical thinking researchers is a collaborative or
cooperative approach to instruction (Abrami et al., 2008; Bailin et al., 1991; Bonk and Smith,
1998; Heyman, 2008; Nelson, 1994; Paul, 1992; Thayer-Bacon, 2000). This recommendation
appears to be rooted in Piagetian and Vygotskian traditions that emphasize the value of social
interactions for promoting cognitive development (as summarized in Dillenbourg et al., 1996).
Piaget touted the instructional value of cognitive conflict for catalyzing growth, typically
achieved by interacting with another person at a higher developmental stage. Along similar lines,
Vygotsky identified the zone of proximal development as the distance between what an individual
can accomplish alone and what he/she can accomplish with the help of a more capable
other (either a peer or an adult). Each of these approaches highlights the potential for cognitive
improvement when students interact with one another (as summarized in Dillenbourg et al.,
1996).
Proponents of collaborative or cooperative learning include Thayer-Bacon (2000), who
emphasizes the importance of students‟ relationships with others in developing critical thinking
skills. Supporters also include Bailin et al. (1999), who argue that critical thinking involves the
ability to respond constructively to others during group discussion, which implies interacting in
pro-social ways by encouraging and respecting the contributions of others. Similarly, Heyman
(2008) indicates that social experiences can shape children‟s reasoning about the credibility of
claims. In their meta-analysis of 117 empirical studies on the effects of instructional interventions
for improving students‟ critical thinking skills and dispositions, Abrami et al. (2008) found a
small but positive and significant effect of collaborative learning approaches on critical thinking.
Nelson (1994) provides some clues as to how collaboration can prompt cognitive development
among college students. According to Nelson, students‟ misconceptions interfere with their ability
to acquire new knowledge, despite appropriate instruction. Collaborations create opportunities for
disagreements and misconceptions to surface and to be corrected. Collaboration also provides a
vehicle for students to attain necessary acculturation to the college learning environment and
helps to make tacit disciplinary expectations more explicit for students.
Nelson (1994) points out that collaboration must be scaffolded, arguing that this scaffolding
process has three stages. First, students must be prepared for collaboration by providing them
with a common background on which to collaborate, such as common assigned readings. Second,
student groups should be provided with questions or analytical frameworks that are more
sophisticated than they would tend to use on their own. Finally, collaborative activities should be
structured by specifying student roles and by creating incentives for all group members to actively
participate. Bonk and Smith (1998) identify a number of classroom activities that build on the
potential for collaboration to enhance learning. These activities include think-pair-share, round-robin discussions, student interviews, roundtables, gallery walks, and “jigsawing.”
“… one distinguishing characteristic of high-achieving college students is that they tend to reflect
on their thought processes during group learning and are aware of the cognitive strategies they
use” (Weinstein and Underwood, 1985). Critical thinking is emphasized in actively exploring
ideas, listening to others, and carefully evaluating alternative points of view. Students learn to
examine their own opinions more analytically and relate their views to those of others,
contributing to their development into a community of concerned thinkers and writers (Chaffee,
McMahon and Stout, 2003). Therefore, developing small-group discussion questions related to
the idea or content being studied (see the course material chapters for the treatment group) to
promote critical thinking through the students‟ writing is an integral part of this study.
In addition to explicit instruction and collaboration, several other strategies have been identified
as helpful in promoting critical thinking. For example, teachers are urged to use constructivist
learning methods, characterized as more student-centered than teacher centered (Bonk and Smith,
1998; Paul, 1992). Constructivist instruction is less structured than traditional instruction,
amplifying students‟ roles in their own learning and de-emphasizing the role of the teacher.
Educators should model critical thinking in their own instruction by making their reasoning
visible to students. This could be accomplished by “thinking aloud” so that students can observe
the teacher using evidence and logic to support arguments and assertions (Facione, 2000; Paul,
1992). Educators are also urged to use concrete examples that will be salient to students to
illustrate abstract concepts like “conflict of interests” (Heyman, 2008; Paul, 1992). For example,
Heyman found that children were more likely to be skeptical of another child‟s claim of illness
when they learned that the child did not want to attend camp that day. Examples that rely upon common experiences are more likely to be intuitively obvious to students. Specific classroom
learning activities believed to promote critical thinking include the creation of graphic organizers,
such as concept maps and argument diagrams (Bonk & Smith, 1998; Van Gelder, 2005); KWL
charts, which require students to identify what they already Know about a topic, What they want
to know, and what they have Learned upon completing instruction; “in a nutshell” writings, which
entail summaries of arguments; exit slips, which identify the most important thing learned and the
areas of needed clarity; problem-based learning, particularly the use of ill-structured problem
contexts; and mock trials (Bonk & Smith, 1998).
The Delphi study (Facione, 1990) also reported several critical thinking teaching techniques found to improve critical thinking either anecdotally or experimentally. These include: 1) teaching students what each skill is, and when and how to use it; 2) modeling appropriate and logical reasoning; 3) justifying why critical thinking is important; and 4) allowing students time to practice each skill, utilizing both oral and written techniques, with evaluation as a crucial component.
Modeling logical reasoning and allowing time for discussion of these concepts in class generally
requires a reduction in course content (McKendree, 2002). As this is a relatively new method of
teaching, student attitudes were shown to improve if course goals and expectations are made
explicitly clear (Chapman, 2001). Another suggestion for modeling and teaching critical thinking
is for the instructor to admit their own biases up front, and encourage students to become aware of
their own viewpoints (Facione, 1996).
Critiques of written articles and opinion pieces are frequently used in courses as critical thinking
assignments that allow students to practice understanding and analyzing others' logic (Dlugos,
2003; Chen and Lin, 2003). Students are asked to lead the course discussion of assigned readings
to help encourage personal responsibility and involvement in the learning process (Ahern-Ridell,
1999). Course journals helped students become aware of their thinking, and group problem
solving may also help students see the myriad of solutions (and consequences of proposed
solutions) available during problem solving (Peters et al., 2002; Wagner, 1999). Peer critiques of
students' own critical thinking were shown to be a successful tool when students were given explicit
instructions and guidelines and the opportunity to practice critiquing each other‟s writing
frequently (McPherson, 1999). Most of these techniques involve writing as the primary form of
communication; Wade (1995) found that writing promotes greater self-reflection and depth of
logic compared to oral communication and may be the best medium for students to express their
critical thinking skills. The question of which communication tool to utilize (written versus oral), an important factor in critical thinking assessment and instruction, was addressed in the course designed for this research by combining written assessments with oral discussion; encouraging oral discussion as well as written assignments can help students practice and develop their critical thinking skills (see Chapter 3: the course designed for this study).
More recent literature and findings on teaching and learning critical thinking strongly argue that
critical thinking can be developed among undergraduate students, especially if critical thinking
instruction and practice is embedded across the curriculum (Halpern, 1998; Pascarella, 1999;
Tsui, 2002; Kelly-Riley et al., 2007; Mesher, 2007). Effective teaching and learning strategies,
therefore, include: (1) analytical writing/re-writing (Tsui, 2002); (2) directed class discussions by
students (Tsui, 2002); (3) practice of retrieval and implementation cues (Halpern, 1998;
Klaczynski, 2001); (4) practice in transfer to other contexts (Halpern, 1998; Klaczynski, 2001;
van Gelder, 2005), and (5) challenges to students' thinking, that is, students need to have their thinking challenged, especially by other students (Jungst et al., 2003; DeRoma et al., 2003), and
perhaps even across cultures (Harrigan and Vincenti, 2004). Moreover, some suggest that
instructors may need to sacrifice course content in order to allow for critical thinking skill practice
(Dickerson, 2005; Valiga, 2003). Warren et al. (2004) also encourage instructors to work to
develop discipline-specific reasoning skills within the context of a given discipline. The wider the
critical thinking emphasis is on a given campus, the more likely it is that these skills will be
developed and be transferable to other contexts (Halpern, 1998; van Gelder, 2005).
“To cultivate the intellect requires developing intellectual skills, tools of mind that enable the
thinker to reason well through any question or issue, to think through complexities and
confusions, to empathize with competing viewpoints and world views. It requires, in short, the
tools of critical thinking” (Paul & Elder, 2009a, p.286). Four useful ways to integrate critical
thinking into the curriculum, according to Hayes and Devitt (2008), are the inclusion of problem solving, asking questions that require critical analysis, evaluating sources, and decision making
(p.66). Bernasconi (2008), for example, challenged his students to see reading as a process; he
encouraged students to read a text more than once and, as they do so, to question “the text to
determine the author‟s argument and the text‟s stylistic choices and structure. Students also learn
annotating, summarizing, and descriptive outlining, skills crucial to making meaning from a text”
(p.17). Mendelman (2007) suggested an image-concept approach in an attempt to transition from the tangible to the intangible; while reading a text, Mendelman asked her students to identify all images and concepts present, and after this is mastered, she challenged her students to move from verbal analysis to written analysis communicating the tangibles and intangibles present in the work (p.301).
Thein, Oldakowski, and Sloan (2010) advocated a “model of inquiry-based English instruction…
designed to help students understand the constructed nature of lived and text world and to critique
the message they forward” (p.24). The intent is to make students more aware of who they are,
how they live, and their impact on the world. Beyer (2008) advised that one of the most effective
ways to teach critical thinking is to “make these components explicit – obvious, specific, clear
and precise. When we make as explicit as possible how and why, step by step, to carry out a skill
efficiently and effectively, we enable our students to become more conscious of how and why
they….actually „do‟ that skill” (p.197). Regardless of the specific approach being used, “when
students engaged in critical evaluation of problems via classroom discussion, their critical
thinking strategies improve” (Hayes & Devitt, 2008, p.66).
Another factor influencing teacher efficacy and the subsequent success of teaching critical
thinking may be the learning institution itself. Using interviews and observations, Tsui (2001)
conducted a qualitative study examining teacher attitudes towards critical thinking and found that
more selective institutions that promote divergent thinking, higher levels of student responsibility,
self-reflection and greater social and political awareness are more probable sites for critical
thinking achievement. However, a flaw in Tsui's methodology is the absence of an assessment of
critical thinking in students; he relies instead on a comparison between the pedagogical theories and
practices of different undergraduate institutions. Tsui assumes that institutions with the teaching
philosophies listed above are more likely to teach students critical thinking; he correlates this
assumption with qualitative teacher interviews to document the relationship between teachers'
belief in the ability of students to think critically, the pedagogical focus of the school, and student
SAT scores.
According to Hove (2011), yet another factor in the efficacy of critical thinking instruction is the
students themselves. Students who have become accustomed to an extrinsically motivated grading
and learning system may feel uncomfortable with the new teaching philosophies and learning
expectations. In addition, students may have the capacity to think critically, but their decision to
engage these skills may be mediated by other factors; if the grading process does not require
critical thinking, students may rarely engage in these processes. A study by Bullock et al. (2002)
compared students' and instructors' perceptions of critical thinking in England and found that the
pressure to 'get good grades' (assuming these were related to content retention and not thinking
processes) overshadowed students' desire to engage in critical thinking. Ishiyama et al. (1999)
found that students who scored high on a critical thinking disposition instrument actually
preferred lecture-based instructional methods to group work, although it was not clear why this
was so (perhaps advanced students found group work frustrating when paired with their less
cognitively aware classmates).
2.8. Attributes a Critical Thinker Needs to Have
The attributes of a critical thinker, in other words the habits and abilities that a critical thinker is
expected to have, are important to identify in the teaching and learning process. There is general
consensus among theorists, philosophers, and psychologists that the attributes of a critical thinker
consist of the skills and dispositions to think critically (Facione and Facione, 1997; Ennis, 1996;
Paul, 1993; Fisher, 1991; Byrne and Johnstone, 1987; Meyers, 1986; D'Angelo, 1971).
2.8.1 Critical Thinking Skills
As for the nature of critical thinking skills, different opinions prevail. For example, Brookfield
(1987) cites four abilities that he considers to be essential for critical thinking while D‟Angelo
(1971) proposes a total of fifty critical thinking skills. Among the many critical thinking skills
identified, Sander (1992) notes that four are frequently cited by the various writers. These include
the ability to recognize stated and unstated assumptions, draw valid conclusions, judge validity of
inferences, and solve problems. An assumption is what one takes for granted without the need to
provide evidence as justification (Dressel and Mayhew, 1954). "The presence and nature of
assumptions within an argument determines whether or not the conclusions reached are
acceptable" (Sander, 1992); therefore, the ability to recognize stated and unstated assumptions is
vital to a critical thinker. Sander, in her study, defines valid conclusions as those that really do
follow from the evidence; as valid conclusions are a product of correct reasoning, the ability to draw
valid conclusions reflects the quality of one's thinking. Further, in order to judge the validity of
inferences, one has to have the ability “to discern when conclusions reached are based on
common beliefs or personal preconceptions rather than on the collection of evidence” (Sander,
1992). A critical thinker must be able to judge whether the reason offered in support of a
conclusion is acceptable and sufficient to establish the conclusion. Finally, to be able to solve
problems, a critical thinker should have the ability to "identify, clarify, and evaluate perplexities"
(Sander, 1992, p.26). This includes the ability to collect relevant data, make judgments, develop
alternatives, and evaluate outcomes in relation to the problem identified.
From the Delphi Report on Critical Thinking (American Philosophical Association, 1990) as
discussed earlier, core critical thinking skills have been derived. These are the skills of analysis,
inference, and evaluation (Facione and Facione, 1997). The authors describe analysis as the
ability to “comprehend and express the meaning or significance of a variety of materials,
situations, expressions, and to identify the intended and actual inferential relationships among
statements, questions, concepts, beliefs, or judgments" (p.9). Inference is described as the ability
to "identify and secure the elements needed to draw reasonable conclusions, to form conjectures
and hypotheses, to consider relevant information, and to deduce the most reasonable
consequences which follow, either most probably or necessarily, from those elements" (p.9). And
the critical thinking skill of evaluation is depicted as the ability to “assess the credibility of the
statements and the logical strength of inferential relationships and to be able to justify one‟s
reasoning by reference to relevant evidence, concepts, methods, contexts, or standards” (p.9). The
report also portrays critical thinking as a non-linear, recursive process expressed as "a cognitive
process in which one interprets one's inferences, evaluates one's interpretations, explains one's
evaluations … any combination, in a simultaneous or recursive manner that scientists have yet
to easily chronicle or document" (Facione, 1995, p. 3).
Glaser (1941) also points out that almost everyone who has worked in the critical thinking
tradition has produced a list of thinking skills which they see as basic to critical thinking and he
listed the abilities:
(a) to recognize problems, to find workable means for meeting those problems,
(b) to understand the importance of prioritization and order of precedence in
problem solving, (c) to gather and marshal pertinent (relevant) information,
(d) to recognize unstated assumptions and values, (e) to comprehend and use
language with accuracy, clarity, and discernment (clear understanding), (f) to
interpret data, to appraise evidence and evaluate statements (arguments), (g) to
recognize the existence (or non-existence) of logical relationships between
propositions, (h) to draw warranted conclusions and generalizations, (i) to put
to test the generalizations and conclusions at which one arrives, (j) to
reconstruct one's patterns of beliefs on the basis of wider experience, and (k)
to render accurate judgments about specific things and qualities in everyday
life (p. 6).
In addition, the list of core critical thinking skills according to Glaser (1941) includes observation,
interpretation, analysis, inference, evaluation, explanation and meta-cognition, and there is a
reasonable level of consensus among experts that an individual or group engaged in strong critical
thinking gives due consideration to:
 Evidence (like investigating evidence) through observation
 Context
 Relevant criteria for making the judgment well
 Applicable methods or techniques for understanding the problem and the question at hand.
In addition to possessing strong critical thinking skills, one must be disposed to engage problems
and decisions using those skills. Critical thinking employs not only logic but broad intellectual
criteria such as clarity, credibility, accuracy, precision, relevance, depth, breadth, significance and
fairness (Facione, Facione and Giancarlo, 2000; Sims, 2009; Elder and Paul, 2006/8).
2.8.2 Critical Thinking Dispositions
Underlying the abilities to think critically are certain dispositions, which are the combinations of
attitudes and inclinations. According to Facione (2004), critical thinking disposition is the
attitudinal basis for the internal motivation to think critically. Like the critical thinking skills, an
array of dispositions has been suggested. The frequently cited dispositions, according to Sander
(1992) and Halpern (1998), include a questioning mind, intellectual curiosity, objectivity, open-mindedness, and systematicity. Despite the recognition given to the dispositional aspect of critical
thinking, there is a lack of empirical evidence in this important area. The situation may
improve with the construction of the California Critical Thinking Disposition Inventory
(CCTDI), which is designed specifically for the measurement of critical thinking dispositions
(Facione, Facione and Sanchez, 1994). Derived from the Delphi Expert Consensus Report on
Critical Thinking (American Philosophical Association, 1990), the seven dispositions in the
CCTDI considered as essential for critical thinking are truth-seeking, open-mindedness,
analyticity, systematicity, critical thinking self-confidence, inquisitiveness, and cognitive
maturity (Facione and Facione, 1997). According to Facione and Facione, truth-seeking refers to
intellectual honesty and the desire for the best knowledge. Open-mindedness is tolerance for
new ideas and divergent views. Analyticity is being alert to the need to use reason and evidence to
solve problems. Systematicity is the inclination to be organized, focused, and persevering. Critical
thinking self-confidence is the trust one has in one's own reasoning. Inquisitiveness is the
intellectual curiosity that one has for learning. Cognitive maturity is the judiciousness that enables
one to see the complexity in problems.
The importance of critical thinking dispositions has been highlighted by a number of critical
thinking theorists. For instance, Byrne and Johnstone (1987) maintain that although one may
possess critical thinking skills, one still requires the propensity to exercise them. Kennedy, Fisher
and Ennis (1991) contend that the possession of critical thinking skills will not lead to rational,
reflective thinking unless they are used in conjunction with the appropriate dispositions. Paul
(1993), too, expresses his concern that without the necessary dispositions, critical thinking skills
alone would only lead to closed-mindedness. The meta-study conducted by Facione and
Facione (1997) has begun to explore critical thinking dispositions in a systematic manner. The
researcher hopes that the present study may follow this lead.
2.9 Teaching Critical Thinking for Transfer across Domains
Advances in technology and changes in necessary work-place skills have made the ability to think
critically more important than ever before, yet there is ample evidence that many adults
consistently engage in flawed thinking. Numerous studies have shown that critical thinking,
defined as “the deliberate use of skills and strategies that increase the probability of a desirable
outcome,” can be learned in ways that promote transfer to novel contexts (Halpern, 1998).
According to Halpern (1996), there are numerous, qualitatively different types of evidence
showing that students can become better thinkers as a result of appropriate instruction. Indicators
of positive change include self-reports, gains in adult cognitive development, higher scores on
commercially available and research versions of tests of critical thinking, superior responses to
novel open-ended questions (graded blindly, without the rater knowing whether the student received
instruction in critical thinking), and changes in the organization of information, among others.
The goal of instruction designed to help students become better thinkers is transferability to real-world, out-of-the-classroom situations. With this goal in mind, the ideal learning assessment
would occur naturally in the course of one‟s life, in multiple settings, and would provide
comparable measures before, during, and long after the instruction. It would describe what an
individual thinks and does when reading a newspaper editorial, selecting a career objective, or
voting on a bond issue at times when the individual is not aware of being assessed. Unfortunately,
this sort of intrusive and surreptitious assessment is not feasible, but some clever attempts have
come close. Lehman and Nisbett (1990), for example, examined the spontaneous transfer of
selected thinking skills in an out-of-the-classroom, real-world environment. They phoned students
at home several months after the completion of their course work and posed questions under the
guise of a household survey. Results were supportive of the idea that the students had learned and
spontaneously used the thinking skills that had been taught in their college classes when the
questions were asked in an ecologically valid setting (their own homes), with novel topics, several
months after the semester had ended. This sort of assessment provides evidence that critical
thinking can be learned with appropriate instruction and that it can and does transfer to novel
domains of knowledge. There are numerous other successful reports of the transfer of critical
thinking skills to a variety of settings (Kosonen and Winne, 1995; Nisbett, 1993; Perkins and
Grotzer, 1997). However, the following four-part, empirically based model is proposed by Halpern
(1998/2001) to guide teaching and learning for critical thinking.
In critical thinking instruction, as was described before in this study, the goal is to promote the
learning of trans-contextual thinking skills and the awareness of and ability to direct one‟s own
thinking and learning. Although thinking always occurs within a domain of knowledge, the usual
methods that are used for teaching content matter are not optimal for teaching the thinking skills
that psychologists and other educators want students to use in multiple domains, because
instruction in most courses focuses on content knowledge (as might be expected) instead of the
transferability of critical thinking skills. For this reason, instruction in critical thinking poses
unique problems. Fortunately, there already are powerful models of human learning that can be
used as a guide for the redesign of education for thinking. The basic principles of these models
are taken from cognitive psychology, the empirical branch of psychology that deals with
questions about how people think, learn, and remember, or more specifically, how people acquire,
utilize, organize, and retrieve information (Halpern, 1998).
It is clear that a successful pedagogy that can serve as a basis for the enhancement of thinking will
have to incorporate ideas about the way in which learners organize knowledge and internally
represent it and the way these representations change and resist change when new information is
encountered. Despite all of the gains that cognitive psychologists have made in understanding
what happens when people learn, most teachers do not apply their knowledge of cognitive
psychology (Schoen, 1983). Practice in the transfer of critical thinking to other contexts includes a
number of strategies (Halpern, 1998; Klaczynski, 2001; van Gelder, 2005).
The model that is proposed by Halpern (1998) for teaching and learning critical thinking skills so
that they will transfer across domains of knowledge consists of four parts: (1) a dispositional or
attitudinal component to prepare learners for effortful cognitive work, (2) instruction in and
practice with the skills of critical thinking, (3) structure-training activities designed to facilitate
transfer across contexts, and (4) a metacognitive component used to direct and assess thinking for
accuracy and progress toward the goal.
2.9.1 Dispositions for Effortful Thinking and Learning
Critical thinking is more than the successful use of a particular skill in an appropriate context. It is
also an attitude or disposition to recognize when a skill is needed and the willingness to apply it.
Sears and Parsons (1991) called these dispositions the ethic of a critical thinker. There are large
differences among cognitive tasks in the effort that is required in learning and thinking. For
example, most people effortlessly learn the plot of a television sitcom they are watching, but they
need to expend concerted mental effort and cognitive monitoring to learn how to analyze complex
arguments or how to convert a word problem into a spatial display. Similarly, routine problems
tend to be solved with habitual solutions, sometimes so effortlessly that the problem solver has no
conscious awareness. By contrast, critical thinking requires the conscious exertion of mental
effort. In other words, it is cognitive work. Learners need to understand and be prepared for the
effortful nature of critical thinking so that they do not abandon the process too soon, believing that
the thinking should have been easier or accomplished more quickly. The development of
expertise in any area requires deliberate, effortful, and intense cognitive work (Wagner, 1997).
Not surprisingly, critical thinking is no exception to these general principles.
According to Halpern (1998), however, it is important to separate the disposition or willingness to
think critically from the ability to think critically. Some people may have excellent critical-thinking
skills and may recognize when the skills are needed, but they also may choose not to
engage in the effortful process of using them. This is the distinction between what people can do
and what they actually do in real-world contexts. It is of no value to teach students the skills of
critical thinking if they do not use them. Good instructional programs help learners decide when
to make the necessary mental investment in critical thinking and when a problem or argument is
not worth the effort. An extended session of generating alternatives and calculating probabilities
is a reasonable response to a diagnosis of cancer; it is not worth the effort when the decision
involves the selection of an ice-cream flavor.
A critical thinker exhibits the following dispositions or attitudes: (a) willingness to engage in and
persist at a complex task, (b) habitual use of plans and the suppression of impulsive activity, (c)
flexibility or open-mindedness, (d) willingness to abandon nonproductive strategies in an attempt
to self-correct, and (e) an awareness of the social realities that need to be overcome (such as the
need to seek consensus or compromise) so that thoughts can become actions (Halpern, 1998,
p.452).
2.9.2 A Skills Approach to Critical Thinking
According to Halpern (1998), critical thinking instruction is predicated on two assumptions: (a)
that there are clearly identifiable and definable thinking skills that students can be taught to
recognize and apply appropriately, and (b) that if these thinking skills are recognized and applied,
students will be more effective thinkers. A general list of skills that would be applicable in almost
any class would include understanding how cause is determined, recognizing and criticizing
assumptions, analyzing means-goals relationships, giving reasons to support a conclusion,
assessing degrees of likelihood and uncertainty, incorporating isolated data into a wider
framework, and using analogies to solve problems (pp.452-453).
A short taxonomy of critical-thinking skills is proposed as a guide for instruction: (a) verbal
reasoning skills: this category includes those skills needed to comprehend and defend against the
persuasive techniques that are embedded in everyday language; (b) argument analysis skills: an
argument is a set of statements with at least one conclusion and one reason that supports the
conclusion. In real-life settings, arguments are complex, with reasons that run counter to the
conclusion, stated and unstated assumptions, irrelevant information, and intermediate steps; (c)
skills in thinking as hypothesis testing: the rationale for this category is that people function like
intuitive scientists to explain, predict, and control events. These skills include understanding
generalizability, recognizing the need for an adequately large sample size, and assessing accuracy
and validity, among others; (d) likelihood and uncertainty: because very few events in life can be
known with certainty, the correct use of cumulative, exclusive, and contingent probabilities
(illustrated briefly below) should play a critical role in almost every decision; (e) decision making
and problem solving skills: in some sense, all of the critical-thinking skills are used to make
decisions and solve problems, but the ones that are included here involve generating and selecting
alternatives and judging among them. Creative thinking is subsumed under this category because
of its importance in generating alternatives and restating problems and goals.
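The distinction in category (d) among cumulative, exclusive, and contingent probabilities can be made concrete with a small worked illustration. The sketch below is not part of Halpern's taxonomy; the two events and all probability values are hypothetical, and the snippet simply applies the standard probability rules under one common reading of these terms.

# Illustrative only: hypothetical probabilities for two everyday events,
# A = "it rains tomorrow" and B = "the bus is late".
p_a = 0.30            # P(A)
p_b = 0.20            # P(B)
p_b_given_a = 0.50    # P(B | A): a contingent (conditional) probability

# Cumulative (joint) probability that both A and B occur.
p_a_and_b = p_a * p_b_given_a            # P(A and B) = P(A) * P(B | A) = 0.15

# Probability that A or B occurs when the events can co-occur.
p_a_or_b = p_a + p_b - p_a_and_b         # P(A or B) = 0.35

# If A and B were mutually exclusive, their probabilities would simply add.
p_a_or_b_if_exclusive = p_a + p_b        # = 0.50

print(p_a_and_b, p_a_or_b, p_a_or_b_if_exclusive)

A learner who confuses these cases, for example by adding probabilities of events that are not mutually exclusive, will systematically overestimate likelihoods, which is exactly the kind of error this category of skills is meant to address.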
The categories and skills listed in this taxonomy have face validity and, thus, can be easily
communicated to the general public and students. They represent one possible answer to the
question of what college graduates need to know and be able to do so that they can compete and
cooperate in the world's marketplace and function as effective citizens in a complex democratic
community. Taken together, these five categories (sometimes referred to as "macro-abilities")
define an organizational rubric for a skills approach to critical thinking. They have the benefit of
focusing on skills that are teachable and generalizable and, therefore, would help to bridge the gap
between thinking skills that can be taught in college and those skills that are needed in the
workplace (Halpern, 1998).
The consensus definition of critical thinking discussed in the previous section of this chapter and
derived from the APA Delphi study provides an easily accessible terminology for discussing
human thinking and habits of mind and for communicating the importance of critical thinking in
training programs. Experts agreed on six core skills that should be taught to encourage critical
thinking in students (Facione, 1998) (from "Critical Thinking: A Statement of Expert
Consensus for Purposes of Educational Assessment and Instruction", CCTST
Test Manual © 2013 Insight Assessment, pp. 67-72).
The consensus descriptions of core critical thinking skills and sub-skills (Facione, 1998) are:
1. INTERPRETATION: The ability to comprehend and express the meaning or significance of a
wide variety of experiences, situations, data, events, judgments, conventions, beliefs, rules,
procedures or criteria; to break information down into appropriate categories, correctly
paraphrase the meaning of a passage, to identify the purpose of the information.
Interpretation includes categorization, decoding significance, and clarifying meaning.
2. ANALYSIS: The ability to identify the intended and actual inferential relationships
among statements, questions, concepts, descriptions or other forms of representation
intended to express beliefs, judgments, experiences, reasons, information, or opinions.
Analysis includes examining ideas, detecting arguments, and analyzing arguments.
3. EVALUATION: The ability to assess the credibility of statements or other
representations which are accounts or descriptions of a person‟s perception, experience,
situation, judgment, or opinion; and to assess the logical strength of the actual or intended
inferential relationships among statements, descriptions, questions or other forms of
representation. Evaluation includes assessing claims and assessing arguments.
4. INFERENCE: The ability to identify and secure elements needed to draw reasonable
conclusions; to form conjectures and hypotheses; to consider relevant information and to
educe the consequences flowing from data, statements, principles, evidence, judgments,
beliefs, opinions, concepts, descriptions, questions, or other forms of representation.
Inference includes querying evidence, conjecturing alternatives, and drawing conclusions.
5. EXPLANATION: The ability to state the results of one‟s reasoning; to justify that
reasoning in terms of the evidential, conceptual, methodological, criteriological and
contextual considerations upon which one‟s results were based; and to present one‟s
reasoning in the form of cogent arguments. Explanation includes stating results, justifying
procedures, and presenting arguments.
6. SELF-REGULATION: The ability to self-consciously monitor one‟s cognitive activities,
the elements used in those activities, and the results educed, particularly by applying skills
in analysis and evaluation to one‟s own inferential judgments with a view toward
questioning, confirming, validating, or correcting either one‟s reasoning or one‟s results.
Self-regulation includes self-examination and self-correction.
While a number of the experts involved in the Delphi study did not want to include a dispositional
element in the definition of critical thinking because these constructs seemed interwoven and
difficult to identify and teach (Facione, 1990), others argued that dispositions help provide a
complete view of critical thinking and inform the likelihood that a person will use critical thinking
when solving a problem (Giancarlo and Facione, 2001). Engaging problems and making decisions
using critical thinking involves both skills and habits of mind. A strong critical thinker is one who
is both disposed to think critically and has the skills to do so.
According to the APA Delphi study description, the ideal critical thinker is habitually inquisitive,
well-informed, honest in facing personal biases, prudent in making judgments, willing to
reconsider, clear about issues, orderly in complex matters, diligent in seeking relevant
information, reasonable in the selection of criteria, focused in inquiry, and persistent in seeking
results which are as precise as the subject and the circumstances of inquiry permit (2013, p.73).
Research indicates that the disposition toward critical thinking can be understood in terms of
positive habits of mind. A person or group strongly disposed toward critical thinking is habitually
truth-seeking, open-minded, analytical, systematic, inquisitive, confident in reasoning, and
judicious.
2.9.3 Structure Training to Promote Transfer
When one is teaching for critical thinking, the goal is to have students not only understand and
successfully use the particular skill or strategy being taught but also be able to recognize where
that particular skill might be appropriate in novel situations. Hummel and Holyoak (1997)
identified structure sensitivity as a fundamental property that underlies human thought: "First,
thinking is structure sensitive. Reasoning, problem solving, and learning … depend on a capacity
to code and manipulate relational knowledge" (p.427). Thus, when one is teaching for the transfer
of thinking skills, one should ensure that the structural aspects of problems and arguments are
made salient so that they can function as retrieval cues. When critical thinking skills are taught so
that they transfer appropriately and spontaneously, students learn to actively focus on the
structure of problems or arguments so that the underlying characteristics become salient, instead
of the domain-specific surface characteristics. On the basis of what is already known about adult
learning, students need spaced practice with different sorts of examples and corrective feedback
to develop the habit of "spontaneous noticing." Learning should be arranged to facilitate retrieval
of skills in a way that does not depend on the content area (Halpern, 1998/2001; Klaczynski,
2001).
Learning tasks, like real-world thinking tasks, should be rich in information. Some of the
information available may not be relevant, and part of the learning exercise involves deciding
which information is important to the problem. What is important in the teaching and learning of
critical-thinking skills is what the learners are required to do with the information. Learning
exercises should focus on the critical aspects of the problems and arguments that utilize the skills.
The tasks should require thoughtful analysis and synthesis. For example, the repeated use of
“authentic” materials, or materials that are similar to real-world situations, is one teaching
strategy to enhance transfer (Derry, Levin, and Schauble, 1995). Thinking skills need to be
explicitly and consciously taught and then used with many types of examples so that the skill
aspect and its appropriate use are clarified and emphasized. Examples of relevant tasks and
questions that require learners to attend to structural aspects of a problem or argument according
to Derry, Levin, and Schauble (1995) are presented as follows:
 Draw a diagram or other graphic display that organizes the information. (This sort of
task makes the structure of a problem or argument clear)
 What additional information would you want before answering the question? (This
requires the thinkers-learners to think about what is missing from the information that is
given.)
 Explain why you selected a particular multiple-choice alternative. Which alternative is
second best? Why? (The giving of reasons is a good way to focus on the thinking that
went into an answer rather than the answer itself.)
 State the problem in at least two ways. (Most real-world problems are fuzzy, that is, they
really are potentially many problems, each with its own possible solution.)
 Which information is most important? Which information is least important? Why?
(This question focuses the learners‘ attention on the value of different sorts of
information.)
 Categorize the findings in a meaningful way. (By grouping or labeling individual pieces
of information, a structure emerges that is not apparent when they are kept separate.)
 List two solutions for the problem. (This encourages a more creative approach.)
 What is wrong with an assertion that was made in the question? (This reminds the
learners that problems often contain misleading information.)
 Present two reasons that support the conclusion and two reasons that do not support the
conclusion. (Questions of this sort do not permit black-and-white reasoning.)
 Identify the type of persuasive technique that is used in the question. Is it valid, or is it
designed to mislead the reader? Explain your answer. (Learners are required to consider
the motives and credibility of their information source when responding to these
questions.)
 What two actions would you take to improve the design of a study that was described?
(Learners need to think about better types of evidence or procedures that might have
provided different results.) (cited in Halpern, 1998, p.454).
2.9.4 Metacognitive Monitoring
Metacognition is the executive or “boss” function that guides how adults use different learning
strategies and make decisions about the allocation of limited cognitive resources. The term is
usually defined as “what we know about what we know” and the ability to use this knowledge to
direct and improve the thinking and learning process. It refers to the self-awareness and planning
functions that guide the use of thinking skills. When engaging in critical thinking, students need
to monitor their thinking process, checking whether progress is being made toward an appropriate
goal, ensuring accuracy, and making decisions about the use of time and mental effort.
Metacognitive monitoring skills need to be made explicit and public so that they can be examined
and feedback can be given about how well they are functioning. A few explicit guiding questions
can be used as a way of converting what is usually an implicit process into an explicit one. For
example, students can be given a problem or an argument to analyze and then asked the following
questions before they begin the task: (a) How much time and effort is this problem worth? (b)
What do you already know about this problem or argument? (c) What is the goal or reason for
engaging in extended and careful thought about this problem or argument? (d) How difficult do
you think it will be to solve this problem or reach a conclusion? (e) How will you know when you
have reached the goal? (f) What critical thinking skills are likely to be useful in solving this
problem or analyzing this argument? As students work on the problem or argument, they should
be asked to assess their progress toward the goal. (g) Are you moving toward a solution? Finally,
when the task is completed, the students should be asked to judge how well the problem was
solved or how well the argument was analyzed. Well-structured questions will help students
reflect on their learning and may provide insights that will be useful in the future (Halpern,
1996/8).
2.10. The Relationship of Critical Thinking to Other Concepts
As a way of defining the concept of critical thinking, many researchers have drawn connections to
other skills commonly identified as twenty-first-century skills. These, according to Lai
(2011, pp.18-21), include metacognition, motivation, and creativity. Each of these related
concepts will be discussed separately.
2.10.1. Metacognition
Metacognition has been defined most simply as "thinking about thinking." Other definitions
include:
 "the knowledge and control children have over their own thinking and learning activities"
(Cross & Paris, 1988, p.131);
 "awareness of one's own thinking, awareness of the content of one's conceptions, an
active monitoring of one's cognitive processes, an attempt to regulate one's cognitive
processes in relationship to further learning, and an application of a set of heuristics as an
effective device for helping people organize their methods of attack on problems in
general" (Hennessey, 1999, p. 3); and
 "the monitoring and control of thought" (Martinez, 2006, p.696).
What is the relationship between critical thinking and metacognition? Kuhn (1999) sees critical
thinking as being a form of metacognition, which includes metacognitive knowing (thinking that
operates on declarative knowledge), meta-strategic knowing (thinking that operates on procedural
knowledge), and epistemological knowing (encompassing how knowledge is produced). Likewise,
Flavell (1979) sees critical thinking as forming part of the construct of metacognition when he
argues that "critical appraisal of message source, quality of appeal, and probable consequences
needed to cope with these inputs sensibly" can lead to "wise and thoughtful life decisions" (p.
910). On the other hand, Van Gelder (2005) and Willingham (2007) appear to perceive
metacognition as being subsumed under critical thinking when they argue that a component
critical thinking skill is the ability to deploy the right strategies and skills at the right time,
typically referred to as conditional or strategic knowledge and considered part of the construct of
metacognition (Kuhn and Dean, 2004; Schraw et al., 2006). Halonen (1995) identifies
metacognition as the ability to monitor the quality of critical thinking. Similarly, Halpern (1998)
casts metacognition as monitoring thinking and strategy use by asking the following kinds of
questions: What do I already know? What is my goal? How will I know when I get there? Am I
making progress?
Some researchers have argued that the link between critical thinking and metacognition is self-regulation. For example, the APA Delphi report includes self-regulation as one component skill of
critical thinking (Facione, 1990). Schraw et al. (2006) draw connections between metacognition,
critical thinking, and motivation under the umbrella of self-regulated learning, which they define
as “our ability to understand and control our learning environments” (p.111). Self-regulated
learning, in turn, is seen as comprising three components: cognition, metacognition, and
motivation. The cognitive component includes critical thinking, which Schraw and associates
explain consists of identifying and analyzing sources and drawing conclusions.
However, others have argued that critical thinking and metacognition are distinct constructs. For
example, Lipman (1988) has pointed out that metacognition is not necessarily critical, because one
can think about one's thoughts in an unreflective manner. McPeck, on the other hand, argues that
the ability to recognize when a particular skill is relevant and to deploy that skill is not properly a
part of critical thinking but actually represents general intelligence (1990). At the very least,
metacognition can be seen as a supporting condition for critical thinking, in that monitoring the
quality of one‟s thought makes it more likely that one will engage in high-quality thinking.
2.10.2. Motivation
Critical thinking is also related to motivation. For example, most researchers view critical
thinking as including both skills, or abilities, and dispositions. The disposition to think critically
has been defined as the “consistent internal motivation to engage problems and make decisions by
using critical thinking” (Facione, 2000, p. 65). Thus, student motivation is viewed as a necessary
precondition for critical thinking skills and abilities. Similarly, Halonen notes that a person‟s
propensity, or disposition, to demonstrate higher-order thinking relates to their motivation (1995).
Halpern (1998) argues that effort and persistence are two of the principal dispositions that support
critical thinking, and Paul maintains that 'perseverance' is one of the "traits of mind" that renders
someone a critical thinker (1992, p. 13). Thus, like metacognition, motivation appears to be a
supporting condition for critical thinking in that unmotivated individuals are unlikely to exhibit
critical thinking. On the other hand, several motivation researchers have suggested that the causal
link goes the other way. In particular, some motivation research suggests that difficult or
challenging tasks, particularly those emphasizing higher-order thinking skills, may be more
motivating to students than easy tasks that can be solved through the rote application of a predetermined algorithm (Turner, 1995).
2.10.3. Creativity
Finally, many researchers have made connections between critical thinking and creativity (Bailin,
2002; Bonk & Smith, 1998; Ennis, 1985; Paul & Elder, 2006; Thayer-Bacon, 2000). At first
glance, critical thinking and creativity might seem to have little in common, or even to be
mutually exclusive constructs. However, Bailin (2002) argues that a certain amount of creativity is
necessary for critical thought. Paul and Elder (2006) note that both creativity and critical thinking
are aspects of “good,” purposeful thinking. As such, critical thinking and creativity are two sides
of the same coin. Good thinking requires the ability to generate intellectual products, which is
associated with creativity. However, good thinking also requires the individual to be aware,
strategic, and critical about the quality of those intellectual products. As the authors note, “critical
thinking without creativity reduces to mere skepticism and negativity, and creativity without
critical thought reduces to mere novelty” (p.35). Paul and Elder (2006) point out that, in practice,
the two concepts are inextricably linked and develop in parallel. Accordingly, the authors believe
both creative and critical thinking ought to be integrated during instruction.
2.11 Critical Thinking Assessment
Although critical thinking has often been urged as a goal of education throughout most of the
twentieth century, for example in John Dewey's "How We Think" (1910), not a great deal has been
done about it (Ennis, 2001). Since the early 1980s, however, attention to critical thinking instruction
has increased significantly, with some spillover to critical thinking assessment, an area that has been
neglected even more than critical thinking instruction (Ennis, 2001). In this section, an attempt
has been made to discuss the following points: a) the purposes of critical thinking assessment, and
b) approaches to assessing critical thinking.
2.11.1 Purposes of Critical Thinking Assessment
Not only must we have a defensible, elaborated definition of critical thinking when selecting,
criticizing, or developing a test, we must also have a clear idea of the purpose for which the test is
to be used. A variety of possible purposes exist, but no one test or assessment procedure fits them
all. Here are some major possible purposes identified by Robert H. Ennis (1993):
1. Diagnosing the levels of students‟ critical thinking. If we are to know where to focus our
instruction, we must “start with where they are” in specific aspects of critical thinking.
Tests can be helpful in this respect by showing specific areas of strength and weakness (for
example, ability to identify assumptions).
2. Giving students feedback about their critical thinking prowess. If students know their
specific strengths and weaknesses, their attempts to improve can be better focused.
3. Motivating students to be better at critical thinking. Though frequently misused as a
motivational device, tests can and do motivate students to learn the material they expect to
be covered on the test. If critical thinking is omitted from tests, test batteries, or other
assessment procedures, students will tend to neglect it (Smith, 1991; Shepard, 1991).
4. Informing teachers about the success of their efforts to teach students to think critically.
Teachers can use tests to obtain feedback about their instruction in critical thinking.
5. Doing research about critical thinking instructional questions and issues. Without careful
comparison of a variety of approaches, the difficult issues in critical thinking instruction
and curriculum organization cannot be answered. But this research requires assessment, so
that comparisons can be made.
6. Providing help in deciding whether a student should enter an educational program. People
in some fields already use assessed critical thinking prowess to help make admissions
decisions. Examples are medicine, nursing, law, and graduate school in general. The idea
seems good, but the efficacy of existing efforts in selecting better critical thinkers has not
been established. Research needs to be done in this area.
7. Providing information for holding schools accountable for the critical thinking prowess of
their students. A currently popular purpose for testing, including critical thinking testing, is
to pressure schools and teachers to “measure up” by holding them accountable for the test
results of their students.
2.11.2. Approaches to Critical Thinking Assessment
While the incorporation of critical thinking into different aspects of life has become prevalent, its
assessment has received growing attention. According to Wal (1999), two main approaches can be
taken in the assessment of critical thinking: (1) by assessing critical thinking in relation to other
relevant academic skills, such as writing, oral presentation, or practical problem solving, or (2) by
assessing critical thinking skills as a trait or individual feature of the learner, by inviting the
learner to complete an assessment scale.
Assessment remains a major concern in developing programs to enhance students‟ critical
thinking skills. Until a concept can be defined and assessed, adequate models for teaching are
difficult to develop. Despite the lack of a comprehensive theory of critical thinking, varied efforts
have been made to develop assessment tools. Three main approaches to assessing (or measuring)
critical thinking have commonly been used: (a) commercially available general knowledge
standardized tests; (b) researcher or instructor designed assessments that attempt to capture
aspects of critical thinking more directly related to the purposes of the research project or subject
of instruction, and (c) teaching students to assess their own thinking. Each of these will be
discussed with reference to its applicability to this study.
2.11.2.1 Commercially Available Critical Thinking Tests
Commercially available standardized general critical thinking tests (e.g., the California Critical
Thinking Skills Test (CCTST), the Cornell Critical Thinking Tests (CCTT), and the Watson-Glaser
Critical Thinking Appraisal Tests (WGCTAT)) (Murphy, Conoley and Impara, 1994) have
typically relied on multiple choice responses that test major aspects of critical thinking, including
interpretation, analysis, inference, recognition of assumption, assessing credibility, and detecting
fallacies in reasoning. None have claimed to test for all aspects of critical thinking. These
instruments have been carefully developed and tested for reliability and validity, and all have
been widely used as measures for testing people‟s ability to think critically (Facione, 1986). Their
use as assessment instruments is facilitated by their ease of grading (machine scoring) and has
allowed comparison among research projects using various models of teaching for critical
thinking. On the other hand, while they test how well a student reasons from written material,
they cannot assess whether students are able to generate clear, well-supported written or oral
arguments, whether they can solve open-ended problems, or whether they have developed
dispositions to use critical thinking skills when appropriate. Some researchers, for example,
Keeley and Browne (1986), have suggested that multiple-choice tests are not valid indicators of
critical thinking ability because test-takers are not free to determine their own questions or apply
their own evaluative criteria.
Some researchers have also advocated using student-generated responses, including essays, to
test adequately for critical thinking (Browne and Keeley, 1988; Norris and Ennis, 1989; Paul and
Nosich, 1992). Several general knowledge standardized essay tests for critical thinking have been
developed as alternatives to multiple-choice formats in attempts to assess students' abilities to
generate arguments and to capture the open-ended, problem-solving nature of critical thinking.
The Ennis-Weir Critical Thinking Essay Test (Ennis and Weir, 1985), the best known and most
widely used example, requires students to read an essay on an everyday issue (overnight parking
on a city street) containing numerous reasoning errors and to construct their own response. This
standardized, commercially available essay test of general critical thinking ability provides
several advantages over multiple choice tests or instructor-developed essay tests, including
student-generated responses, carefully established validity and reliability, and national
recognition. On the other hand, while standardized essay tests have included suggested standards
and criteria for grading essays, the time and cost involved in grading open-ended assessments and
the expertise required to grade them reliably has limited their use.
Other approaches to having students provide reasons for their responses and/or generate their own
responses on commercial standardized general tests of critical thinking are being studied as well.
Norris and Ennis (1989) have argued that a student‟s reasons for a particular answer must be
considered, and they have proposed follow-up multiple-choice questions that probe student
reasoning. Norris (1991) has suggested the use of verbal reports of thinking to assess multiple-choice responses. Paul and Nosich (1992) have argued for the inclusion of multiple-choice rating
items that allow students to rank, from a number of possible choices, those reasons that are more
correct. They have further suggested constructing test items so that a list of possible answers
could refer to any number of independent test items, and individual answers could be used several
times or not at all. These strategies would eliminate guessing as a factor in test scores. While
various additions to critical thinking assessments are being tested by these and other researchers,
standardized critical thinking tests that include these enhancements are not yet available
commercially.
Recent efforts have addressed the issue of critical thinking dispositions in the form of a
standardized commercially available test. Dispositions (otherwise referred to as attitudes or
intellectual traits) have been variously considered as an integral part of critical thinking or as a
separate but overlapping concept. The Ennis-Weir Critical Thinking Essay Test tests for some
critical thinking dispositions in combination with testing for reasoning ability (Norris and Ennis,
1989; Taube, 1997), but attention to testing for critical thinking dispositions separately from
critical thinking skills is relatively new. Halpern (1993) has pointed out that a quality assessment
must test both a student‟s critical thinking skills and whether they can use those skills without
being told to do so. The California Critical Thinking Dispositions Inventory (CCTDI), based on
the consensus theoretical model and dispositions enumerated by the Delphi Report experts, tests
for seven subsets of critical thinking dispositions using a six-point Likert scale (Facione and
Facione, 1992).
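To illustrate how a Likert-type disposition inventory of this general kind is typically scored, the sketch below aggregates six-point responses into subscale totals, with reverse-keyed items rescored before summing. It is a minimal, hypothetical example: the subscale names (kept to three for brevity), the item groupings, and the reverse-keyed items are invented for illustration and do not reproduce the CCTDI's actual items or its proprietary scoring key.

# Hypothetical sketch of scoring a six-point Likert disposition inventory.
# Item numbers, subscale groupings and reverse-keyed items are invented;
# this does not reproduce the CCTDI or its scoring key.

SUBSCALES = {
    "truth_seeking":   [1, -2, 3],   # negative numbers mark reverse-keyed items
    "open_mindedness": [4, 5, -6],
    "inquisitiveness": [7, 8, 9],
}

def score(responses):
    """Return a total score per subscale from item responses rated 1-6."""
    totals = {}
    for subscale, items in SUBSCALES.items():
        total = 0
        for item in items:
            raw = responses[abs(item)]
            total += (7 - raw) if item < 0 else raw  # reverse-key: 7 minus response
        totals[subscale] = total
    return totals

# Example: one respondent's answers to the nine invented items.
answers = {1: 5, 2: 2, 3: 6, 4: 4, 5: 5, 6: 3, 7: 6, 8: 5, 9: 4}
print(score(answers))  # {'truth_seeking': 16, 'open_mindedness': 13, 'inquisitiveness': 15}

The reverse-keying step matters for any such inventory: items worded negatively must be rescored before summing, or the subscale totals will understate the disposition they are meant to capture.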
Each of these commercially available critical thinking tests is limited in its ability to adequately
assess changes in students‟ critical thinking abilities, but their careful development, standardized
scoring, and general use make them good candidates for use in educational research projects.
2.11.2.2 Alternatives to Commercial Instruments
A second approach to assessing critical thinking is researcher- or instructor-developed tests. Norris
and Ennis (1989) have provided examples and criteria for instructors interested in developing
assessment techniques for such purposes as testing domain-specific critical thinking, testing for
transfer, evaluating a critical thinking program, formative evaluations, or determining grades.
While teacher-made tests can and should be used within the classroom to assess critical thinking,
their use in educational research projects examining the effectiveness of various methods or
models to teach for critical thinking has major limitations. Instruments designed for a specific
experimental method or model for critical thinking may best capture its strengths, but the
resulting variety of instruments and assessment techniques has led to difficulties comparing the
results of educational studies.
Perhaps the third and most appropriate way to assess students‟ critical thinking abilities is to teach
them to assess their own thinking. Paul has written extensively on teaching students to assess their
own work, and he has argued that to the extent that students need feedback from instructors, they
have not achieved a high level of critical thinking (Foundation for Critical Thinking, 1996).
Angelo and Cross (1993) have also emphasized the importance of student self-assessment
techniques. This approach seems to comprise an integral part of teaching for critical thinking and
needs to be addressed more broadly by researchers. While highly appropriate for classroom use,
however, it requires a deep understanding of critical thinking and a tremendous commitment from
both the instructor and the students. Further, this method of assessment, for many obvious
reasons, does not meet the requirements of rigorous educational research.
Tests based on Bloom‟s Taxonomy (Bloom et al., 1956) of analysis, synthesis, and evaluation
have also been constructed to measure critical thinking. McDowell and Chickering's (1967)
Experience of College Questionnaire is such an example. However, there are problems in using
Bloom‟s Taxonomy to measure critical thinking. The Taxonomy is really intended as a means to
guide the selection of items for testing students‟ learning, and not for the purpose of evaluating
their responses (Biggs and Collis, 1982). Further, Bloom‟s Taxonomy is vague, which makes the
operationalization of the Taxonomy impossible (Ennis, 1993). To prove his point, Ennis identifies
at least five different ways of interpreting the concept of analysis as described in Bloom's
Taxonomy, with nothing in common between them.
Interviews are a useful way of supplementing other methods of measuring critical thinking
(Kennedy, Fisher and Ennis, 1991), although they were not widely used in this study. Even
though the interview method is costly and special training of interviewers is required, it is an
invaluable method to elicit detailed explanation from the test taker, especially in terms of their
rationale for making certain judgments. Such an explanation helps the researcher to understand
what and how the test taker thinks, which is important for the analysis of that person‟s thinking
(Sormunen and Chalupa, 1994; Norris and Ennis, 1989; Norris and King, 1984).
Blair (1988) declares that no critical thinking test has satisfied everyone. There is a need,
therefore, to select the most appropriate approach to suit the purpose of the study. Ennis (1993)
claims that despite the inadequacy of the current state of critical thinking measurement, it is
possible to obtain an accurate assessment with careful consideration. Such considerations will
now be discussed.
2.11.3 Measuring Critical Thinking: Some Considerations
When making a decision as to how to measure critical thinking, it is important to remember that
critical thinking is not just a collection of simple skills, and various authors have attested to the
complex nature of this construct (Facione and Facione, 1997; Rane-Szostak and Robertson, 1996;
Hickman, 1993; Furedy and Furedy, 1985). Given this complexity, it is unlikely that a single tool
can cover all the dimensions of critical thinking; therefore, a combination of measurements should
be used (Spicer and Hanks, 1995; Ennis and Norris, 1990). This has the advantage that the
strength of each measuring method is reflected in the overall assessment while the deficiency of
one method is compensated for by the other. One example of combined measurements is the use of
a multiple-choice test to collect objective, quantifiable data and an interview to elicit rich, qualitative
information. The combination of qualitative and quantitative data has its advantages. As each type
of data reveals a different perspective of critical thinking, the advantage of assessing critical
thinking through qualitative and quantitative measurements is that it allows the assessor to view
the construct from different perspectives. For something as complex as critical thinking, the more
dimensions that can be revealed, the better the understanding.
Ennis, known for his pioneering work in critical thinking, has offered a number of suggestions for
the selection of critical thinking measuring tools (Ennis, 1993). First, is the critical thinking tool
based on a defensible conception of critical thinking? Given the doubts that have been expressed
about the WGCTAT, this point is particularly pertinent. Second, is the critical thinking test
comprehensive in its coverage of the concept? This should remind the researcher not to rely solely
on one single test method. Third, is the test constructed at a level that is appropriate for those
taking the test? If it is at a level too advanced for the test takers, it would not truly reflect their
performance in critical thinking. The above suggestions were taken seriously when deciding
which measurement tool would be used in this study.
In addition to the above suggestions, Ennis (1993) has also cautioned researchers of critical
thinking to be vigilant in the following:
 When a claim is made that the critical thinking test results are the effect of critical
thinking instruction, it should be remembered that there might be other explanations for
the results.
 Without the use of a control group, the pretest-to-posttest results should be viewed with
caution as factors other than the one(s) identified could have influenced the results.
 The
use of the same measuring tool for the pretest and posttest could threaten the
internal validity of the test as the test takers could become test-wise and exhibit change
not related to the intervention.
 Most critical thinking tests are not comprehensive enough and miss important aspects of
critical thinking.
 As a result of scarce resources experienced in most of the research studies on critical
thinking, compromises often have to be made which may affect the selection of
appropriate test instruments.
Recent attention to critical thinking demands that current assessment practices be revised,
discarded, or replaced. Scholars have continued to work to develop reliable, valid assessments
that test the total construct while providing efficiency in grading. At this time, no one approach is
best, and each has its limitations and merits (Ennis, 2001).
The above points, however, acted as important reminders to this researcher when considering the
methodology of this study. The choice of an experimental design and the use of more than one test
method might help this researcher to avoid or at least reduce some of the pitfalls in critical
thinking measurement. Further discussion of the study design is included in Chapter Three.
2.11.4. Measuring Critical Thinking Dispositions
Traditionally, critical thinking has been defined in terms of cognitive ability and skills (Tishman and Andrade, 1996). In recent years, however, there has been recognition that having the skills to do something does not necessarily mean that people will use them even when the situation calls for it (Ennis and Norris, 1990). Besides having thinking skills, good thinkers need the inclination to use those skills when the occasion calls for it (Tishman and Andrade, 1996).
Some researchers, such as the participants of the Delphi Project (Facione, 1990) and Ennis (1987)
extend the definition of critical thinking to include both abilities and dispositions. Researchers
define critical thinking dispositions differently and come up with their own lists of critical
thinking dispositions. Ennis (1989), for example, defines critical thinking dispositions as the
tendencies to do something given certain conditions. Tishman and Andrade (1996) define critical
thinking dispositions as tendencies toward particular patterns of intellectual behavior. Facione,
Facione and Giancarlo (1998) explain critical thinking dispositions as a person‟s internal
motivation to think critically when faced with problems to solve, ideas to evaluate, or decisions to
make.
Critical thinking has always been a central goal of education, but having critical thinking skills
does not necessarily mean that the person will use these skills even when the situation requires the
application of such skills (Connie, 2006). Good critical thinkers need to have both thinking skills
and the dispositions to use these skills. Educational institutions should, in addition to teaching
critical thinking skills, cultivate learners‟ critical thinking dispositions. Educators need to measure
critical thinking dispositions so that they have a means to determine whether a learner's poor performance on a thinking skill is due to a lack of ability or a lack of disposition. This will help educators to decide on the appropriate intervention to implement. Approaches that have been used to measure critical thinking dispositions include surveys, scoring rubrics and essay tests. In this paper, different approaches to measuring critical thinking dispositions are reviewed and the pros and cons of each approach are discussed. The discussion should be helpful to educators who would like to measure the critical thinking dispositions of their students.
The participants of the Delphi Project (Facione, 1990) identified seven components of critical thinking dispositions: inquisitiveness, open-mindedness, systematicity, analyticity, truth-seeking, critical thinking self-confidence, and maturity of judgment. Hence, an ideal critical thinker, according to the Delphi
Report, is described as “habitually inquisitive, well-informed, trustful of reason, open-minded,
flexible, fair-minded in evaluation, honest in facing personal biases, prudent in making
judgments, willing to reconsider, clear about issues, orderly in complex matters, diligent in
seeking relevant information, reasonable in the selection of criteria, focused in inquiry, and
persistent in seeking results which are as precise as the subject and the circumstances of inquiry
permit" (Facione, 1990). Ennis (1987) came up with 14 critical thinking dispositions, which
include the tendency to be open-minded, to look for alternatives and to seek as much precision as
the subject permits.
There are other researchers who label the affective domain of critical thinking using other names.
For example, Costa and Kallick (2000) term the affective domain of critical thinking as habits of
mind, which refers to having the dispositions to behave intelligently when confronted with
problems with no immediate answers.
Despite the different definitions and lists of critical thinking dispositions, it is important for
students not only to pick up critical thinking skills (Facione, Facione and Giancarlo, 1997) but
also to develop the dispositions to use these skills (Facione, Sanchez, Facione and Gainen, 1985).
Only with the development of critical thinking dispositions can students succeed in school and
throughout their lives (Halpern, 1998).
With the inclusion of the affective domain in the definition of critical thinking, there is a need for
instruments to measure critical thinking dispositions. When a learner does poorly on a thinking
test, the educator needs a way to know if the poor performance is due to a lack of abilities or
dispositions (Ennis and Norris, 1990). Only then, can educators decide on how to select and
design the appropriate intervention to implement to help the learners (Giancarlo, Blohm & Urdan,
2004).
This paper looks at the various approaches used by different researchers and educators to assess the affective domain of critical thinking (habits of mind/critical thinking dispositions), weighs the pros and cons of these approaches, and selects one for later use in this study.
2.11.4.1. Techniques of Evaluating Critical Thinking Dispositions
Given the various ways of labeling and defining the affective domain of critical thinking, it is no
surprise that different approaches and methods have been used to evaluate or assess critical
thinking dispositions. A survey of the literature on the assessment of critical thinking dispositions (Norris, 1992) and habits of mind (Marzano, Pickering & McTighe, 1993), for example, indicates that critical thinking dispositions have been assessed using different approaches such as direct observation, rating scales, learner self-assessment and essays. Some of these methods are used in combination; for example, direct observation is usually used with rating scales. A brief description
and review of these approaches is given in the following sections:
2.11.4.1.1 Direct Observation
In this approach, learners are observed on how they behave as they work on given tasks which
provide them with the opportunity to display the critical thinking dispositions. The learners could
be observed on how they respond to the given task or make use of given standardized prompts
and hints to complete the tasks given. Assessors record their observations against scoring rubrics, which consist of a list of indicators. For example, for the disposition "considering different points of view", assessors could observe how learners seek alternative viewpoints from others, how willing they are to explore differing viewpoints, and what kinds of questions they ask.
Depending on the dispositions measured, different types of tasks could be given to the learners. If learners are to be assessed on whether they are able to persevere in the face of difficulty, they could be given tasks that are typically challenging for them. Norris (1992) presented learners with a focused, yet open-ended problem, such as a search for living creatures on another planet. The problem included sufficient information to provide learners with the opportunity to derive hypotheses, interpretations, and conclusions. Norris (1992) then analyzed learners' responses to determine their critical thinking dispositions. Ennis (1994) argued that assessing critical thinking dispositions through such guided open-ended opportunities is promising, as learners have the opportunity to pursue any pattern of thinking that they want in response to the given problem.
Direct observation as an approach to assess critical thinking dispositions can be considerably
successful if raters are able to observe the learners as they are engaged in the process of doing
their tasks, especially if they are able to articulate what they are thinking as they go along
(Facione, et al., 1997).
On the other hand, direct observation could be complicated, time consuming and very context
specific. As dispositions are manifested differently in different settings, reliability of using direct
observation to assess critical thinking dispositions could be deemed questionable. Another
disadvantage is that as the conditions for direct observation are usually formal, the learners‟
responses in informal situations are not observed. In addition, as scoring rubrics are frequently used in direct observation, inter-rater reliability on the use of the rubrics needs to be established.
2.11.4.1.2. Rating Scales
Another way to assess dispositions is through rating scales. The assessor needs to be someone who has known the learners for a certain period of time, such as a teacher, parent, or peer. This method was used in the Competent Children's Project,
a New Zealand longitudinal study on the learning dispositions of children from age 5. The authors
assessed the children‟s „being competencies‟ (communication, inquisitiveness, perseverance, peer
social skills, social skills with adults and independence) over time by asking the children‟s teacher
to describe the child on a five-point Likert scale (Carr & Claxton, 2002). For each competency, for
example „perseverance‟, there are four descriptors: keep trying till resolves problem with
puzzle/toy, persists in problem-solving when creating, good concentration span on things of
interest and makes an effort, even if unconfident. The teacher was asked to judge, over a certain
period, the extent that a given description matched the child (Carr & Claxton, 2002).
One advantage of such rating scales is that they provide comparable data across settings and allow scores to be aggregated for a cohort. Such rating scales are also quick and easy for teachers to fill in. However, as teachers do not record any specific incident, but rather base their ratings on their general perception of students, the ratings tend to be impressionistic. In addition, they do not encourage detailed observation of learners, nor do they help the assessors to understand the kinds of activities which will lead to the development of the dispositions.
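To illustrate how such ratings could be aggregated for a cohort, the following minimal sketch (in Python) averages hypothetical five-point Likert ratings across descriptors and then across children; the competency name, the children, and all values are invented for illustration and are not data from the Competent Children's Project.

import statistics

# Hypothetical teacher ratings (1-5 Likert scale) for the competency 'perseverance';
# each child receives one rating per descriptor. All names and values are illustrative only.
ratings = {
    "child_01": [4, 5, 4, 3],
    "child_02": [2, 3, 3, 2],
    "child_03": [5, 4, 5, 5],
}

# Average across descriptors to obtain one perseverance score per child,
# then aggregate across the cohort so that settings can be compared.
per_child = {child: statistics.mean(scores) for child, scores in ratings.items()}
cohort_mean = statistics.mean(per_child.values())
print(per_child)
print(f"Cohort mean for perseverance: {cohort_mean:.2f}")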
2.11.4.1.3. Learner Self-Assessment
The third category of assessment methods for evaluating critical thinking dispositions consists of those based on self-report or self-assessment by the learners themselves. Examples of self-assessment
instruments are: (i) surveys/ questionnaires, and (ii) reflective learning logs.
2.11.4.1.3.1. Surveys/ Questionnaires
Self-assessment instruments such as surveys or questionnaires usually consist of a statement
followed by a response continuum such as strongly agree, agree, disagree, and strongly disagree.
The subject selects the response that best describes his/her reaction to the statement. One such
questionnaire for assessing the level of critical thinking dispositions is The California Critical
Thinking Disposition Inventory (CCTDI), which consists of 75 “agree-disagree” items to measure
critical thinking dispositions. For example, one of the items from the inventory is “we can never
really learn the truth about most things”. After learners have responded to all the questions, the
CCTDI provides a profile of seven critical thinking dispositions: truth-seeking, open-mindedness,
analyticity, systematicity, critical thinking self-confidence, CT inquisitiveness, and cognitive
maturity (Tishman &Andrade, 1996).
The advantage of questionnaires is that such instruments are easy to administer and score.
However, the respondents of the questionnaires could fake dispositions that they do not have, as
they might choose socially desirable responses (Ennis & Norris, 1990). In addition, questionnaires need considerable time and effort to design and to establish their reliability and validity. One solution is to use existing questionnaires that have already been validated. In the area of critical thinking dispositions, the CCTDI stands out as the main questionnaire that has been tested for reliability and validity. The internal consistency of the overall instrument, based on a typical sample, achieved a Cronbach's Alpha of 0.90, and the Cronbach's Alphas on the seven CCTDI sub-scales range from 0.72 to 0.80 (http://www.insightassessment.com/test-cctdi.html). (See the 'methods' section of this study for details).
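For readers who wish to check internal consistency on their own questionnaire data, the following minimal sketch (in Python) computes Cronbach's Alpha from a matrix of item responses; the function, the sample matrix, and all values are hypothetical and are not taken from the CCTDI.

import numpy as np

def cronbach_alpha(item_scores):
    """Cronbach's Alpha for a (respondents x items) matrix of numeric responses."""
    item_scores = np.asarray(item_scores, dtype=float)
    n_items = item_scores.shape[1]
    item_variances = item_scores.var(axis=0, ddof=1)      # variance of each item across respondents
    total_variance = item_scores.sum(axis=1).var(ddof=1)  # variance of each respondent's total score
    return (n_items / (n_items - 1)) * (1 - item_variances.sum() / total_variance)

# Hypothetical responses of five learners to four agree-disagree items (1-6 scale).
responses = [
    [4, 5, 4, 5],
    [2, 3, 2, 3],
    [5, 5, 6, 5],
    [3, 3, 3, 4],
    [4, 4, 5, 4],
]
print(round(cronbach_alpha(responses), 2))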
2.11.4.1.3.2. Reflective Learning Logs
Reflective learning logs are journals within which students are encouraged to reflect regularly on their learning. To guide the learners' reflection, questions could be given to students. For
example, learners could be asked to describe how well they have applied certain critical thinking
dispositions to complete a given task. The assessor needs to periodically collect the reflection logs
to review students‟ responses. The teacher might also meet students individually to discuss the
responses.
Such journals act as cumulative records of students' learning progress and allow the assessor to gain insights into situations outside of classroom activities, which might help to shed
some light on the dispositions of the learners (Carr & Claxton, 2002). The learning logs also
encourage reflection by the learners and hence can be part of the educational practice that
encourages dispositions (Carr & Claxton, 2002). However, such instruments tend to be time
consuming, specific to the learning activities/tasks and could be unsystematic.
2.11.4.1.4. Essay Tests
Marzano et al. (1993) suggest that students be given a variety of tasks and situations in which
they have the opportunities to demonstrate their understanding and to thoughtfully apply
knowledge and skills and habits of mind in a variety of contexts. They suggest that the teacher
first construct a task that allows the students to apply complex thinking skills. These assessments
should result in an observable performance or tangible product, such as essays, projects or video-taped presentations. These products are then analyzed for evidence of critical thinking
dispositions. Two examples of how essays could be used to evaluate critical thinking dispositions
are briefly described in the next two paragraphs. The first one is the Ennis-Weir critical thinking
essay test and the second one is a study done by Neo and Cheung (2005), who came up with their
own essay test and scoring rubrics.
The Ennis-Weir Critical Thinking Essay Test, which was designed to assess critical thinking
skills, is able to test for certain critical thinking dispositions such as considering alternative
possibilities or explanations. In the Ennis-Weir critical thinking essay test, learners are presented
with a letter written by a member of the public to the editor of a fictional newspaper. In the letter,
the writer makes an argument about a parking problem. Learners are asked to read the letter, analyze and evaluate the thinking shown, and write a letter to the editor in response to each of the paragraphs. Scorers are given a scoring sheet consisting of nine descriptors (e.g., recognizing that there are many ways of preventing accidents and that some of the writer's points are valid) (Ennis and Weir, 1985).
Neo and Cheung (2005) assessed learners‟ critical thinking dispositions by evaluating
argumentative essays done by the subjects in their study. For their study, the authors came up
with their own scoring rubrics, based on Facione‟s list of critical thinking dispositions. The
subjects were asked to give their views on a controversial issue, but they were not informed that
scorers would look out for evidence of critical thinking dispositions. This is to prevent the
subjects from attempting to exhibit the desired dispositions in the essay. Two different scorers
then went through the essays for evidence of critical thinking dispositions. A Wilcoxon Matched-Ranks test was used to analyze the data. Inter-coder reliability for coding the critical thinking dispositions was reflected using the Kappa value, which was obtained by comparing the codings done by the two scorers for the critical thinking dispositions displayed in the essays.
The kappa value for coding the indicators of different dispositions ranged from fair to excellent
(Neo & Cheung, 2005).
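As an illustration of how such agreement and matched-pairs comparisons could be computed, the minimal sketch below (in Python) derives Cohen's Kappa for two scorers' codings and runs a Wilcoxon matched-pairs test on paired scores; all variable names and values are hypothetical and are not drawn from Neo and Cheung's data.

from scipy.stats import wilcoxon
from sklearn.metrics import cohen_kappa_score

# Hypothetical codings by two scorers of the same ten essays
# (1 = disposition indicator present, 0 = absent).
scorer_a = [1, 0, 1, 1, 0, 1, 0, 1, 1, 0]
scorer_b = [1, 0, 1, 0, 0, 1, 0, 1, 1, 1]
kappa = cohen_kappa_score(scorer_a, scorer_b)
print(f"Cohen's Kappa: {kappa:.2f}")

# Hypothetical paired scores for the same eight learners (e.g., pretest vs. posttest).
pretest = [10, 12, 9, 14, 11, 13, 8, 12]
posttest = [11, 14, 12, 18, 16, 19, 15, 20]
statistic, p_value = wilcoxon(pretest, posttest)
print(f"Wilcoxon statistic: {statistic}, p = {p_value:.3f}")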
Limitations
Using essay tests to evaluate critical thinking dispositions has its own limitations and advantages. As Neo and Cheung (2005) pointed out, the use of essay tests to assess critical thinking dispositions is limited by the one-way feedback. Hence, in their study, they focused only on the dispositions of open-mindedness, analyticity, systematicity and truth-seeking.
The Ennis-Weir Critical Thinking Essay Test, for its part, does not discriminate between the influence of disposition and that of ability on performance. Hence, though it can be used to give some indication of learners' critical thinking dispositions, it is nevertheless limited because it was never meant to focus only on critical thinking dispositions.
2.12. Academic ESL/EFL Writing Skills
“Writing Personal Narratives/Opinions (“Telling” What one already knows) is not Similar to
Producing Academic writing, which requires obtaining and transforming knowledge” (Bereiter
and Scardamalia, 1985, 1987, 1989)
In their examination of the writing process, Bereiter and Scardamalia (1985, 1987, 1989)
distinguished two types of writing: knowledge telling and knowledge transforming. They
explained that “telling” about personal experiences or opinions represents the easiest form of
writing production that is accessible to practically all language users, who often perform such
tasks in conversations. For example, writing assignments such as My first day in the United
States, My most embarrassing/happiest day, or My views on abortion/animal research do not
require writers to do much beyond telling what they already know and simply writing down their
memories or opinions in response to the prompt. To produce an essay, writers need to organize
information, often in chronological order, according to a form appropriate within the structure of
composition and in accordance with a few prescribed conventions for discourse organization (e.g.,
overt topic markers and/or lists of reasons – my first reason, the second reason, the third reason,
… in conclusion …) that are also retrieved from memory. In the case of L2 students, such writing
tasks can be produced even within the constraints of limited vocabulary and grammar because
the degree of textual simplicity or complexity demonstrated in the writing is determined by the
writer.
Opinion essays, Bereiter and Scardamalia (1987) further argue, include only two main elements:
statement of belief and reason. Some assignments of this type may involve multiple reasons and,
at slightly more advanced levels of writing, anticipation of counterarguments, as is often expected
of ESL writers in L2 writing instruction dealing with what are often called written arguments (Leki,
1999; Lunsford & Ruszkiewicz, 2001). Opinion writing also necessitates knowledge telling
because stating one‟s views requires little information beyond the writer‟s personal beliefs or
thoughts. In these types of essays, writers can produce text on practically any topic within their
available knowledge without external information or support. Opinion-based written assignments
or essays report personal thoughts in the form of a simple discourse organization that usually
meets the expectation of the genre.
It is important to note that the teaching of L2/EFL writing focuses predominantly on topics
purposely designed to be accessible for L2 learners. Writing prompts in many L2/EFL writing
classes are often highly predictable and may actually require students to produce personal
narratives and experiences (e.g., why I want to study in the United States, holidays in my country,
the person who influenced me most, my family, my favorite sport/pet/book/movie/class/teacher/relative). Opinion essays are also ubiquitous at high
intermediate and advanced levels of pre-university ESL/EFL/EAP instruction because they appear
to be pseudo-academic and are based on short readings: Please read the article/text and give your
reaction/response to (its content) on pollution/gender differences/racial discrimination/the
homeless/urban crime/TV advertising/teenage smoking/human cloning/gays/women in the
military. However, a counterproductive outcome of topic accessibility is that NNS (non-native
speaking) academically bound students have few opportunities to engage in cognitively and
linguistically advanced types of academic writing expected of them in their university-level
courses (Leki & Carson, 1997).
In addition to knowledge telling, the Bereiter and Scardamalia model of writing also addresses a far more psychologically complex type of writing that they called knowledge
transforming. The authors go on to state that knowledge transforming necessitates thinking about
an issue, obtaining the information needed for analysis, and modifying one‟s thinking. This type
of writing leads writers to expand their knowledge base and develop new knowledge by
processing new information obtained for the purpose of writing on a topic. Knowledge
transforming is considerably more cognitively complex than knowledge telling because writers do
not merely retrieve information already available to them in memory, but derive it from reading and integrate it with what is already available, thereby obtaining new knowledge.
Bereiter and Scardamalia emphasized that knowledge telling and knowledge transforming
require different rhetorical and text-generating skills for producing written discourse. Such
important considerations of writing as content integration, expectations of the audience,
conventions and form of the genre, use of language and linguistic features (e.g., lexis and
grammar), logic of the information flow, and rhetorical organization are all intertwined in
knowledge transforming (e.g., defining terms, explaining ideas, and clarifying). In general terms,
Bereiter and Scardamalia described the classical academic model of writing expected in the
disciplines when students are required to obtain, synthesize, integrate, and analyze information
from various sources, such as published materials, textbooks, or laboratory experiments.
Advanced cognitive and information-processing tasks entailed in transforming knowledge and
demonstrating knowledge in writing place great demands on L2/EFL writers‟ language skills.
2.12.1 Teaching Academic ESL/EFL Writing Skills
Although ESL/EFL instruction to non-native speakers (NNSs) takes place in various domains of
language skills, such as reading, speaking, listening, and pronunciation, ESL/EFL learners who
undertake to become proficient writers are usually academically bound. In light of the fact that
most students who prepare to enter degree programs dedicate vast amounts of time and resources
to learn to produce written academic discourse and text, the teaching of English to academically
bound NNS students must include an academic writing component. Although it is a verifiable and
established fact that NNS students need to develop academic writing skills, ESL teachers in EAP,
intensive, and college-level writing programs do not always have a clear picture of the types of
writing and written discourse expected of students once they achieve their short-term goals of
entering degree programs. In particular, students rarely need to be proficient narrators of personal
experiences and good writers of personal stories. In fact what they need is to become relatively
good at displaying academic knowledge within the formats expected in academic discourse and
text. More important, NNS students‟ academic survival often depends on their ability to construct
written prose of at least passable quality in the context of academic discourse expectations
(Hinkel, 2004, p. 17).
Teaching and developing the ESL/EFL written proficiency expected in general education courses and in studies in the disciplines in colleges and universities requires extensive, thorough, and focused instruction in the written discourse genres and formats common in the academy in English-speaking environments. Academic writing requires the production of written academic (rather than personal/opinion) prose. Within such assignments and tasks,
students must produce texts that are academically sophisticated enough to demonstrate their
understanding of and familiarity with the course material (Chang & Swales, 1999; Johns, 1997;
Leki & Carson, 1997).
In an important study that surveyed 77 published research reports on the effectiveness of explicit
grammar instruction, Norris and Ortega (2000) normed the results of the investigations in an
attempt to achieve consistency across various investigative and analytical methodologies. Their
meta-analysis shows that, in grammar learning, focused instruction of any sort is far more effective
than any type of teaching methodology based on focused exposure to L2 without explicit
teaching. They further found that focused L2 instruction resulted in large language gains over the
course of the instructional term and that the effects of the instruction seem to be durable over
time. Furthermore, Norris and Ortega explained that explicit instruction based on inductive or
deductive approaches leads to greater L2 gains than implicit instruction of any sort. Thus, given
that academically bound L2 learners need to make substantial L2 gains to begin their studies, it
seems clear that L2 grammar and vocabulary should be taught thoroughly and intensively.
As reviewed by Hinkel (2004), much recent research has shown that exposure to daily and
classroom interactions, as well as fluency-oriented instruction, does not represent an effective
pedagogical approach to developing syntactic and lexical accuracy (Chang & Swales, 1999; Dudley-Evans & St. John, 1998; Ellis, 2001; Jordan, 1997; Richards, 2002). Although
teachers in academic preparatory and writing programs often believe that they set out to develop
learners‟ academic reading and writing proficiencies, in actuality few are closely familiar with the
types of writing assignments and tasks that NNS students need to perform once they complete
their language training. A teacher of writing would serve academically bound students best
by preparing them for academic writing assignments, particularly those in the more common
forms the students are certain to encounter later in their studies. Within these academic
assignments and tasks, students must produce text that is academically sophisticated enough to
demonstrate their understanding of and familiarity with the course material. Yet few
ESL/EFL/EAP programs undertake to at least expose their students to various types of academic
assignments and require production of written academic (rather than personal) prose (Chang &
Swales, 1999; Johns, 1997; Leki & Carson, 1997).
2.12.2. The Importance of Teaching Academic Writing in the
University
Undergraduate students in Ethiopian colleges and universities, like their peers in other parts of the
world, are required to take general education courses in such areas as EFL, EAP, the sciences, history, philosophy, psychology, and sociology prior to their studies in their chosen
majors. One implication of this structure in the context of college/university education is that the
greatest demand in students‟ language skills occurs during the first two years of their academic
careers, when they are expected to read large amounts of diverse types of academic text, write
many short and long assignments, and take numerous tests and exams.
In the academy in English-speaking countries, the purpose of written assignments and of
examinations and testing is to require students to display their knowledge and familiarity with the
course material. Examinations vary in types and formats, ranging from multiple-choice tests to
lengthy term papers, including essay tests and short essay-like responses. Outside multiple-choice
tests, a great deal of writing is expected in most undergraduate courses, and it is not unusual for
students to have to produce up to a dozen written assignments per term (Horowitz, 1986a). Even
some multiple-choice tests-such as the TOEFL, ACT, or SAT – incorporate an essay component
designed to measure test takers' writing proficiencies. A similar expectation also applies in the ESL/EFL context. The purpose of teaching EFL academic writers is to help them generate and organize ideas into coherent essays and compositions, and to write academic term papers, written assignments, and exams, as is expected of practically all students at undergraduate and
graduate levels.
It is important to note that practically all writing assignments necessitate more than one writing
task, such as exposition in the introduction, followed by cause/effect or comparison/contrast
rhetorical structures, and possibly back to exposition in the conclusion. For instance, most types
of writing assignments can include summaries of published works or syntheses of multiple
sources of information or data. In this case, the writing tasks would include evaluation and
synthesis (or analysis) of information, paraphrasing, and restatement skills.
Beginning in the early 1980s, several studies undertook to investigate the types of writing
assignments and tasks required of undergraduate and graduate students in academic mainstream
courses in various disciplines, such as the natural sciences (e.g., biology, chemistry, and physics),
engineering, business, and the humanities including English (Hinkel, 2004).
2.12.3. Most Important Characteristics of Academic Writing
A survey of 155 undergraduate and 215 graduate faculty in 21 U.S. universities specifically identified the essential NNS students' L2 writing skills in courses ranging across history, psychology, business, chemistry, and engineering (Rosenfeld, Leung, and Oltman, 2001). The responses of undergraduate faculty (Table 2.1) clearly indicate that organizing writing to convey major and supporting ideas and using relevant examples to support them occupy top priority in the quality of academic discourse (ratings of 4.19 and 4.09, respectively, out of 5).
Table 2.1. Undergraduate Faculty Assessments of Some Writing Tasks
Task Statement                                                                  Mean Importance Rating
Organizing writing to convey major and supporting ideas.                                         4.19
Use relevant reasons and examples to support a position.                                         4.09
Demonstrate a command of standard written English, including grammar,
phrasing, effective sentence structure, spelling, and punctuation.                               3.70
Demonstrate facility with a range of vocabulary appropriate to the topic.                        3.62
Show awareness of audience needs and write to a particular audience or reader.                   3.33
Note. Mean Importance Rating on a scale of 0 to 5.
In addition, demonstrating command of standard written English, “including grammar, phrasing,
effective sentence structure, spelling, and punctuation," is another high-priority requirement (rank
3.70), as well as demonstrating “facility with a range of vocabulary appropriate for the topic”
(rank 3.62). On the other hand, showing awareness of audience needs and writing to a particular
audience/reader was not found to be as important (rank 3.33). In addition to the faculty,
undergraduate students ranked written discourse organization skills at 4.18; grammar, phrasing,
and sentence structure at 4.15; and appropriate vocabulary at 3.69.
Graduate faculty (Table 2.2) identified largely similar priorities in student writing with regard to
the importance of information/discourse organization and examples (ranks 4.46 and 4.34,
respectively), grammar, phrasing, and sentence structure (rank 4.06), and appropriate vocabulary
(3.74). On the other hand, graduate students ranked discourse organization and exemplification at
4.32 and 3.96, respectively; grammar, phrasing, and sentence structure at 3.83; and vocabulary at 3.56 (i.e., below the importance rankings assigned by graduate faculty in practically all
categories).
In a separate subset of survey items, both undergraduate and graduate faculty also specified the
specific writing skills that in their experiences determined the success of NNS students in their
courses. For undergraduate faculty, the top three L2 writing skills included (in declining order of
importance): Discourse and information organization (2.40 out of 3); Standard written English
(i.e., grammar, phrasing, and sentence structure; 2.35); Vocabulary (2.26).
Among graduate faculty, the top three skills essential for success in academic courses consisted
of: Information/discourse organization (2.49 out of 3); Command of standard written English
(2.37); Using background knowledge, reference materials, and other resources to analyze and
refine arguments (2.35). The employment of appropriate vocabulary received a ranking of 2.27.
Table 2.2. Graduate Faculty Assessments of Some Writing Tasks
Task Statement                                                                  Mean Importance Rating
Organize writing to convey major and supporting ideas.                                           4.46
Use relevant reasons and examples to support a position.                                         4.34
Demonstrate a command of standard written English, including grammar,
phrasing, effective sentence structure, spelling, and punctuation.                               4.06
Demonstrate facility with a range of vocabulary appropriate to the topic.                        3.74
Show awareness of audience needs and write to a particular audience or reader.                   3.62
Note. Mean Importance Rating on a scale of 0 to 5.
The Rosenfeld, Leung, and Oltman (2001) study demonstrated unambiguously that L2 grammar
and vocabulary skills play a crucial role in student academic success (and obviously survival).
The teaching of academic discourse organization is crucially important in L2 writing instruction.
It would be no exaggeration to say that the teaching of L2 academic writing focuses
predominantly on the features of discourse organization. However, markedly few course books on
L2 writing for either students or teacher training address the importance of text features in L2
instruction. As mentioned earlier, however organized the information flow can be in student
writing, it may be impossible to understand without an essential clarity and accuracy of text
(Hinkel, 2004, p.19).
2.12.4 Most Common Student Written Academic Assignments and Tasks
Though a large number of textbooks and course books that focus on writing skills are available, there is hardly a single course book on EFL academic writing skills, for either students or teacher training, that addresses 'what academic writing is' and 'written academic assignments and tasks' in a way that clearly matches the purpose of the course (Academic Writing Skills Course). The most
comprehensive study of academic writing tasks was carried out by the Educational Testing
Service (Hale et al., 1996), which surveyed eight large comprehensive universities in the United
States. The information discussed in this investigation is summarized by indicating the most
common and important types of student written academic tasks and assignments, together with their specified page lengths. These include:
2.12.4.1 Major Writing Assignments
Major academic essays typically have a specified length of 5 to 10 pages or more. These
papers predominantly take the forms of out-of-class assignments and are required far more
frequently in humanities courses such as psychology, economics, history, and English than in the
sciences, engineering, or computer science. Most of these projects also necessitate library
research and syntheses of literature and relevant information from a variety of sources. According
to Hale et al.'s (1996) findings, undergraduate courses in the sciences and engineering rarely expect
students to write papers as long as 5 to 10 pages, and most of these types of essays are expected in
English department courses.
2.12.4.2 Medium-Length Essays and Short Written Tasks
Medium-length essays between 1 and 5 pages are required as in-class and out-of-class
assignments in practically all disciplines, with the exceptions of undergraduate courses in physics,
mathematics, and engineering. In social sciences and humanities studies, they are expected in a
majority of undergraduate courses. Similarly, short written assignments of about 0.5 to 1.5 pages
represent course components in approximately half of all undergraduate courses, including
physics, math, and engineering, and 94% of English courses (Hale et al., 1996). These essays are
assigned both in and out of class in undergraduate and graduate studies alike. Among these
assignments, library research reports, laboratory or experiment reports with or without
interpretations, and book reviews represent the most common types of writing.
Short writing tasks (also called expanded answers), found in many written in-class and out-of-class exams, laboratory reports, case studies, annotations of literature, and computer program
documentation assignments constitute the most common type of writing across all disciplines and
courses. Furthermore, short writing assignments are significantly more common in undergraduate
than graduate courses and in in-class than out-of-class assignments.
2.12.4.3 English Composition Writing Tasks
English composition instruction often provides the model for teaching writing in EAPs.
According to the Hale et al. (1996) study, short writing tasks are far less common in English than
in social or natural sciences (29% of all in-class assignments vs. 53% and 79%, respectively). On
the other hand, out-of-class essays are required in 94% of all English courses, for example,
compared with 53% in social and 47% in natural sciences. Among the assignment types, summaries and unstructured writing genres defined as free writing, journal entries, or notes, all of which consist of writing down one's thoughts and experiences, are found almost exclusively in English courses, which also assign twice as many library research papers as other disciplines.
Major papers of 5 to 10 pages in length are assigned in 41% of English courses and only rarely in
social science courses. Similarly, 1-to 5-page essays are required in 82% of English courses
versus 39% of those in social sciences and 21% in physical/natural sciences.
2.12.5 Essential Features of Academic Text and Importance of
Teaching Them
2.12.5.1 Features of Academic Genre and Text
Research into various types of discourse and text (Biber, 1988; Biber, Johansson, Leech, Conrad & Finegan, 1999; Swales, 1990a) has shown clearly that academic discourse and text
are constructed differently than other types of text, such as fiction, news reportage, or personal
correspondence. In fact, Swales (1990a) identified what he called “the academic discourse
community” (p.5), which prescribes somewhat rigid forms of discourse construction and
organization combined with similarly inflexible expectations of vocabulary and grammar uses.
Biber et al. (1999) examined a large corpus of specific micro-features of texts in diverse spoken
and written genres, and their findings are described in a 1,200-page volume. Their analysis
includes the cross-genre uses of nouns, determiners, pronouns, verb tenses, and semantic classes
of verbs, adjectives, adverbs, and clauses. Textual uses of practically all features indicate that
written academic text is markedly distinct from many other types of texts, such as personal
narrative, conversation, and even academic lectures (Hinkel, 2004).
Other corpus studies investigated frequencies of use of various lexical and syntactic features
employed in academic and other types of text to elucidate the differences between the academic
and other types of written genres. For example, these examinations focus on various hedges,
modal verbs, epistemic adjectives, adverbs, and nouns (Collins, 1991; Hoye, 1997; Hyland, 1998), as well as classes of collocations, idioms, synonyms, adverb clauses, and text-referential cohesion (Partington, 1996). These studies expanded the current knowledge base regarding
specific structures and lexical features of written academic text, as well as other common features
of text such as noun and prepositional phrases, stock phrases, and collocations (Kennedy, 1991;
Kjellmer, 1991; Renouf & Sinclair, 1991).
For instance, analyses of large corpora have led to the development of pattern grammar to identify
combinations of words that occur relatively frequently in academic texts and that may be
dependent on a particular word choice to convey clear and definable meaning (Hunston & Francis, 2000). Because of great strides made in corpus analyses in the past few decades, today
much more is known about frequent patterns of verb, noun, and adjective uses and variations in
their meanings, as well as the syntactic and lexical contexts in which particular lexical and
syntactic features occur.
Although findings of text and corpus analyses of the written academic genre may not be directly
applicable to classroom instruction and studies of student texts, they provide insight into
discourse and text conventions of published academic and other types of texts. Furthermore, they
often help explain how written academic prose is constructed and, by implication, can inform
writing instruction and pedagogy. An additional benefit of corpus studies is that they shed light on
how enormously complex and frequently lexicalized the uses of language and text in the
academic genre actually are.
2.12.5.2 Teaching Academic Text Features
With regard to teaching academic text features, several researchers have found the English composition essays and pedagogical essays (Johns, 1997) ubiquitous (existing or being everywhere) in English for Academic Purposes (EAP) programs to be dramatically different from those students are required to write in the disciplines. Among other researchers, Horowitz
(1986a) identified some of the writing tasks in undergraduate humanities courses. According to
his findings, these include:
 Summary/reaction to journal article or reading
 Annotated bibliography in such disciplines as biology, lab, and experiment reports
 Connections between theory and data
 Synthesis of multiple literature sources
 Various research projects
Horowitz further noted that these assignments do not include invention and personal discovery and that "the academic writer's task is not to create personal meaning but to find, organize, and present data according to fairly explicit instructions" (p. 452). According to the author, sentence-level grammar, use of discourse markers, and clarity of academic text remain "vital" (p. 454) in the teaching of academically bound NNS students.
In the 1980s, several studies endeavored to learn about the reactions of faculty to particular
features of NNS students‟ text (Johns, 1981; Ostler, 1980; Santos, 1988; Vann, Lorenz, and
Meyer, 1991; Vann, Meyer, and Lorenz, 1984). Most professors in the disciplines are not well
versed in the complexities of ESL instructions or L2 learning and acquisition. Nonetheless, their
perceptions of text quality are important because they are the ones who grade students‟
assignments. According to the results of these studies, the employment of syntactic, lexical, and
discourse features of text and errors in the uses of these features have an influential effect on the
perceived quality of students‟ text. Although sentence-and phrase-level errors are often seen in
relative rather than absolute terms, the problems in students‟ uses of verb tenses, word order,
subordinate clauses, passive voice, and impersonal constructions have been found to obscure the
text‟s meaning. In the view of faculty in various disciplines, such as physical and natural sciences,
humanities, business, and the arts, accuracy in the uses of these and other syntactic and lexical
features is very important and, in most cases, syntactic and lexical errors result in lower
assignment grades.
When thinking about the importance of accuracy in the academic writing of NNS students, many
ESL and EAP teachers believe that syntactic and lexical errors in L2 texts are not particularly
damaging because NS writers also make numerous mistakes in their texts. However, several
studies have found that faculty in the disciplines have a far more critical view of ESL errors than of those made by NSs (Santos, 1988; Vann et al., 1984, 1991). Although the indications of error gravity
vary across disciplines and even vary according to the age of faculty, the conclusions in all
investigations largely remain similar: ESL errors in students‟ texts are costly in terms of grades
and overall evaluations of work quality.
To determine whether the needs of academically bound NNS learners were adequately addressed
in EAP writing instruction, Leki and Carson (1997) interviewed a large group of students who
began their ESL training and then continued their studies in various disciplines, such as
engineering, biology, business, communications, and social work. The students reported great
differences between the demands of writing in EAP classes and those in the disciplines. Among
other important considerations, many students identified shortfalls in their vocabulary repertoire
and a lack of familiarity with the dry academic textual style. Most important, the students spoke
about the fact that EAP writing instruction represents what Leki and Carson called "non-text-responsible writing" (p. 63), whereas in the disciplines students are held accountable for the content of the texts they read and the content and accuracy of the texts they produce. The authors
concluded that what is valued in writing classes that emphasize personal growth and experience is
distinct from that necessary in academic writing in the disciplines. They further stated that EAP
writing instruction has the responsibility for preparing students for “real” academic courses
because without adequate exposure to the demands of academic writing students are essentially
left to their own devices once their EAP training is completed.
Johns (1997) explained that the narrow focus of writing instruction in EAPs and its focus on
experiential essays is often based on the principle that, "if you can write (or read) an essay, you
can write (or read) anything” (p.122). She pointed out that in mainstream courses the expectations
and grading of writing are different from those of ESL/EAP faculty. In fact she commented that
when NNS students are exposed to largely one type of writing task, they come to believe that
"this is the only way to write." Such limited experience with writing actually does students a
disservice and causes problems in their academic and professional careers.
Like Horowitz, Johns emphasized the importance of text in students' academic writing. She
emphasized that faculty often complain that students do not use vocabulary and data with care.
However, in her view, because personal essays are highly valued in ESL and EAP writing
instruction and because many instructional readings are in story form and/or simplified
specifically for NNS readers, students are not exposed to the precision often expected in much of
the academic prose. Furthermore, considerations of academic objectives often conveyed by
lexical and syntactical means, such as uses of personal pronouns and passive voice, are in conflict
with those features of text encouraged in personal essays. Johns emphasized that the formal register of academic text is rarely addressed in ESL/EAP writing instruction, but should be if
students are to attain the proficiency necessary for their success in mainstream academic courses.
In other studies, Dudley-Evans and St. John (1998) also stated that "the process approach (to teaching L2 writing), although extremely valuable in helping students organize and plan their writing, has failed to tackle the actual texts that students have to produce as part of their academic
or professional work” (p.117). They also noted that in the United States, most of those who
advocate a process approach see the teaching of generalized strategies of planning, writing, and
revising as sufficient and believe that a detailed analysis of academic texts lies beyond the job of
the writing teacher (Raimes, 1993; Zamel, 1983). However, according to Dudley-Evans and
St. John, considerations of end-product quality in L2 writing are important in academic and
professional writing, and combining the strengths of both the product-and process-oriented
approaches to the teaching of writing can lead to overall improvements in L2 writing instruction.
2.12.6 Types of Academic Writing Tasks in Commonly Required Academic
Courses
The discussion of writing tasks in this section relies on the findings of Hale et al. (1996), who surveyed the writing requirements in eight comprehensive U.S. universities. Overall, the types of writing expected in undergraduate and graduate studies do not seem to vary greatly with regard to the rhetorical and discourse patterns they elicit. Most assignments combine several rhetorical tasks
(e.g., exposition and analysis in business case studies or history essays).
2.12.6.1 Most common Types of Academic Writing Tasks
The most common types of rhetorical formats found in in-class and out-of-class assignments are (in declining order of frequency):
 Exposition (short tasks required largely in introduction and explanation of material or content to follow, and thus a component of all assignment types)
 Cause-effect interpretation (by far the most prevalent writing task, found in over half of all writing assignments)
 Classification of events, facts, and developments according to a generalized theoretical or factual scheme
 Comparison/Contrast of entities, theories, methods, analyses, and approaches (in short assignments)
 Analysis of information/facts (in medium-length assignments)
 Argumentation based on facts/research/published literature (in medium-length assignments)
2.12.6.2 Less Common Writing Tasks include:
 Expanded definition (list common in medium-length and out-of-class assignments)
 Process analysis in such disciplines as political science, economics, sociology,
psychology, accounting, marketing, and management (hardly ever found in out-of-class
assignments)
 Fact-based exemplification of concepts and theoretical premises and constructs (overall
least common in both in-class and out-of-class assignments)
 Narration/Description, which was not found in any assignments in the disciplines or in English courses.
In general, the frequency of rhetorical patterns does not seem to differ greatly among the writing
tasks in undergraduate and graduate courses. Specifically, cause-effect essays can be found in
over half of all written tasks in in-class and out-of-class assignments, with exemplification, process
analysis, and definition being comparatively least common.
1. Exposition rhetorical tasks require writers to explain or clarify the topic/subject. In
general terms, exposition is entailed in expressing ideas, opinions, or explanation
pertaining to a particular piece of knowledge or fact.
2. Cause-effect interpretation tasks deal with establishing causal reasoning. Most
assignments of this type include a discussion or an explanation of a cause-effect
relationship among events or problems, identification of causes or effects, and a presentation of a problem solution in the case of problem-solution tasks.
3. Classification of events, facts, and developments assignments involve cognitive tasks in
which writers are expected to determine what types of group members share particular
features or characteristics. Therefore, students are required to classify clusters or groups
of objects, events, or situations according to their common attributes, create a system to
classify objects or events, and list them based on this classification.
4. Comparison/Contrast tasks expect writers to discuss or examine objects or domains of
knowledge by identifying their characteristics/properties that make them similar or
different. In general, the purpose of such assignments is to identify the specific points that
make objects, events, or situations similar and/or different as well as explain one in terms
of the other.
5. Analysis of information or facts (in medium-length assignments) requires writers to
separate a whole into elements or component parts and identify relationships among these
parts. Other types of analysis assignments include applying theories or interpretive
methods to the object of analysis or a particular school of thought, distinguishing facts
from theories, evaluating the validity of stated or unstated assumptions and/or various
types of relationships among events, identifying logical fallacies in arguments, or
specifying the author‟s purpose, bias, or point of view.
6. Argumentation assignments largely represent a form of exposition that includes an
element of persuasion. Therefore, the rhetorical purpose of these writing tasks extends
beyond the presentation, explanation, or discussion to convince the reader of a particular
point of view. In argumentation tasks, the writers are required to recognize that issues
have at least two sides and present the facts or information to develop a reasoned and
logical conclusion based on the presented evidence. In practically all assignments,
presentations of unsupported assertions are not considered to be argumentation (Hale et
al., 1996).
2.12.6.3 Less Common Rhetorical and Writing Tasks
Three types of writing tasks appear markedly less common than those discussed earlier:
definition, process description, and exemplification.
1. Expanded definition assignments consist of explanations of exact meanings or
significance of a phrase or term. Usually these assignments consist of defining the term,
listing the concept to which the term belongs, and specifying the attributes that
distinguish it from others in its class.
2. Process Analysis involves directions on how someone should do something or how
something should be done, including chronological details in a series of
steps/operations/actions necessary to achieve a particular result or happening. In most
cases, a discussion of reasons for the steps and negative directions is needed.
3. Exemplification and Illustration largely deals with expanding on theories/ concepts/ideas
and providing reasonable amounts of detail to explain a type, class, or group of objects,
or events by presenting examples. These assignments largely rely on a general-to-specific discourse organization flow (Hale et al., 1996, as cited in Hinkel, 2004).
2.13. Critical Thinking and L2/Foreign Language Learning
Following the recognition of the centrality of critical thinking in general education, foreign and
second language learners, teachers, and researchers are now increasingly coming to grasp the
concept and sketch its application in language learning and teaching. For one, Birjandi and
Bagherkazemi (2010) found a high positive correlation between Iranian EFL teachers' critical thinking ability and their student-evaluated professional success. They state that the general trend toward communicative language teaching, with its emphasis on the process rather than the product of learning, embodies a greater concern with critical thinking, and believe that, in order to develop in learners the ability to think critically, teachers themselves must possess this dispositional and cognitive capacity.
But the question is: Is critical thinking teachable in an L2 classroom? Atkinson (1997) has his
doubts, offering four reasons:
1. Critical thinking is a kind of social practice, with no easily definable pedagogical set of
behaviors;
2. Critical thinking is exclusive and reductive in nature.
3. Critical thinking may not be valued by some non-native cultures.
4. Critical thinking skills are not transferable beyond their context of instruction.
Davidson (1998) takes a critical stance against Atkinson, using Siegel's (1989) terminology, "self-reflective justificatory strategy", meaning that even to make a case against critical thinking, one has to presuppose its validity, that is, to be a critical thinker. Insofar as the cultural bias of critical
thinking is concerned, he maintains: Part of the English teacher‟s task is to prepare learners to
interact with native speakers who value explicit comment, intelligent criticism, and intellectual
assertion. Maybe even more than the L1 teacher, we as L2 teachers have good reason to introduce
higher level students to aspects of critical thinking. If we do not, our students may well flounder
when they are confronted with necessity of thinking critically, especially in an academic setting
(p.121).
There are also some studies that confirm that critical thinking is teachable in a language learning
environment. For one, in a pilot study using a critical thinking essay test, a treatment group of
Japanese college students receiving supplemental instruction in critical thinking skills
significantly outperformed a control group receiving only content-based, intensive academic
English instruction (Davidson and Dunham, 1997).
Oster (1989) points out the significance of critical thinking in American and European
universities, and believes that a language pedagogy should help learners develop this ability. To the researcher's best knowledge, such questions (whether or not critical thinking is teachable in an EFL context) have not yet been empirically approached by EFL researchers in Ethiopia. In an attempt to initiate research on such issues, the present study was undertaken to determine to what extent explicit instruction in critical thinking helps students in a foreign language context to perform successfully in their EFL academic writing.
2.13.1 Critical Thinking and Academic EFL/ESL Writing
Writing is probably the most important skill for second language learners in academic contexts. Writing activities are among the best ways to teach critical thinking, because writing forces students to organize their thoughts, think deeply about their topic, and present their conclusions in a persuasive manner. Goatly (2000) states that one reason we might expect writing to improve critical thinking is the existence of genres, such as persuasive or argumentative writing, that have traditionally been difficult for students.
A number of philosophers, psychologists, language educators, writers, and teachers of writing (Chaffee, McMahon and Stout, 2002; Sims, 2009; Norris and Ennis, 1989; Glaser, 1941; Paul, Fisher and Nosich, 1993; Maimon, Peritz and Yancey, 2008; Forster and Steadman, 1952; Sumner, 1906; Fisher and Scriven, 1997; Raimes, 1983), for instance, have long recognized the intricate relationships between the extraordinary human processes of thought and language, and emphasized the strong relationship between critical thinking and writing skills. They argue that an integrated approach to teaching thinking and writing skills is of great importance for the success of first-year composition students in academic work. This means that integrating
critical thinking with writing instruction is one useful strategy of presenting and practicing
language and it helps students to develop higher-order thinking abilities and learn to articulate
their ideas through writing. Paul and Elder (2009a) advocated that a critical thinking curriculum
rich with reading and writing strategies is important in English language classrooms (p. 287).
Theoretical assumptions made by a number of scholars in the field indicate that writing is thought
to contribute to the development of critical thinking skills (Kurfiss, 1988). Canagarajah (2002)
articulates that critical thinking brings into sharper focus matters that are always there in writing.
It develops an attitude and a perspective that enable us to see some of the hidden components of
text construction and the subtler ramification of writing. Writing has been widely used as a tool
for communicating ideas, but less is known about how writing can improve the thinking process
(Rivard, 1994; Klein, 2004). Champagne (1999), Kelly (1999), and Hand (2002) comment that writing is thought to be a vehicle for improving student learning, but too often it is used as a means to rehearse content knowledge and drive prescribed outcomes (Keys, 1999). Applebee (1984) suggests that writing improves thinking because it requires an individual to make his or her ideas explicit, and to evaluate and choose among tools necessary for effective discourse. Resnick (1987) believes that writing should provide an opportunity to think with arguments, which could serve as a "cultivator and an enabler of higher order thinking." Marzano (1991) suggests that writing is a means to restructure knowledge that improves higher-order thinking. In this context, writing may provide opportunities for students to think through arguments and use higher-order thinking skills to respond to complex problems. Writing has also been used as a strategy to improve conceptual learning (Applebee, 1987; Ackerman, 1993). Subsequent work has focused on how writing within disciplines helps students to learn content and how to think. Specifically, writing within disciplines is thought to require deeper analytical thinking (Langer et al., 1987), which is closely aligned with critical thinking. The influence of critical thinking on writing skills, and/or vice versa, is less defined in English Language Teaching (ELT). Researchers have repeatedly called for more investigation of the influence of one over the other in English for promoting critical thinking and writing skills. This research, therefore, is an attempt to address and find an answer to this issue.
In fact, Michael (1998) proposes a cycle of engagement and reflection that forms the cognitive
engine of writing. An engaged writer devotes full mental resources to transforming a chain of
associated ideas into written text. The cycle of critical thinking proposes that the writer should
bring the current state of the task into conscious attention, as a mental representation to be
explored and transformed.
Perhaps the most relevant study to address the issue of improving critical thinking in English
language classes was done by Pullen (1992). It was reported that the English department was successful in fostering greater critical thinking skills, as reflected in improved student test scores. Paul (1997) also argues that there are two essential dimensions of thinking that students
need to master in order to learn how to upgrade their thinking: (a) they need to be able to identify
the parts of their thinking, and (b) they need to be able to assess their thinking. Paul refers to the
parts as the elements of reasoning (see Appendix A), which he assessed through the standards of
reasoning (see Appendix A).
2.13.2 The Relationship of Critical Thinking to Creative Thinking and
Thoughtful Writing
Not many years ago, 'thinking' was assumed to be an activity of the mind with little or no relation to language (Forster and Steadman, 1952). According to this theory, people could be successful thinkers, or could improve their thinking through training, without being able to 'express' their ideas to other people in words. Language was assumed to be as different from thought as a freight car is different from the goods it carries. Today, however, psychologists are more inclined to believe that thinking (at least the kind of thinking we are concerned with in a college) is silent talking; that is, deliberate thinking is making up meaningful sentences in a language. According to this latter theory, a person cannot have a useful thought that he cannot state in words (pp. 3-4).
Chaffee, McMahon and Stout (2002:17) have indicated the reciprocal relationships among writing
thoughtfully, thinking creatively, and thinking critically. According to them, when we first decide
to write something, we need to come up with some initial ideas to write about. Our ability to think
creatively makes producing such ideas possible. When we think creatively, we discover ideas –
and connections among ideas – that are illuminating, useful, often exciting, sometimes original,
and usually worth developing. They, therefore, define thinking creatively as discovering and
developing ideas that are unusual and worthy of further elaboration.
Simultaneously (or almost simultaneously), these beginning ideas find form in language
expressed in writing. Yet the process of writing thoughtfully elaborates and shapes the ideas that
we are trying to express, especially if we are to bring our critical thinking abilities to bear on this
evolving process. This extraordinarily complex process typically takes place in a very natural
fashion, as creative thinking and critical thinking work together to produce thoughtful writing
which in turn gives form to our ideas and communicates them to others.
According to Chaffee et al. (2002), effective writers not only use each of these processes but also
are able to integrate them. For example, it is impossible to write thoughtfully without creating
ideas that reflect our vision of the world or without using our critical thinking abilities to evaluate
the accuracy and intelligibility of our writing (p.4).
Writing, with its power to represent our thoughts, feelings, and experiences symbolically, is the
most important tool our thinking process has (Chaffee, McMahon and Stout, 2002). “Writing and
thought,” it is argued by these educators, are intimately related. Used together, thinking and
writing enable us to communicate meaning (p.4). Zinsser (1983) also discusses the need to get
words out of our heads and on to paper. According to him, the actual writing of words on a page
is only part of a more extended process of thinking, collecting information, and reviewing that
goes into manuscript production.
Thinking critically, that is, carefully exploring your thinking process, is one of the most satisfying aspects of being a mature, educated human being. Analogously, writing thoughtfully involves thinking critically as you move through the process of writing so that you can express your ideas effectively (Chaffee et al., 2002:5).
According to Vygotsky (1987:203), when children enter school they have already acquired a great deal of procedural knowledge of the grammar and lexicon of their own (native) language; however, in most cases they are unaware of this knowledge and have difficulty in making it explicit. In learning to write (what Vygotsky called 'written speech'), however, children are forced to create the situation or, more accurately, to represent it in thought. Therefore, writing 'presupposes a fundamentally different relationship to the situation, one that is freer, more independent, and more voluntary' (ibid.). In speech, children are not conscious of how they
pronounce each segment of an utterance. In learning to write they become aware of each of the
relevant segments, individual sounds, the strings of letters that represent words, the strings of
words that represent phrases and full sentences - and, above all, the meanings that are encoded
through these various strings. Importantly, all of this is produced voluntarily and consciously.
Moreover, it is argued by Vygotsky (1987) that the relationship between inner speech and writing is different from the relationship between inner speech and oral language. For one thing, oral speech in childhood precedes the development of inner speech, while writing presupposes the existence of inner speech. Once inner speech is fully formed, because of the nature of face-to-face communication, the transition from inner to oral speech does not require maximal syntactic specificity to be understood by an interlocutor. The circumstances in which the interaction unfolds are used by the speaker and interlocutor to flesh out intended meanings. Written language is another matter. In writing, children must learn to externalize their inner speech in maximally explicit syntax in order to be understood by a (potential) interlocutor usually displaced in time and space.
Furthermore, as a better reader and thinker, and a clearer and more persuasive writer, one should
be able to meet the demands of different academic writing situations, that is, the critical thinking
dimensions of writing. These, according to Ramage, Bean and Johnson (2003), are learning how
to: (1) Pose a significant question about a topic, (2) Deepen your thinking about a question
through exploratory writing and talking, (3) Create an effective thesis aimed at changing your
readers‟ view of your topic, (4) Support your thesis with points and particulars, (5) Imagine
alternative positions and viewpoints, (6) Examine underlying assumptions and values, (7)
Analyze, synthesize, and evaluate ideas, (8) Integrate your own ideas with those of others, and (9)
Summarize a writer‟s argument and speak back to it both with and against the grain.
2.13.3 Teaching Critical Thinking in Academic EFL Writing Classes
Along with correlational and descriptive research mentioned above, experimental and case study
research shows that critical thinking is most effectively taught when critical thinking and
composition are taught in an integrated manner (Hatcher, 1999; Tsui, 2002; Hatcher, 2006;
Quitadamo & Kurtz, 2007). Quitadamo and Kurtz (2007) examined this specific question
experimentally in a General Education Biology class at a university about twice the size of
USMA. They showed that students who completed written assessments had significantly higher
gains in critical thinking than those who completed only multiple choice assessments.
According to data from published assessment research and their own end-of-course survey data, Quitadamo and Kurtz (2007) suggested that we should tackle the challenge of enhancing critical thinking using some form of writing. The challenge that remained was that it was not at all
clear how we could implement critical thinking through writing in a coherent fashion. As
previously mentioned and even in Robert Ennis‟s “streamlined conception of critical thinking”
(1991), critical thinking is a complex skill, or rather set of skills. Therefore, any writing approach
to enhance critical thinking certainly could not be accomplished by adding only one or two new
writing assignments to the course. In short, tackling this issue would require a new overarching
and integrated pedagogical approach that was specifically focused on analytical writing.
The actual teaching of critical thinking, as pointed out by Ennis (2011), Sims (2009), Buchanan (2003), Chaffee, McMahon and Stout (2002), and Paul (1985, 1990), is a function of many situation-specific factors, strategies, and activities. Critical thinking strategies encourage teachers to actively teach students how to think rather than simply provide them with content knowledge alone. Critical thinking strategies also support the differing learning styles within a classroom. Students learn and excel when provided with multiple, varied opportunities. A classroom that offers an array of learning experiences increases the likelihood of success for more students (Gardner, 1983; Dunn and Dunn, 1978).
Studies involving multi-sensory teaching experiences show that students achieve more gains in
learning than when taught with a single approach, whether it is a visual or an auditory approach
(Farkas, 2003; Maal, 2004). Multi-sensory instruction or a combination of approaches appears to
create the optimal learning setting, even for students with disabilities (Clark and Uhry, 1995). The
variety in formats for students to demonstrate their learning has the potential to improve student
interest, increase students‟ interaction, and extend classroom learning.
Critical thinking is a complex activity and we should not expect one method of instruction to
prove sufficient for developing each of its component parts (Carr, 1990). Mendelman (2007) stated that a strong critical thinking program should be designed to progress gradually from the basic to the complex. Teachers need to scaffold specific thinking strategy instruction, beginning with basic questioning strategies and then building toward inference, analysis, synthesis, and evaluation skills. "To ensure development of critical thinking strategies, implementation of instructional activities that provide an opportunity for discussion related to topics, concepts, and intellectual skills are necessary" (Hayes and Devitt, 2008, p. 66).
Bruning et al. (2004) pointed out that the most effective educators teach critical thinking skills in a sequential, orderly fashion (p. 187). In the light of this, I here suggest some general strategies, activities, and visuals gleaned from the experience of critical thinking teachers, books, and research articles on the area: teaching general critical thinking skills and principles as a stand-alone component, explicitly and directly (see Chapter I), and then integrating critical thinking into regular academic writing skills content. This is because the mixed approach to critical thinking instruction (as reviewed in the literature) combines elements of both the general and the subject-specific approaches: it pairs stand-alone instruction in general critical thinking principles with the application of critical thinking skills in the content of a specific subject matter. The strategies used here are thus higher-order critical thinking questioning; cooperative learning in small groups; self- and peer review with critical thinking checklists and critique forms; critical reading; writing portfolios; and visuals (including Paul's model for Critical Thinking and Numrich's model of Critical Thinking Tasks). As Sims (2009) points out, these strategies are the most successful ways to engage students' critical thinking skills and get them to take responsibility for their learning to write academic essays/papers.
2.13.3.1. Techniques in Using Critical Thinking Questioning
A key strategy for promoting thinking is the use of questions. Questions form a core element of
all verbal interactions (Sullivan and Clarke, 1991, cited in Ayaduray and Jacobs, 1997). Asking questions, according to Wade (1995), is one of the most important characteristics of critical thinking involved in learning. Questioning has been the cornerstone of philosophy and education ever since Socrates (469 - 399 BC). It has been central to our development of thinking and our capacity to
learn. Questions enable students to think through the content they are expected to be learning and
lead to understanding. If we, as educators, want students to think, we must stimulate and cultivate
thinking with questions (Paul, 1990). According to Paul, by engaging students in a variety of
questioning that relates to the idea or content being studied, students develop and apply their
critical thinking skills. Consequently, by using the analysis, synthesis, and evaluation levels (that
is, Bloom‟s critical thinking questioning strategies - a guide to higher level thinking), students are
challenged to work at tasks that are more demanding and thought-provoking.
Paul (1985) points out that “thinking is not driven by answers but by questions”. According to
Paul, the driving forces in the thinking process are the questions. When a student needs to think
through an idea or issue or to rethink anything, questions must be asked to stimulate thought.
When answers are given, sometimes thinking stops completely. When an answer generates
another question, then thought continues.
Studies (for example, Paul et al., 1993; Ayaduray and Jacobs, 1997; and, from local studies, Seime, 1999) suggest that teachers need to ask questions and design learning experiences that turn on students' intellectual thinking engines. Students can generate questions from teachers' questions
to get their thinking to move forward. Thinking is of no use unless it goes somewhere, and again,
the questions asked or the activities selected to engage students in learning determine the
direction of their thinking. While students are learning, the teacher could ask questions to draw
meaning from the content. The higher order stems contained in the critical thinking strategies
(analysis, synthesis, and evaluation) drive students‟ thinking to a deeper level and lead students to
deal with complexity, rather than just search through text to find an answer.
Research also shows that instruction involving questioning is more effective than instruction without questioning. The key is that the teacher who fosters critical thinking fosters reflectiveness in students by asking questions that stimulate thinking essential to the construction of knowledge. Questioning, according to this research, is one of the research-based strategies presented in Classroom Instruction That Works (Marzano, Pickering, and Pollock, 2001).
Types of questions
Bloom (1956) is well known for his work in differentiating questions according to the type of
cognitive activity they stimulate. According to Bloom‟s critical thinking questioning strategies,
lower order questions generate more superficial thought, e.g., recall of information, while higher
order questions (the focus of the current study) are those which stimulate learners to think more
deeply, e.g., through application, analysis, synthesis, or evaluation of information. This is because higher order questions require students to apply, analyze, synthesize, and evaluate information instead of simply recalling facts (Ayaduray and Jacobs, 1997). Such deep thinking, as Allwright (1979) notes, is also in line with the communicative approach to L2 learning, which highlights the
importance of meaningful interaction, rather than rote repetition or study of language as object.
Recent studies on brain-based teaching and language learning have also examined the relationship between thinking and higher order questions. Christison (2002), for example, indicates that instead of asking only questions that require statements of fact or yes/no answers, teachers should ask more thought-provoking questions. Redfield and Rousseau (1981), in their meta-analysis of research on teacher questioning behavior, reported that the better the quality of the questions asked, the more the brain is challenged to think and perform (p. 6).
Educators have also traditionally classified questions according to Bloom's Taxonomy, a hierarchy of increasingly complex intellectual skills. Bloom's Taxonomy includes six categories:
a) Knowledge (level 1): recall of data or information (remembering previously learned material; recalling facts, terms, and basic concepts from a stated text).
b) Comprehension (level 2): understand (demonstrating understanding of the stated meaning of facts and ideas); infer (demonstrating understanding of the unstated meaning of facts and ideas).
c) Application (level 3): use a concept in a new situation (solving problems by applying acquired knowledge, facts, and techniques in a different situation).
d) Analysis (level 4): separate concepts into parts; distinguish between facts and inferences; examine and break down information into parts.
e) Synthesis (level 5): combine parts to form new meaning; compile information in a different way by combining elements in a new pattern.
f) Evaluation (level 6): make judgments about the value of ideas or products; present and defend opinions by making judgments about information based on criteria.
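For planning purposes, this hierarchy can be summarized as a simple mapping from each Bloom level to representative question stems, as in the following minimal sketch. It is purely illustrative: the level names follow Bloom (1956), but the stems and the program itself are hypothetical examples and are not items from this study's instruments.

# Illustrative sketch only: Bloom's six levels mapped to hypothetical sample stems.
BLOOM_LEVELS = {
    1: ("Knowledge",     ["What is ...?", "Define ...", "List the ..."]),
    2: ("Comprehension", ["Explain ... in your own words.", "What is the main idea of ...?"]),
    3: ("Application",   ["How would you use ... to solve ...?", "What would happen if ...?"]),
    4: ("Analysis",      ["What are the parts of ...?", "How does ... relate to ...?"]),
    5: ("Synthesis",     ["How could ... and ... be combined to ...?", "Propose an alternative to ..."]),
    6: ("Evaluation",    ["How credible is ...?", "Which solution is better, and why?"]),
}

def is_higher_order(level: int) -> bool:
    # In the sense used in this study, application and above (levels 3-6) count as higher order.
    return level >= 3

for level, (name, stems) in BLOOM_LEVELS.items():
    tag = "higher-order" if is_higher_order(level) else "lower-order"
    print(f"{level}. {name} ({tag}): e.g., {stems[0]}")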
Again, some researchers, according to Marzano, Pickering, and Pollock (2001), have simplified
classification of questions into lower and higher cognitive questions. Lower cognitive questions
(facts, closed, direct, recall, and knowledge questions) involve the recall of information. Higher
cognitive questions (open-ended, interpretive, evaluative, inquiry, inferential, and synthesis questions) involve the mental manipulation of information to produce or support an answer. As
Walker (2003) explains it, higher level thinking questions: (1) are open-ended questions aimed at
provoking divergent thinking; (2) go beyond knowledge-level recall; (3) should promote
evaluation and synthesis of facts and concepts; and (4) should start or end with words and phrases
such as “explain”, “compare”, and “why”.
Moreover, Meyer and Smith (1987) argue that lower order questions are usually “What”
questions. They typically test the knowledge students have about definitions or meanings. Higher
order questions tend to be “Why” and “How” questions which encourage students to think more
deeply about a concept or the reasons for an answer. As for Meyer and Smith, teachers should
include both types of questions, with an emphasis on higher order questions which challenge
students and make them think. Further, regardless of the classification of questions, traditional
wisdom holds that the higher cognitive questions lead to higher-quality answers and increased
learning and achievement. However, the research has mixed conclusions in this area. Some
studies found that higher level questions did indeed produce deeper learning, while others found
that not to be the case.
Furthermore, according to some studies (Marzano, Pickering and Pollock, 2001; Cotton, 1989),
lower cognitive questions (i.e., knowledge and comprehension questions on Bloom‟s Taxonomy)
may be most beneficial for primary students. Lower cognitive questions are also more effective
when the goal is to impart factual knowledge and commit it to memory. Higher cognitive
questions (i.e., application, analysis, synthesis, and evaluation questions), however, should make
up a higher percentage of questions asked above the primary grades.
Studies such as Marzano et al. (2001) show that a combination of lower and higher questions is more effective than the exclusive use of one or the other. However, increasing the use of higher cognitive questions can produce superior learning gains for older students, particularly those in secondary and higher education, and does not reduce student performance on lower cognitive questions. According to these studies, the use of a high frequency (50 percent or more) of higher cognitive questions with older students is positively related to increases in on-task behavior, the length of student responses, the number of relevant contributions, the number of student-to-student interactions, student use of complete sentences, speculative thinking, and relevant questions posed by students.
Previous studies, mostly in first language (L1) classrooms, have examined the relationship between strategy instruction and student use of higher order questions. Davey and McBride (1986) and King (1990) in the US, for example, found that strategy instruction was associated with greater quantities of higher order questions. Alcon (1993) in Spain, working with L2 learners, also found that students who received such instruction asked more higher order questions and benefited more than students who asked lower order questions (cited in Ayaduray and Jacobs, 1997, p. 562).
Of course, questions usually lead to responses. These too, have been classified in many ways. Of
particular relevance to this study is the work of Webb (1989) who classified responses into two
types – elaborated and unelaborated – the difference being that an “elaborated” response provides
not just an answer to the question but also an explanation of some of the thinking behind the
answer. Webb (1989) reviewed studies in which elaborated responses were associated with
learning gains in content subjects for those L1 students who received the responses as well as for
those who provided them. In specific regard to L2 instruction, knowledge of the language needed
to provide elaborated responses, as well as to ask higher order questions, forms an important part
of learner knowledge of language functions (Coelho, 1992). King (1990) found that instruction
was associated with increases in both higher order questions, as mentioned above, and with
elaborated responses.
Studies such as Ayaduray and Jacobs (1997) have also shown that students' critical thinking questioning skills develop better in group learning environments than in whole-class discussion. Much of this research on questions and responses has taken place with students studying
together in groups. This fits with views of theorists such as Vygotsky (1978) and Bruner (1986)
who argued that knowledge is socially constructed. Student-student interaction is believed to
provide cognitive scaffolding (Palincsar and Brown, 1984) which enables students to support each
other‟s learning and to build on one another‟s knowledge.
Two studies of Hong Kong second language classrooms (Tsui, 1985; Wu, unpublished, cited in Tsui, 1996) found that in a teacher-fronted setting students asked no questions. Tsui (1996) attributed this partly to the anxiety students feel in the whole-class format and proposed group
activities as one means of lessening anxiety. Long and Porter (1985) suggested that groups
provide a less stressful environment for students to use their L2. This may encourage students to
ask more questions and to take more risks in providing elaborated responses. Further, Long and
Porter have argued that in groups students speak more and are able to use a greater range of
language functions, because they have more independence than in a teacher-fronted mode. This
greater range of functions would certainly include types of questions and responses. Indeed, being
able to pose appropriate questions and make appropriate responses to questions are collaborative
skills vital to successful group functioning (Jacobs and Kline-Liu, 1996).
However, putting students in groups and asking them to work together may be insufficient to
generate the kind of language and learning desired (Johnson et al., 1993). A great deal of research
has been conducted into groups in education. As a result, a wide range of techniques have been
developed to encourage students to learn together effectively. These procedures include providing
students with scripts which suggest appropriate language to use, giving students rotating roles to
play in the group, and careful teacher monitoring of group interaction (Kagan, 1994).
According to the Maryland State Department of Education publication Better Thinking and Learning (1991), teachers who ask higher order questions promote learning, because these types of questions require students to apply, analyze, synthesize, and evaluate information instead of simply recalling facts. Paul (1985, p. 37) has also indicated that critical thinking is "learning how to ask and answer the questions of analysis, synthesis, and evaluation."
Thus, specific and focused questions, with greater emphasis on higher order critical thinking questioning of application, analysis, synthesis, and evaluation, which require writing students to apply, analyze, synthesize, and evaluate information instead of simply recalling facts, will be used as one of the most important critical thinking strategies to help guide the thinking-writing processes (see Appendix B for the CT tasks designed accordingly). The critical thinking questioning strategy is, therefore, the most obvious and widely used instrument for the treatment group of this study. To make use of this strategy, the critical thinking terms of writing (e.g., purpose, ideas, support, assumptions and biases, conclusions, point of view, and analysis; see the course material prepared for the experimental group) have, therefore, been broken down and paired with generic question stems adapted from Alcon (1993), King (1990), and Meyer and Smith (1987). These were designed to provide students with language frameworks for asking higher order questions. Examples of such question stems include "Why…?", "What…?" and "How…?" These sets of generic question stems will only be used with the experimental groups to make sure that students are also addressing these elements of reasoning as they write and read. Therefore, students will be asked to extend course concepts and facts in new directions (i.e., application); to break ideas apart and relate them to other ideas (i.e., analysis); to create new organizations of ideas (i.e., synthesis); and to make well-reasoned judgments and decisions (i.e., evaluation).
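As a rough illustration of how such pairings might be organized for lesson planning, the sketch below crosses the critical thinking terms of writing with generic "Why/What/How" stems to generate prompts. The wordings and the program are hypothetical examples only; they are not the actual stems adapted from Alcon (1993), King (1990), or Meyer and Smith (1987), nor the study's course material.

# Illustrative sketch only: pairing elements of reasoning with generic stems
# to generate higher-order question prompts. All wordings are hypothetical.
ELEMENTS_OF_REASONING = [
    "purpose", "ideas", "support", "assumptions and biases",
    "conclusions", "point of view",
]

GENERIC_STEMS = [
    "What is the writer's {element}, and how do you know?",
    "Why does the writer's {element} matter for the overall argument?",
    "How would you evaluate the writer's {element}?",
]

def generate_prompts(elements, stems):
    # Cross every element of reasoning with every generic stem.
    return [stem.format(element=e) for e in elements for stem in stems]

for prompt in generate_prompts(ELEMENTS_OF_REASONING, GENERIC_STEMS)[:6]:
    print("-", prompt)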
Thus, giving students essays to write that ask them to interpret, analyze, synthesize, and evaluate
the material (Halstead and Tomson, 2006) is one of the most important teaching strategies used to
promote the critical thinking-writing process in this study. Questions that help students develop
their core cognitive skills are summarized in Table 2.3 below.
Table 2.3 Questions to Fire up Students' Critical Thinking Skills
(Core critical thinking skills and the critical thinking questions that exercise them)

Interpretation
- What does this mean?
- What's happening?
- How should we understand that (e.g., what he or she just said)?
- What is the best way to characterize/categorize/classify this?
- In this context, what was intended by saying/doing that?
- How can we make sense out of this (experience, feeling, or statement)?

Analysis
- Please tell us again your reasons for making this claim.
- What is your conclusion/what is it that you are claiming?
- Why do you think that?
- What are the arguments pro and con?
- What assumptions must we make to accept that conclusion?
- What is your basis for saying that?

Inference
- Given what we know so far, what conclusions can we draw?
- Given what we know so far, what can we rule out?
- What does this evidence imply?
- If we abandoned/accepted that assumption, how would things change?
- What additional information do we need to resolve this question?
- If we believed these things, what would they imply for us going forward?
- What are the consequences of doing things that way?
- What are some alternatives we haven't yet explored?
- Let's consider each option and see where it takes us.
- Are there any undesirable consequences that we can and should foresee?

Evaluation
- How credible is that claim?
- Why do we think we can trust what this person claims?
- How strong are those arguments?
- Do we have our facts right?
- How confident can we be in our conclusion, given what we now know?

Explanation
- What were the specific findings/results of the investigation?
- Please tell us how you conducted that analysis.
- How did you come to that interpretation?
- Please take us through your reasoning one more time.
- Why do you think that (was the right answer/was the solution)?
- How would you explain why this particular decision was made?

Self-Regulation
- Our position on this issue is still too vague; can we be more precise?
- How good was our methodology, and how well did we follow it?
- Is there a way we can reconcile these two apparently conflicting conclusions?
- How good is our evidence?
- OK, before we commit, what are we missing?
- I'm finding some of our definitions a little confusing; can we revisit what we mean by certain things before making any final decisions?

Source: 2014 User Manual for the California Critical Thinking Skills Test, published by Insight Assessment.
2.13.3.2 Techniques in Using Critical Reading
Many of the assigned readings students will have in college require them to think critically and
analyze the ideas and arguments, the techniques, and the reasoning of the author. Critical reading
involves asking questions and trying to find answers to them. If students get into the habit of
asking critical thinking questions as they read a selection, they will be able to understand it better
and reach deeper analytical conclusions. Not only will they be able to assess the arguments an
author is making in a story or essay, they will also be able to recognize the writing techniques
used and the ideas the author is building arguments and conclusions upon. Students can also
check to see if the author makes any errors in the introduction or conclusion to the piece (Sims,
2009; Chaffee, McMahon and Stout, 2002; Buchanan, 2001).
Student and professional readings, with writing prompts, for analysis and interpretation, synthesis,
and evaluation by student writers are also techniques to be considered. Sample readings and
analysis of these readings can be helpful for the students to write effective paragraphs and essays.
The reading selections in the paragraph and essay modes chapters (see the course material for EG students), according to Sims (2009) and Buchanan (2003), reflect the thinking of student and professional writers about current social issues and conditions, help students to practice their critical thinking and reading skills (i.e., analyzing and interpreting, evaluating, and synthesizing the text), and provide the basis for students' writing. Therefore, students have to be
able to understand these readings, generate ideas in response to the readings, use those ideas to
formulate a thesis and supporting points, and use appropriate details from the readings as
evidence in their paragraphs and essays. All of the reading selections have exercises to help
student writers employ the skills mentioned and writing prompts that emphasize critical thinking
and encourage deeper understanding and analysis by students.
Thus, asking critical reading questions of analysis and interpretation, synthesis, and evaluation (see the activities in the Appendix), and trying to find answers to them, helps students do better in their critical thinking essay writing classes.
2.13.3.3 Techniques in Using Critical Thinking Skills Tasks
Scriven and Paul (2009) defined critical thinking as "the intellectually disciplined process of
actively and skillfully conceptualizing, applying, analyzing, synthesizing, and/or evaluating
information gathered from, or generated by, observation, experience, reflection, reasoning, or
communication, as a guide to belief and action”(quoted in Foundation for Critical Thinking, 2009,
para. 2). Scriven and Paul put into one carefully crafted sentence these various processes that
constitute the core skills most educators would agree are involved in critical thinking. In any
academic situations and in their daily lives, students think - that is obvious. They are constantly asked to use some or all of these critical thinking skills. However, the question of how English language teachers and students in EFL contexts include, approach, and practice these important skills in their lessons still remains unanswered. What may be lacking in the ongoing discussion of critical thinking is a way of translating the skills associated with it into lessons and classroom activities in a manageable, flexible, and usable framework or sequence - one that helps teachers introduce and reinforce critical thinking skills in their lessons while also helping students develop their English language proficiency (Beaumont, 2010). A still larger challenge lies with EFL course materials and materials designers in using these skills simultaneously and explicitly (rather than as the traditional implicit goal of course content) - and not in any prescribed order. Thus, the critical thinking tasks and their sequence outlined in this study, named for their originator, Columbia University's Carol Numrich (in Beaumont, 2010), provide such a framework to be adapted and used for the purpose of this study.
According to Beaumont (2010), students need critical thinking in their academic life. They need
to solve word problems in math class, to conduct scientific inquiry in chemistry, to understand the
workings of the human mind in psychology, and to write paragraphs, essays, and term papers
across the curriculum. All of these assignments require them to think critically. Ultimately,
however, teachers want students not only to practice these skills in the classroom but to take them
out into the world and use them. As Beyer (1995) notes, what critical thinking truly entails is
developing the capacity to make reasoned judgments in one‟s academic work and in one‟s life
beyond the classroom.
The article by Beaumont (2010) presented a sequence of seven critical thinking tasks, a flexible
framework that acts as a practical tool for planning and developing level-appropriate classroom
materials that encourage and advance critical thinking. According to Beaumont, this sequence
of critical thinking tasks, originated by Carol Numrich at Columbia University, guides teachers
in scaffolding critical thinking and English language skills so that critical thinking may be
practiced at any language proficiency level.
In addition to serving as a practical planning tool for teachers, Beaumont (2010) further argued,
this framework also helps us teachers become keenly aware of what we ask our students to do.
For example,
- Are we asking students the same questions or question types over and over again and, therefore, practicing the same skills?
- Are we challenging them to think beyond the level of comprehension or personal experience?
- Are we asking them to base their decisions on emotion or gut instinct, on personal experience, on identifiable facts and reliable evidence, or on some or all of these?
The purpose of having such a sequence is not to prescribe what teachers and students do, but to
help us recognize teaching and learning options and to seize opportunities to point students in
directions they might not immediately see on their own. This arrangement will also help teachers
and students more clearly understand and identify the specific critical thinking skills they are
using. The task types in the Numrich sequence also present students with opportunities for
communication about real issues that are important to them. This, in turn, may be enjoyable,
engaging, and productive, which will help them work toward the ultimate academic goal of being
active thinkers as well as active users of English (Beaumont, 2010, p.2).
Thus, one may argue that critical thinking skills have a place in the classrooms of all kinds
because they afford practice in essential life skills. The critical thinking tasks and their sequence,
originated by Carol Numrich at Columbia University, and defined by Beaumont (2010) as the
practice and development of an active, conscious, purposeful awareness of what one encounters
both in the classroom and in the outside world can be adopted for use in this study based on
Numrich‟s Criteria of Critical Thinking Tasks.
A Sequence of Critical Thinking Tasks
Numrich‟s sequence of critical thinking tasks (Table 2.4) below, according to Beaumont (2010),
provides a framework for teachers who want to integrate critical thinking into their lessons. It is
meant to provide useful options for teachers who want to write or adapt classroom materials for
English language learners of all ages and levels of English proficiency. It is not meant to be a
formula for teaching language or critical thinking. It is a starting point from which to consider how
best to meet students‟ needs. This sequence supports student learning and skills development by
gradually increasing the challenge of what language and critical thinking skills they employ. The
task types are flexible and at times overlapping. They can be re-arranged or eliminated altogether
(p.4-5).
Table 2.4. Numrich's Sequence of Critical Thinking Tasks (Beaumont, 2010)

Focus on the student's world
1. Observing: Looking, listening, noticing, naming
2. Identifying assumptions: Sharing background; expressing opinions; clarifying values

Focus on the text
3. Understanding and organizing: Summarizing; distinguishing relevant details; ordering; classifying; comparing and contrasting; explaining cause and effect
4. Interpreting: Making inferences; interpreting meaning; hypothesizing; theorizing

Focus beyond the text
5. Inquiring further: Surveying the public; interviewing a specialist; researching
6. Analyzing and evaluating: Synthesizing information; critiquing; reflecting on related ideas; making logical conclusions; re-evaluating assumptions
7. Making decisions: Proposing solutions; problem solving; taking action; participating

(In Beaumont, 2010: 5)
As can be seen above (Table 2.4), Numrich‟s sequence of critical thinking tasks contains seven
task types, which can be grouped into three categories: those typically done before presenting the
main text, those done while focusing on the text, and those done after focusing on the main text.
Here, the term text according to Beaumont (2010) refers to the main resource in a lesson or unit.
A text may be a reading (a novel, a page from a history book, a poem), a listening (a radio
interview, a song), an image (a scenic painting, a photo), or multimedia (a film clip, a TV news
report). A text can be any piece of interest to students and teacher alike that is suitable as a stimulus for both language learning and critical thinking.
In order to explore and apply Numrich‟s sequence to the Writing classroom, the course material
for the experimental group also uses different visuals and readings as primary texts, along with language foci that might be treated with the tasks discussed in Table 2.4 above. For example, the tasks (see Appendix B) were designed based on Numrich's sequence of critical thinking tasks in Beaumont (2010) and were carefully scaffolded to be used before, while, and after the main text of the teaching material for the treatment.
Summary of Literature Review
This chapter has made an attempt to review the literature available on the topic of the study.
Thus, the review of literature in this study has identified a number of issues based on a careful
consideration of the quality and applicability of previous research studies related to critical
thinking, critical thinking assessments, academic writing, and the relationship of critical thinking
to academic writing skills, and instructional methods and techniques. These studies illustrate that critical thinking is an important concept, particularly appropriate for undergraduate education, and that it should be taught explicitly within the subject matter in order to develop in learners the ability to think critically, to perform better academically, and to strengthen their dispositional and cognitive capacity toward using critical thinking. This is significant for the premise of this research, which
proposes to enhance the critical thinking ability, academic writing skills, and the dispositions of
undergraduate students. In addition, the literature review supports the purpose of this research,
which is to examine the effectiveness of explicit instruction in critical thinking on student
achievement in writing academic papers, ability to think critically, and critical thinking
dispositions. The literature review also indicates the relationship of critical thinking to academic
writing, and dispositions. As the findings of the research studies in all of these above areas are
often conflicting, and not always specific enough for the students in the context of EFL, this study
seeks to add to the body of knowledge that explores the relationship of these concepts.
CHAPTER III
The Research Design and Methodology
3.0. Introduction
The purpose of this study, as introduced in chapter one, was to investigate empirically the
effectiveness of explicit instruction in critical thinking strategies on university undergraduate
students' abilities to think critically and their EFL academic writing skills. In what follows, the research design, the research paradigm, the institutional setting, the subjects of the study, the research instruments, the experimental procedure, and the data analysis methods are discussed in detail.
3.1. The Research Design
3.1.1. Quasi-Experimentation
This research used a two-group quasi-experimental pretest/posttest control group design. Sections, not individual students (participants), were randomly assigned to experimental and control
conditions. The essence of the experimental design is the notion that two or more groups are equal
on relevant characteristics before the treatment is applied to one of the groups. Also, in order to
judge whether the treatment has had an effect, the groups are usually compared before and after
the treatment. In the ideal situation, outcomes attributed to the treatment occur if there is an effect,
but not otherwise.
Roberts and Taylor (1998) suggest that as the most advanced type of quantitative research,
experimental designs are most likely to show the strength of an association between variables, and
to demonstrate whether changes in one variable cause effects in the other.
As described in the introduction, the research design chosen for this study is quasi-experiment.
This researcher believes that the natural educational setting in which this research is conducted
would warrant the use of a quasi-experimental design. Similar to experiments, quasi-experiments
involve treatments, outcome measures, and experimental units. However, quasi-experiments do
not use random assignments to create the comparisons, which allow for treatment-induced change
to be inferred. In this design, a popular approach to quasi-experiments, the experimental and the
control groups are selected without random assignment. Both groups take a pretest and posttest.
Only the experimental group receives the treatment. In the absence of random assignment, the
groups are considered as non-equivalent, that is, they differ from each other in ways other than
the presence of a treatment. This researcher has, therefore, chosen the quasi-experimental two-group pretest/posttest design from among other types of experimental research, for it assigns intact groups to different treatment conditions and uses pre- and posttests to compare the groups (Cook and Campbell, 1979). The quasi-experiment, as an experimental approach to causal research in field settings, conveys more conviction in inferring causation (Cook and Campbell, 1979).
An important consideration when conducting quasi-experiments is to minimize the loss in the
quality of causal inference in field research. Among other things, according to Cook and Campbell (1979), this requires the researcher to collect data from noncomparable groups using certain design frameworks. A number of nonequivalent control group designs for the purpose of quasi-experimentation have been offered by Cook and Campbell (1979). The design used in this study is adapted from their 'Untreated Control Group Design with Pretest and Posttest'.
The original „Untreated Control Group Design with Pretest and Posttest‟ as described by Cook
and Campbell is shown in Figure 3.1 below.
____________________________________________________________________________
E          O1          X          O2
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
C          O1                     O2
____________________________________________________________________________
Figure 3.1: Untreated Control Group Design with Pretest and Posttest (Cook and Campbell, 1979)
In this design, as depicted in Cook and Campbell‟s notational system, „O‟ stands for an
observation, the subscripts 1 and 2 denote the sequential order of recording observations. „X‟
stands for a treatment. The dashed line between the groups indicates that they were not randomly
formed. Such a design involves the inclusion of a control group (C). As suggested by Neale and Liebert (1986), the use of a control group provides a baseline against which the effects of the
experimental treatment may be evaluated. Thus, with a control group, the researcher would have
reasonable confidence that differences between the groups after treatment are, in fact, due to the
treatment. Such optimism, however, should be guarded. The researcher should be aware that in
implementing a nonequivalent control group design, there is no certainty that the groups were
equal before the treatment. To overcome such design deficit, Roberts and Taylor (1998) suggest
the use of a pretest to establish if the groups were initially equivalent on the dependent variable.
Hence, a pretest has been built into the design of this study, which is depicted as „O1‟ in Figure
3.1.
A quasi-experimental pretest/posttest control group design was, therefore, used for this study to
determine whether the critical thinking group of students differed significantly from the
noncritical thinking group. This design was chosen in order to compare the critical thinking
performance between the two intact sections of students, and because it was not feasible to
randomly assign students from one course section to another within the sample.
Though quasi-experiments are designed to render causal inferences as sound as possible,
researchers cannot always control for all the relevant variables in the complex settings of field
research. Often, the validity of the experiment can still be threatened by various extraneous
variables (Tiwari, 1998:128). Cook and Campbell (1979) define validity as “the best available
approximation to the truth or falsity of propositions” (p.37). They identify two types of validity:
internal and external. Internal validity refers to the approximation with which the researcher may
infer that a relationship between two variables is causal. External validity refers to the extent to which the presumed causal relationship can be generalized or applied to the population (Roberts and
Taylor, 1998). The internal and external validity of the findings of this study may be threatened
by a number of factors.
Among the threats to internal validity, several are particularly worthy of note in the context of this
study. One of the threats is that of selection. This occurs when nonequivalent groups are growing
at different rates in a common direction, irrespective of the effects of a treatment. This is
particularly a problem when respondents are not selected randomly, as was the case in this study. Under such circumstances, treatment is more likely to be given to the brighter participants. Since such people are usually intrinsically more able or more exposed to opportunities for change, they are likely to change faster over time than others, not because of the treatment but because of selection. One way of detecting selection is to compare the groups in terms of their pretest differences. Selection threats were also reduced by using pretest scores in the statistical analyses, thereby controlling for initial group differences so that posttest differences in critical thinking and writing performance between the critical thinking and noncritical thinking groups were less likely to reflect pre-existing ability rather than the treatment.
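To make the role of the pretest concrete, the following minimal sketch shows one common way such data might be analyzed: an ANCOVA-style regression in which posttest scores are predicted from group membership while adjusting for pretest scores. The data, variable names, and model are invented for illustration and are not this study's actual analysis; the sketch simply assumes the Python libraries pandas and statsmodels are available.

# Illustrative ANCOVA-style sketch for a nonequivalent control group design:
# posttest ~ group + pretest. Scores below are hypothetical, not study data.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.DataFrame({
    "group":    ["E"] * 5 + ["C"] * 5,              # E = experimental, C = control section
    "pretest":  [52, 48, 55, 60, 45, 50, 47, 58, 62, 44],
    "posttest": [68, 63, 70, 74, 60, 55, 52, 61, 66, 49],
})

# Including the pretest as a covariate adjusts the group comparison for
# initial (selection-related) differences between the intact sections.
model = smf.ols("posttest ~ C(group) + pretest", data=df).fit()
print(model.summary())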
Another threat to internal validity is that of mortality. This happens when participants drop out during an experiment for any of many possible reasons, and the outcomes are thus unknown for these individuals, affecting the study results. Problems such as this can be overcome by including only the results of those participants who completed the tests and attended more than 80% of the semester's classes.
Diffusion of treatment was also a potential threat to internal validity to this study. This happens
when participants in the control and experimental groups communicate with each other. This
communication can influence how both groups score on the outcome. In view of this, the
researcher kept the two groups as separate as possible during the experiment by time and date.
This threat to validity could also be minimized because 'academic writing' was mandatory for all control and treatment group participants and because the course demanded considerable student effort; treatment diffusion was therefore less likely to have occurred.
Internal validity can also be threatened by testing. This refers to the situation when the
participants become familiar with the outcome measures and remember responses for a later
testing. Pretest sensitization was minimized in several ways. The researcher allowed a long time interval (12 weeks) between the pretest and posttest administrations. In addition, neither the students, the instructors, nor the test administrators had access to the correct answers on the tests, so improved posttest performance due simply to familiarity with the test was less likely to occur.
The compensatory (or resentful demoralization) threat, as described by Creswell (2008), in which the benefits of an experiment may be unequal or resented when only the experimental group receives the treatment (i.e., the EG receives the intervention while the CG receives nothing), was another concern for this study, since it could strongly influence students' results in the end-of-semester exams, especially for those who did not receive the training. In response, the researcher provided benefits to both groups, such as giving the control group the treatment after the experiment ended.
In addition to the threats to internal validity, the external validity of the results of this study may
also be threatened. A threat to external validity is very plausible if the participants are not
randomly selected from the population. Since the participants are not randomly selected, they are
not representative of the population, and the results can only be applied to the portion of the
population from which the sample is drawn and not to the whole population (Roberts and Taylor,
1998; Babbie, 1992). However, since the treatment was assigned to the groups randomly, this problem was mitigated to some extent.
Having identified the threats to both internal and external validity, the use of a quasi-experimental
design was considered appropriate for this study, as it allowed this researcher to investigate the
causal relationships as specified. However, careful consideration was given to the various threats to
validity when the findings were analyzed, and plausible alternative explanations were also addressed,
as suggested by LoBiondo-Wood and Haber (1998).
In summary, the quasi-experimental pretest/posttest control group design was used to minimize
internal and external validity threats and to maximize the ability to determine the effects of teaching
critical thinking strategies on students' critical thinking, writing abilities, and dispositions toward
thinking critically.
3.1.2 The Study Variables
In this study, the independent variable was an experimental learning strategy in the form of
critical thinking strategy training. This experimental teaching/learning strategy formed the
basis of a 48-hour module, "A Critical Thinking Approach to Learning EFL Academic Writing
Skills", which was specifically designed for the undergraduate students taking the EFL writing
course for academic purposes. There were three dependent variables (to be caused by or
influenced by the independent treatment), namely, (1) the students' EFL academic writing
performance, (2) the students' abilities to think critically, and (3) their dispositions toward critical
thinking as a result of their experience of the experiment.
3.2. The Research Paradigm
Although philosophical ideas remain largely hidden in research (Slife and Williams, 1995),
they still influence the practice of research and need to be identified (Creswell, 2009). It is
suggested that individuals preparing a research proposal or plan make explicit the larger
philosophical ideas they espouse (Creswell, 2009). The selection of the experimental
pretest/posttest control group design for this study was therefore based on the considerations
outlined below, which also explain why this researcher chose the research design.
According to Creswell (2009), there are four schools of thought about knowledge:
positivism/postpositivism, constructivism, pragmatism, and advocacy/participatory. These four
perspectives according to Creswell differ in the emphasis they place on the issues that researchers
attempt to deal with. The postpositivist assumptions, for instance, have represented the traditional
form of research, and these assumptions hold true more for quantitative research than qualitative
research. This worldview is sometimes called the scientific method, or doing science research; it
is also called positivist/postpositivist research, empirical science, and postpositivism. The last
term, postpositivism, is used because it represents the thinking after positivism, challenging the
traditional notion of the absolute truth of knowledge. The postpositivist tradition comes from 19th
century writers and holds a deterministic philosophy in which causes determine effects or
outcomes. Thus, the problem studied by postpositivists reflects the need to identify and assess the
causes that influence outcomes, such as found in experiments. It is also reductionistic in that the
intent is to reduce the ideas into a small, discrete set of ideas to test, such as the variables that
comprise hypotheses and research questions. The knowledge that develops through a
postpositivist lens is based on careful observation and measurement of the objective reality that
exists “out there” in the world. Thus, developing numeric measures of observations and studying
the behavior of individuals becomes paramount for a positivist. Finally, there are laws or theories
that govern the world, and these need to be tested or verified and refined so that we can understand
the world. Thus, in the scientific method, the approach to research accepted by positivists, an
individual begins with a theory, collects data that either support or refute the theory, and then
makes the necessary revisions before additional tests are made (p.7).
The social constructivists, on the other hand, hold assumptions that individuals seek to understand
the world in which they live and work. Individuals develop subjective meanings of their
experiences – meanings directed toward certain objects or things. These meanings are varied and
multiple, leading the researcher to look for the complexity of views rather than narrowing
meanings into a few categories or ideas. The goal of the research is to rely as much as possible on
the participants' views of the situation being studied. The questions become broad and general so
that the participants can construct the meaning of a situation, typically forged in discussions or
interactions with other persons. The more open-ended the questioning, the better, as the
researcher listens carefully to what people say or do in their life settings. Often these subjective
meanings are negotiated socially and historically. They are not simply imprinted on individuals
but are formed through interaction with others (hence social constructivism) and through
historical and cultural norms that operate in individuals' lives. Thus, constructivist researchers
often address the processes of interaction among individuals. They also focus on the specific
context in which people live and work, in order to understand the historical and cultural settings
of the participants. Researchers recognize that their own backgrounds shape their interpretation,
and they position themselves in the research to acknowledge how their interpretation flows from
their personal, cultural, and historical experiences. The researcher's intent is to make sense of (or
interpret) the meanings others have about the world. Rather than starting with a theory (as in
postpositivism), inquirers generate or inductively develop a theory or pattern of meaning
(Creswell, 2009:8-9).
In discussing constructivism, Crotty (1998, as cited in Creswell, 2009), for example, identified
several assumptions:
1. Meanings are constructed by human beings as they engage with the world they are
interpreting. Qualitative researchers tend to use open-ended questions so that the
participants can share their views.
2. Humans engage with their world and make sense of it based on their historical and
social perspectives; we are all born into a world of meaning bestowed upon us by our
culture. Thus, qualitative researchers seek to understand the context or setting of the
participants through visiting this context and gathering information personally. They
also interpret what they find, an interpretation shaped by the researcher's own
experiences and background.
3. The basic generation of meaning is always social, arising in and out of interaction with
a human community. The process of qualitative research is largely inductive, with the
inquirer generating meaning from the data collected in the field.
Another group of researchers holds the philosophical assumptions of the advocacy/participatory
approach. This position, according to Creswell, arose during the 1980s and 1990s from individuals
who felt that the postpositivist assumptions imposed structural laws and theories that did not fit
marginalized individuals in our society or issues of social justice that needed to be addressed.
This worldview is typically seen with qualitative research, but it can be a foundation for
quantitative research as well. An advocacy/participatory worldview holds that research inquiry
needs to be intertwined with politics and a political agenda. Thus, the research contains an action
agenda for reform that may change the lives of the participants, the institutions in which
individuals work or live, and the researcher‟s life. Moreover, specific issues need to be addressed
that speak to important social issues of the day, issues such as empowerment, inequality,
oppression, domination, suppression, and alienation. The researcher often begins with one of
these issues as the focal point of the study. This research also assumes that the inquirer will
proceed collaboratively so as to not further marginalize the participants as a result of the inquiry.
In this sense, the participants may help design questions, collect data, analyze information, or reap
the rewards of the research. Scholars who support the advocacy approach believe that social
constructivists did not go far enough in advocating an action agenda to help marginalized people
(Laws, 2003; Creswell, 2003).
The other position about worldviews comes from the pragmatists. Pragmatism as a worldview
arises out of actions, situations, and consequences rather than out of antecedent theory, in contrast
to postpositivism. Pragmatic knowledge claims hold that the problem is more important than the
method of seeking a solution. Instead of focusing on methods, researchers emphasize the research
problem and use all approaches available to understand the problem. Pragmatism is not
committed to any one system of philosophy and reality. This applies to mixed methods research
in that inquirers draw liberally from both quantitative and qualitative assumptions when they
engage in their research. Pragmatism allows researchers to have a freedom of choice of methods,
techniques and procedures of research that best meet their needs and purposes.
These worldviews, according to Creswell (2003; 2007; 2009; 2012), are shaped by the discipline
area of the student, the beliefs of advisors and faculty in a student's area, and past research
experiences. Choosing a paradigm as a basis for methods and research design in light of the
issues mentioned above is important. In this study, therefore, the researcher followed the
postpositivists' knowledge claim, as it allowed him to adopt a quantitative methods approach that
leads to the testing and verification of theory.
3.3 Institutional Setting
This study was conducted at Addis Ababa University (AAU). The AAU was selected for this study
because the researcher had been an adjunct faculty member in the Department of Foreign Languages
and Literature, teaching EFL courses at the university, for four years.
3.4 The Research Participants
The participants of the present study were non-major undergraduates (sophomores) who were
taking the EFL Academic Writing Skills course to satisfy their general education requirement at
Addis Ababa University. These students were purposefully selected for this study because
of the EFL academic writing course they were taking. Two natural (intact) classes of EFL
academic writing students (N = 84) were conveniently selected for the present study. One section
(N = 43) was randomly assigned to the treatment condition as the critical thinking (experimental)
group, while the other section (N = 41) was assigned to serve as the noncritical thinking (control)
group. Only scores from students who had completed both the initial (pretest) and end-of-course
(posttest) measures were included in the data analysis.
3.5 The Outcome Measures
Given the complexity of critical thinking (Facione & Facione, 1997; Spicer & Hanks, 1995;
Hickman, 1993), it is unlikely that a single tool can cover all the dimensions of critical thinking.
Therefore, a combination of measurements should be used (Spicer & Hanks, 1995; Ennis &
Norris, 1990). This has the advantage that the strength of each measuring method is reflected in
the overall assessment while the deficiency of one method is compensated for by the others (Tiwari,
1998). Assessing students' skills and abilities in critical thinking can also be a difficult task for
researchers in the area. According to Wal (1999), however, two main approaches can be taken in
the assessment of critical thinking: (1) assessing critical thinking in relation to other relevant
academic skills, such as writing, oral presentation, or practical problem solving; or (2) assessing
critical thinking skills as a trait or individual feature of the learner by inviting the learner to
complete an assessment scale. Based on these two approaches, three outcome measures were used
to evaluate the effects of explicit instruction in critical thinking: (1) the researcher-developed
Academic Writing Test, (2) the Ennis-Weir Critical Thinking Essay Test (Norris & Ennis, 1989;
Ennis & Weir, 1985; Ennis, 2005), and (3) the California Critical Thinking Dispositions
Inventory (Facione and Facione, 1992).
3.5.1 The Instructor/Researcher-developed Essay Test
The purpose of the instructor-developed essay test was to examine whether students who were
exposed to explicit instruction in critical thinking would perform better on a test that requires them to
analyze, interpret, and write academic essays (that is, whether they would perform better academically
after having received specific critical thinking strategy training) than a group of students who did not
receive explicit instruction in critical thinking.
As was documented in the literature review of this study, several general knowledge standardized
essay tests for critical thinking have been developed as alternatives to multiple-choice formats in
attempts to assess students‟ abilities to generate arguments and to capture the open-ended problem
solving nature of critical thinking. The researcher/instructor-developed essay tests (Norris and
Ennis, 1989), and The Ennis-Weir Critical Thinking Essay Test (Ennis and Weir, 1985) are some
examples. Compared with multiple-choice tests, essay tests are more comprehensive and assess
more aspects of critical thinking. They also allow test takers to justify why they make certain
judgments. Hence, those who can think critically but who may have assumptions different from
those of the test constructors are less likely to be penalized than in multiple-choice tests. While
essay tests may be useful, they do have limitations. They are more expensive and time-consuming
to administer and score, and special training is required for those marking the essays. As
subjectivity is inevitable in marking essays, inter-rater reliability can be a problem if several
markers are involved, unless special steps are taken to guard against this (Ennis, 1993; Kennedy,
Fisher and Ennis, 1991).
An argumentative essay writing test, therefore, was developed and used by the
researcher/instructor to investigate whether integrating explicit instruction in critical thinking into an
academic writing course results in improved performance in academic writing skills.
According to Hatcher (1995), writing of this sort, such as persuasive or argumentative writing,
which has been difficult for students, can be used as an appropriate means of assessing and
comparing general critical thinking and writing skills. Moreover, critical thinking can always be
assessed through its component constructs, such as argument analysis and generation, and reflective
judgment (Ennis and Weir, 1985).
An argument is a reason or reasons offered for or against a proposal or proposition. This term
refers to a discussion in which there is disagreement and suggests the use of reasoning and
evidence to support or refute a point. In the critical thinking sense, argument is conducted in a
spirit of good-will, openness to alternative perspectives, and truth-seeking (Paul, 1993). Students
were asked at varying times to generate, support, and evaluate arguments. Arguments in the
real-world require considerable interpretation (in context), require evaluation of content as well as
form, often have value dimensions, and do not have mechanical decision procedures. This type of
context is one in which someone is trying to defend a point, and in which the defense is usually
preceded and followed by other argumentation on the point or on aspects of it. In this test, a
complex argument is presented to the test taker, who is asked to formulate another complex
argument in response. The test was intended to help evaluate a person's
ability to analyze and interpret academic documents and to formulate in writing an argument in
response (i.e., counterarguments, which require careful critical thinking (Sims, 2009:300)), thus
recognizing a creative dimension in critical thinking ability.
Academic argumentation writing largely represents a form of exposition that includes an element of
persuasion. Therefore, the rhetorical purpose of this writing extends beyond presentation,
explanation, or discussion to convincing the reader of a particular point of view. In argumentation
writing, writers are required to recognize that issues have at least two sides and to present the
facts or information needed to develop a reasoned and logical conclusion based on the presented
evidence (Hale et al., 1996).
An argumentative essay test was thus administered both preceding and following the instruction.
In order to accommodate the importance of context, a context that is familiar to many students was
provided (see Appendix F). The test consisted of a one- to one-and-a-half-page written argument
in response to a given thesis statement. Test-takers needed to develop a paragraph-by-paragraph
academic argumentation based on the position the thesis reflects, analyzing and interpreting their
own ideas with the purpose of writing a short, five-paragraph argumentative essay supporting or
refuting the given thesis. The test's content and construct validity were evaluated by
the research supervisor and two other university professors.
The instrument was pilot-tested by this researcher in the semester preceding the
main research project. Two sections of AAU technology students (N = 90) took the instrument as a
pretest and posttest (at the start and end of the course). Both sections were taught by the
researcher and only the experimental section received a semester-long explicit and intensive
training in Critical Thinking Techniques. Results from the pilot study were used to estimate what
changes might be expected after a semester-long course and to determine if any revisions were
needed in the instrument or the testing procedures. Two raters were used to score the essay. An
Inter-Rater Reliability (IRR) of the instrument was supported by Intra-Class Correlations (ICC) as
detailed by Hallgren (2012) (see details about reliability below).
The Validity and Reliability of the Instructor-Developed Academic Essay Writing
Skills Test
An inter-rater reliability (IRR) analysis was conducted to assess the degree to which coders
consistently assigned the academic essay test ratings to subjects in the study. The distributions of
the essay ratings did not indicate any problems of bias, suggesting that the Rubric for Evaluating
Written Argumentation (REWA; Facione, 2012 update), which was provided to the scorers, allowed
them some latitude in how they evaluated the sufficiency of the given academic arguments. In this
study, two raters scored each essay. Inter-rater reliability was calculated in R using intra-class
correlations (irr package), as detailed by Hallgren (2012). The intra-class correlation (ICC) was
calculated for absolute agreement between the two raters. The resulting ICC was in the excellent
range, ICC = 0.837 (Cicchetti, 1994, cited in Hallgren, 2012), indicating that the coders had a high
degree of agreement and suggesting that the essays were rated similarly across coders. The high ICC
suggests that a minimal amount of measurement error was introduced by the independent coders
and, therefore, that statistical power for subsequent analyses was not substantially reduced. The
argumentation essay instrument was therefore deemed suitable for use in the hypothesis tests of the
present study.
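As a concrete illustration of this procedure, the following minimal sketch (written in Python for illustration, rather than with the R irr package named above) computes a two-way, absolute-agreement, single-rater intra-class correlation from the ANOVA mean squares; the two raters' scores shown are invented placeholders, not the study data.

    import numpy as np

    def icc_absolute_agreement(scores):
        # scores: n_subjects x n_raters matrix of essay ratings
        x = np.asarray(scores, dtype=float)
        n, k = x.shape
        grand = x.mean()
        ss_total = ((x - grand) ** 2).sum()
        ss_subjects = k * ((x.mean(axis=1) - grand) ** 2).sum()
        ss_raters = n * ((x.mean(axis=0) - grand) ** 2).sum()
        ss_error = ss_total - ss_subjects - ss_raters
        msr = ss_subjects / (n - 1)               # subject mean square
        msc = ss_raters / (k - 1)                 # rater mean square
        mse = ss_error / ((n - 1) * (k - 1))      # residual mean square
        # ICC(A,1): two-way random effects, absolute agreement, single rater
        return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

    rater_a = [14, 18, 20, 11, 16, 19, 13, 17]    # hypothetical essay scores
    rater_b = [15, 17, 21, 12, 15, 19, 12, 18]
    print(round(icc_absolute_agreement(np.column_stack([rater_a, rater_b])), 3))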
3.5.2 The Ennis-Weir Critical Thinking Essay Test
3.5.2.1 General Critical Thinking Performance Assessment
The Ennis-Weir critical thinking essay test, from the test manual 'The Ennis-Weir
Critical Thinking Essay Test' by Ennis and Weir (1985), was selected as the second instrument for
this study. This choice was not made without reason. While there are a number of standardized
essay tests of critical thinking abilities, the Ennis-Weir Critical Thinking Essay Test is the most
significant instrument for both teaching and testing (Ennis & Weir, 1985). That is, although
originally conceived as a general critical thinking test, The Ennis-Weir Critical Thinking Essay
Test - An Instrument for Testing and Teaching (Ennis and Weir, 1985) can also be used as the
primary teaching material in a very short course in critical thinking or as an integral part of a
longer course intended to teach critical thinking (as in the case of this study). Further, the Ennis-Weir
Critical Thinking test is set in the context of argumentation. It is an open-ended test
(because critical thinking is an open-ended activity; Ennis & Weir, 1985) that is intended to help
evaluate a person's general ability to appraise an argument and to formulate in writing an
argument in response.
As a test, the Ennis-Weir has both instructional and research uses. It can be used as a diagnostic
device to identify specific areas of reasoning or argumentation with which groups of students may
need help. Furthermore, the test can be used as a device for evaluating effectiveness of instruction
in informal logic, critical thinking, or reasoning (Ennis & Weir, 1985). For research purposes, the
test could be used as a basis for comparing control groups and experimental groups in an
experimental study. One might want to investigate, for example, the effects of instruction in
informal logic, science, social studies, or literature on critical thinking ability. Finally, the test
could be used in an exploratory pretest-posttest design, providing educated guesses about the
effects of a “specific” curriculum. The Ennis-Weir Critical Thinking Essay Test (Ennis and Weir,
1985) was used to test the students' ability to evaluate an argument and to generate a written
argument in response. This instrument assesses students' abilities to respond to arguments as they
occur naturally in discussion, disputation, and debate in the real world.
Moreover, the Ennis-Weir critical thinking essay test (Ennis & Weir, 1985) was chosen for
this study because it uses general content (that is, a general-content critical thinking test uses content
from a number of subject matter areas and/or everyday life experiences, content with which most
people at the target level of sophistication can be expected to be familiar). As a "multi-aspect"
critical thinking test, the Ennis-Weir critical thinking essay test assesses more than one aspect of
critical thinking, usually the ones that the test maker feels are the most basic and important for the
level of sophistication. It incorporates getting the point, seeing the reasons and assumptions,
stating one's point, offering good reasons, seeing other possibilities (including other possible
explanations), and responding to and avoiding equivocation, irrelevance, circularity, reversal of
an if-then (or other conditional) relationship, overgeneralization, credibility problems, and the use
of emotive language to persuade. It is also intended to be used for both formative and summative
evaluation (Ennis & Weir, 1985; Ennis, 2005).
The purpose of using the Ennis-Weir critical thinking essay test was to examine whether students would
be able to demonstrate general reasoning abilities on an everyday issue after having received
specific instruction in critical thinking techniques. The Ennis-Weir, according to the
developers of the test (Ennis and Weir, 1985), is a general test of critical thinking ability in the
context of argumentation; it is primarily a test of critical thinking ability, not writing ability. One
should, therefore, focus on the quality of thinking in the written responses, rather than on the mode of
expression (p. 1).
The test is composed of a one-page letter written to the editor of a newspaper urging the adoption
of an ordinance that would prohibit overnight parking on public streets. The letter consists of
eight numbered paragraphs. Test-takers develop a paragraph-by-paragraph analysis of the test
letter with the objective of writing a short essay supporting or refuting each argument in the letter,
as well as a summary paragraph (i.e., a ninth paragraph) evaluating the argument presented.
A scoring sheet is provided by the test developers containing criteria for scoring each of the nine
paragraphs written in response to the letter; according to the scoring sheet, student scores can
range from -9 to +29. The maximum time recommended for the test is 40 minutes.
Possible concerns with using the Ennis-Weir as a general test of critical thinking include issues of
both reliability and validity. Reliability was initially established by having essays, written by 27
college students midway through a college-level introductory informal logic course and by 28 gifted
eighth-grade students of English, graded by two different graders. Inter-rater reliabilities of .86 and
.82, respectively, were obtained; these are sufficiently high correlations for an essay test of this
type.
Reviews of the Ennis-Weir have been generally favorable with some reservations. Tompkins
(1989) considered it useful for testing for critical thinking ability and commended the authors for
developing an “open-ended and content-specific test that allows students to respond to the
arguments presented in the test in a variety of ways” (p.29). She also noted the realistic nature of
the test as a measure of critical thinking but criticized the paucity of validity and reliability data
provided in the test manual. Werner (1991) pointed out that “in assessing both evaluative and
productive aspects of critical thinking, the test… provides a… holistic and naturalistic picture of
critical thinking skills” (p.495). On the other hand, Werner found that the open-ended nature of
the test contributed to a relatively subjective and time-consuming scoring process. Poteet (1989)
noted its limitations as a norm-referenced test but indicated support for its use as an "informal
assessment… in the area of critical thinking" (p.290).
The Ennis-Weir has been used successfully in a variety of situations (Davidson and Dunham,
1996; Hatcher, 1995; Taube, 1997) and has received strong expert support. In a personal
conversation (April 22, 1997), M. N. Browne, author of "Asking the Right Questions" (Browne
and Keeley, 1994) and a member of the Delphi Panel of Experts (Facione, 1990), stated that he had
used the Ennis-Weir test as a classroom exercise and supported its use as a standardized,
nationally recognized test of general reasoning ability on an everyday issue. In his experience,
the Ennis-Weir works well in a pretest/posttest design, although he noted that some students at the
end of a semester-long course devoted to developing critical thinking skills "see things more
richly" than the Ennis-Weir is able to discriminate, indicating a possible ceiling effect.
D. L. Hatcher, Director of the Center for Critical Thinking at Baker University in Baldwin,
Kansas, similarly reported using the Ennis-Weir for six years to assess the critical thinking
abilities of all Baker students at three points in their college career: as entering freshmen, at the
end of a year-long critical reading and writing course, and at the end of their senior year (Hatcher,
1995; personal communication, May 13, 1997). Hatcher expressed satisfaction with the Ennis-Weir
as an appropriate means of assessing and comparing general critical thinking and writing
skills. Hatcher stated that Baker's best students score around 20 out of a possible 29 points on the
Ennis-Weir, indicating no problems with a ceiling effect. He noted that raters need to be carefully
trained and that inter-rater reliability should be checked.
In spite of its limitations (see above), the Ennis-Weir Critical Thinking Essay Test was considered
the most acceptable essay instrument for testing students' general reasoning abilities to
evaluate an example of argumentation and to respond in argument form. Two years before it was
piloted for this research study, the researcher taught and tested the Ennis-Weir Critical Thinking
Essay Test in English Academic writing courses (with Information and Sport Science students) at
the participating institution. The results were evaluated by the instructor as adequate. This
background/experience helped this researcher use the Ennis-Weir to measure students' general
critical thinking abilities both before and after a semester-long instruction in critical thinking.
Thus, the Ennis-Weir Critical Thinking Essay Test was also pilot-tested by this researcher
in the semester preceding the main research project. Two sections of AAU technology students
(N = 90) took the instrument as a pretest and posttest (at the start and end of the course). Both
sections were taught by the researcher, and only the experimental section received a semester-long
training in Critical Thinking Techniques. Results from the pilot study were used to estimate what
changes might be expected after a semester-long course and to determine whether any revisions were
needed in the instrument or the testing procedures. An inter-rater reliability (IRR) analysis was
conducted to check whether the instrument was suitable for the present study (see the test's reliability
and validity below). Two raters, both English instructors, were randomly selected and trained, and
each essay was scored by both of them. The researcher was also present to provide the raters with
some help, where necessary. Scoring procedures were discussed; then each rater scored the essays
individually using the criteria on the score sheets (see App. I) and the suggestions for scoring the
test provided in the Ennis-Weir Test Manual. At several points in this process, while tests were being
scored, the raters compared scoring results and reread, discussed, and rescored essays (when
differences occurred between them) on the scale of -9 to +29. Both raters scored each essay, and their
scores were averaged to provide a mean score. Inter-rater reliability was then calculated using
intra-class correlations (irr package), as detailed by Hallgren (2012).
For the present study, the researcher revised the grading procedure according to Ennis and Weir
to maximize scoring accuracy. Essays were scored blind: each essay was coded and identified by
its code only, and essays were randomly stacked so that section and group (experimental,
control) were unknown to the raters. Both raters scored each essay during a single scoring session.
Raters scored five to ten essays individually and then compared scores. In the relatively few
instances when differences in scores exceeded one point (that is, when an individual rater altered the
way he or she applied the scoring criteria), the essay was reread, discussed, and rescored by each rater.
Each rater kept an individual scoring sheet, and the two scores provided an average score.
The Validity and Reliability of the Ennis-Weir CTET
An inter-rater reliability (IRR) analysis was conducted to assess the degree to which coders
consistently assigned Ennis-Weir essay test ratings to subjects in the study. The distributions of
the essay ratings did not indicate any problems of bias, suggesting that the rubric provided by the
test developers allows the scorers to exercise some latitude in how they evaluate the sufficiency of
the given arguments. When evaluated in college-age populations, inter-rater reliability has
been observed to range from 0.72 to 0.99 (Ennis, 2005). In this study, two raters
scored the E-W performance. Inter-rater reliability was calculated in R using intra-class
correlations (irr package), as detailed by Hallgren (2012), on a subset of two tests. The intra-class
correlation (ICC) was calculated for absolute agreement between the two raters on all 18
measures of these two tests (i.e., 2 tests x 9 paragraph scores per test = 18 measures). The
observed ICC between the two raters was 0.788. Although the variable of interest contained a
small amount of error variance due to the subjective nature of the test and of the ratings given
by the coders, the ratings were judged to be adequate. This instrument (the E-W) was therefore
considered suitable for use in the hypothesis tests of the present study.
The Ennis-Weir essay test of critical thinking seemed to have content validity, since content
validity refers to the ability of a test to capture a measure of the intended domain. In the case of
the Ennis-Weir Critical Thinking Essay Test, the specified domain is "critical thinking" as
defined by Ennis and Weir (1985) and by a number of studies discussed above. Critical thinking, as
defined by Ennis and Weir (1985) and the APA Delphi study (Facione, 1990), is also a construct
that integrates a number of cognitive maneuvers known to be components of this type of
human reasoning process. These maneuvers are included in the APA Delphi study report as
embedded concepts. The test instrument fully assessed the constructs of interest (i.e., the
aspects/areas of critical thinking). The test was considered construct valid since it suitably measured
the trait or theoretical construct of critical thinking skills that it was intended to measure according to
the theory adopted in this study, and it demonstrated strong correlations with the other measures
used in this study (for more details, see the correlations in the Results Chapter).
3.5.2.2 Component Critical Thinking Performance Assessment
As supported by a number of researchers and by the authors themselves, the Ennis-Weir Critical
Thinking Essay Test was used by this researcher to objectively evaluate the impact of explicit
instruction in critical thinking on students' general reasoning ability in the context of
argumentation on an everyday issue. Therefore, one should focus on the quality of general critical
thinking ability in the students' written responses. As a "multi-aspect" critical thinking test, the
Ennis-Weir critical thinking essay test assesses more than one aspect of critical
thinking, usually the ones that the test maker feels are the most basic and important for the level
of sophistication. It incorporates getting the point, seeing the reasons and assumptions, stating
one's point, offering good reasons, seeing other possibilities (including other possible
explanations), and responding to and avoiding equivocation, irrelevance, circularity, reversal of
an if-then (or other conditional) relationship, overgeneralization, credibility problems, and the use
of emotive language to persuade. It is also intended to be used for both formative and summative
evaluation (Ennis & Weir, 1985; Ennis, 2005).
The design of the E-W further allowed this researcher to take a slightly different line and to
examine students' performance on different components of critical thinking as displayed in their
written responses to the Ennis-Weir. This analysis allowed for a rough separation of the core
critical thinking skills employed, and of how well student test-takers demonstrated in their
performance the major components of critical thinking while evaluating the letter article
and generating a written argument in response, in accord with what was taught explicitly during the
semester-long intensive training in critical thinking. Specifically, students' abilities to (1)
understand the particular problem in the letter (interpretation), (2) identify the central arguments in
the letter (analysis), (3) assess the credibility of these arguments, using the logic in the
previous steps to decide what to believe or do (evaluation), (4) understand the consequences of
that decision (inference), (5) communicate the process of one's own thinking clearly, concisely,
accurately and deeply to others (explanation), and (6) engage in an introspective process of
evaluating one's own thinking and remaining open to changing one's own beliefs and
opinions (self-regulation) were assessed as displayed in the students' written responses to the
Ennis-Weir Critical Thinking Essay instrument.
This researcher thus developed an instrument built around a further essay question, "Use your critical
thinking skills to discuss and critique the letter article" (see Appendix I), to evaluate the level of
performance in the different components of critical thinking and the test-taker's strength in such skills
as interpretation, analysis, evaluation, inference, explanation and self-regulation, as displayed in
students' written responses to the Ennis-Weir in both the experimental and control groups.
Assessment results were analyzed by the two raters and the researcher in terms of a grading
rubric devised by the researcher based on the Recommended Performance Assessments for the
CCTST Scale Scores (100-point version) by the team of critical thinking experts at the University
of California (see Table 3.1 below). Students' level of performance in each skill was divided into
five sub-skills; each sub-skill was assigned a number from 1 to 5 depending on the quality of the
response (superior = 5, strong = 4, moderate = 3, weak = 2, not manifested/very weak = 1, or absent
skill = 0), so that each skill received a score out of a possible 25 points. As with the previous tests,
this score was then normed for comparison with the other skill scores. The instrument was first
pilot-tested with all six core critical thinking skills essay questions (interpretation, analysis,
evaluation, inference, explanation, and self-regulation) by assessing the entire response of each
individual student in both the experimental and control groups. The instrument was then revised,
with the "self-regulation" critical thinking skill dropped owing to the lack of evidence for it in
students' written responses.
Following the California Critical Thinking Skills Test family of measures, the level of performance
for five of the six critical thinking skills was thus measured according to the Recommended
Performance Assessments for the CCTST Scale Scores (100-point versions) to identify strengths
and/or weaknesses in all critical thinking skill areas (except the 'self-regulation' skill) (see Tables
3.1 and 3.2 below).
Table 3.1: Recommended Performance Assessments for the CCTST Scale Scores (100-point versions)

CT Skills: Analysis, Interpretation, Inference, Evaluation, Explanation, Induction, Deduction

Recommended Performance Assessments (applied to each scale score):
Not Manifested: 50 – 62
Weak:           63 – 69
Moderate:       70 – 78
Strong:         79 – 85
Superior:       86 – 100

Source: CCTST Test Manual 2013, Insight Assessment/The California Academic Press.
Table 3.2: Descriptions of Recommended Performance Assessment of the CCTST Overall Scores

Superior (86-100%): This result indicates critical thinking skill that is superior to that of the vast
majority of test-takers. Skills at the superior level are consistent with the potential for more advanced
learning and leadership.
Strong (79-85%): This result is consistent with the potential for academic success and career
development.
Moderate (70-78%): This result indicates the potential for skills-related challenges when engaged in
reflective problem-solving and reflective decision-making associated with learning or employee
development.
Weak (63-69%): This result is predictive of difficulties with educational and employment-related
demands for reflective problem solving and reflective decision making.
Not Manifested (50-62%): This result is consistent with possible insufficient test-taker effort, cognitive
fatigue, or possible reading or language comprehension issues.

Source: CCTST Test Manual 2013, Insight Assessment/The California Academic Press.
This manual refines the cut scores to include both the recommended performance assessment of
Not Manifested to indicate very weak scores (not consistent with expected scores for the intended
test-taker group), and Superior to identify those test-takers who score among the best in the
intended test-taker group.
For the CCTST Overall Score reported on a 100-point version, a score of 86 and higher indicates
a consistent strength in all critical thinking skill areas and for this reason is designated as superior.
Scores in this range are associated with strong preceptor ratings and work performance and are
indicative of leadership potential. On this same 100-point version of the CCTST, scores less than
70 display weak overall skill or no manifestation of critical thinking skill and have been
associated with poor performance educationally, in the workplace, and on professional licensure
examinations (Insight Assessment 2013, p.29).
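To make these cut scores concrete, the following minimal Python sketch maps a 100-point CCTST-style scale score onto the performance bands of Tables 3.1 and 3.2; it is offered as a reading aid only and is not part of the published scoring software.

    def cctst_band(score):
        # Cut scores follow Tables 3.1 and 3.2 (CCTST Test Manual 2013)
        if score >= 86:
            return "Superior"
        if score >= 79:
            return "Strong"
        if score >= 70:
            return "Moderate"
        if score >= 63:
            return "Weak"
        return "Not Manifested"

    print(cctst_band(82))   # -> Strong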
3.5.3. The California Critical Thinking Dispositions Inventory (CCTDI)
The purpose of using the CCTDI was to determine whether, after the experiment, students who received
explicit instruction in critical thinking techniques differed in their attitudes or dispositions toward
using their critical thinking abilities from students who did not receive similar instruction. The
California Critical Thinking Disposition Inventory (CCTDI) (Facione and Facione, 1992) is the
first instrument designed to measure the dispositional aspects (dimensions) of critical thinking.
As described earlier, the CCTDI is conceptually grounded in the Delphi Report on Critical
Thinking (American Philosophical Association, 1990).
3.5.3.1 The Seven CCTDI Dispositional Scales
The CCTDI is a 75-item Likert-scale tool with seven scales. In their positive manifestation, these
seven scales are: truth-seeking, open-mindedness, analyticity, systematicity, critical thinking
self-confidence, inquisitiveness, and cognitive maturity. Definitions of the seven scales, as found in
Tiwari (1998, pp. 139-142), follow.
With reference to the CCTDI, truth-seeking is conceptualized as "being eager to seek the best
knowledge in a given context, courageous about asking questions, and honest and objective about
pursuing inquiry even if the findings do not support one's self-interests or one's preconceived
opinions" (Facione, Sanchez and Facione, 1994, p.6). The truth-seeking scale in the CCTDI
measures intellectual honesty, the desire to seek the best knowledge, the inclination to ask
challenging questions, and the willingness to pursue reasons and evidence wherever they lead.
As a scale in the CCTDI, open-mindedness refers to "being tolerant of divergent views and
sensitive to the possibility of one's own bias" (Facione, Sanchez and Facione, 1994, p.5). Thus,
open-mindedness measures one's tolerance for new ideas and divergent views. A positive
disposition to open-mindedness is manifested as an inclination to monitor one's own thinking for
possible bias, and a willingness to respect the rights of others to hold different opinions.
Analyticity is about “prizing the application of reasoning and the use of evidence to resolve
problems, anticipating potential conceptual or practical difficulties, and consistently being alert to
the need to intervene" (Facione, Sanchez and Facione, 1994, p.6). The analyticity scale measures
one's alertness to potential difficulties and sensitivity to the need to intervene. Someone who is
disposed to analyticity would be inclined to value the use of reasons and evidence in solving
problems.
“Being organized, orderly, focused, and diligent in inquiry” (Facione, Sanchez, and Facione,
1994, p.6) is a feature of systematicity. As a scale of the CCTDI, systematicity measures the
inclination to be organized, focused, diligent and persevering. A person with a disposition toward
systematicity would plan his/her approaches in problem-solving in focused and organized ways,
and work with complexity in an orderly manner.
Critical thinking self-confidence, as a scale of the CCTDI, refers to the faith that one has in one's
own reasoning processes. It is suggested that critical thinking self-confidence "allows one to trust
the soundness of one's own reasoned judgments and to lead others in the rational resolution of
problems" (Facione, Sanchez and Facione, 1994, p.6). The critical thinking self-confidence scale
measures trust in one's own reasoning and the ability to guide others to make rational decisions.
Inquisitiveness is "one's intellectual curiosity and one's desire for learning even when the
application of the knowledge is not readily apparent" (Facione, Sanchez and Facione, 1994, p.5).
In this sense, an inquisitive person is one who is curious, eager to acquire knowledge, and
desirous of being well informed. This person is also inclined to ask such questions as 'Why?',
'What is this?' and 'How does it work?' The scale of inquisitiveness in the CCTDI measures
intellectual curiosity and the intention to learn even if the knowledge has no immediate
application.
The cognitively mature person is characterized as someone who "approaches problems, inquiry,
decision making with a sense that some problems are necessarily ill-structured, some situations
admit of more than one plausible option, and many times judgments must be made based on
standards, contexts and evidence which preclude certainty" (Facione, Sanchez and Facione, 1994,
p. 7). Recognizing that one must be judicious in one's decision making, such a person is inclined to
look beyond simplistic and absolutistic points of view and to be prudent in making, suspending, or
revising judgments. At the same time, a cognitively mature person recognizes the need at times to
reach closure in the decision making process, even in the absence of complete knowledge.
Cognitive maturity as a scale of the CCTDI measures judiciousness, which inclines one to see the
complexity in problems and to desire prudent decision making, even in uncertain conditions.
3.5.3.2. The Validity and Reliability of the CCTDI
The California Critical Thinking Dispositions Inventory (CCTDI; Facione and Facione, 1992)
was developed to measure one's inclinations or dispositions toward critical thinking. It was
created using a consensus definition of critical thinking produced by a panel of experts using
Delphi procedures (Facione, 1990). It comprises 75 items (see App. A) to which students
indicate their level of agreement or disagreement on a six-point Likert scale. It takes 20-30
minutes to complete. Items are divided among seven scales representing different dispositions of
the critical thinker. These are truth-seeking, open-mindedness, analyticity, systematicity,
self-confidence, inquisitiveness, and cognitive maturity. The maximum score for each scale is 60.
According to the authors, a score lower than 40 indicates that the individual is weak in that
disposition whereas someone who scores higher than 50 is strong in that disposition (Facione &
Facione, 1992). The maximum total score possible on CCTDI is 420. According to Facione and
Facione, an overall score of 350 or more indicates relative strength on each of the seven scales. A
score below 280 indicates overall weak dispositions toward critical thinking. Cronbach's alpha
reliabilities for the CCTDI have been reported as between .90 and .91 overall across high school
and college students, and scale reliabilities range from .72 to .80. Information has not been
reported for test-retest reliability. Content validity is based on claims that items are derived from
the consensus description of dispositions of critical thinking by the 46 experts involved in the
Delphi Report. Claims of predictive and construct validity have been questioned in a review by
Callahan (1995), but she concluded that the instrument is useful for certain purposes, if, for
example, appropriate caution is used to match items and research questions.
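For illustration, the following minimal Python sketch applies the interpretive cut-offs described above (scale scores below 40 weak, above 50 strong; overall scores of 350 or more strong, below 280 weak); it is a reading aid for the thresholds, not part of the CCTDI scoring protocol.

    def interpret_cctdi_scale(score):
        # Scale scores run up to 60; cut-offs follow Facione and Facione (1992)
        if score < 40:
            return "weak in this disposition"
        if score > 50:
            return "strong in this disposition"
        return "neither clearly weak nor strong"

    def interpret_cctdi_total(total):
        # Overall scores run up to 420
        if total >= 350:
            return "relative strength across the seven scales"
        if total < 280:
            return "overall weak dispositions toward critical thinking"
        return "intermediate overall dispositions"

    print(interpret_cctdi_scale(46), "|", interpret_cctdi_total(301))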
The California Critical Thinking Dispositions Inventory (CCTDI) was likewise pilot-tested
by this researcher (see Table 3.3 below) in the semester preceding the main research project. Two
sections of AAU technology students took the instrument as a pretest and posttest (at the start and
end of the course). Both sections were taught by the researcher, and only the experimental section
received a semester-long training in Critical Thinking Techniques. Results from the pilot study
were used to estimate what changes might be expected after a semester-long course and to
determine whether any revisions were needed in the instrument or the testing procedures. The internal
consistency reliability of the CCTDI was supported by Cronbach's alpha, as shown in Table 3.3.
Cronbach's alphas for the seven individual scales in the CCTDI pilot administration ranged from
.68 to .75 on the pretest (N = 90) and .72 to .78 on the posttest (N = 90). The Cronbach's alpha
reliability for the overall instrument was .89 on the pretest and .92 on the posttest. The alpha levels
in these samples thus provide empirical support for the internal consistency reliability of the CCTDI
reported by the developers of the instrument.
Table 3.3. Internal Consistency Reliability of the CCTDI

                             Pretest                         Posttest
Scale                   Mean    Cronbach's alpha       Mean    Cronbach's alpha
Truth-Seeking           39.49         .73              41.73         .77
Open-Mindedness         38.88         .73              39.90         .74
Analyticity             46.16         .69              46.89         .79
Systematicity           41.67         .73              43.16         .75
CT Self-confidence      46.88         .74              44.56         .72
CT Inquisitiveness      43.91         .75              44.06         .78
Cognitive Maturity      43.77         .68              43.29         .78
CCTDI Total            300.76         .89             303.59         .92
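As a companion to Table 3.3, the following minimal Python sketch shows how a Cronbach's alpha of the kind reported there can be computed from a respondents-by-items matrix of Likert responses; the tiny response matrix is an invented placeholder, not the pilot data.

    import numpy as np

    def cronbach_alpha(item_scores):
        # item_scores: n_respondents x n_items matrix of Likert responses for one scale
        x = np.asarray(item_scores, dtype=float)
        k = x.shape[1]
        sum_item_var = x.var(axis=0, ddof=1).sum()   # sum of the item variances
        total_var = x.sum(axis=1).var(ddof=1)        # variance of respondents' total scores
        return (k / (k - 1)) * (1 - sum_item_var / total_var)

    responses = np.array([[4, 5, 3, 4], [2, 3, 2, 3], [5, 4, 4, 5],
                          [3, 3, 2, 2], [4, 4, 5, 4], [1, 2, 2, 1]])
    print(round(cronbach_alpha(responses), 2))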
As the CCTDI is the first objective instrument to measure the dispositional dimensions of critical
thinking, convergent validity studies are only just emerging. For example, Sanchez (1993) has
reported significant correlations supporting the concurrent validity between individual CCTDI
scales and established psychological scales targeting those constructs.
As the CCTDI involves attitudinal measures, there is always the concern that social desirability
response bias may threaten the test results. Empirically, however, there is no evidence to suggest that
social desirability is a threat to the validity of the CCTDI scores; the available evidence suggests that
social desirability response bias accounts for less than 1% of the variance in CCTDI scores (private
correspondence with Dr. Noreen Facione, one of the test constructors of the CCTDI, 10
December 1997).
To reduce the likelihood of respondents giving socially desirable responses, the scale items are
interspersed in the CCTDI, and the names of the seven scales are not revealed in the instrument
(see App.). The name of the instrument is given only by its initials, and no connection is made to
critical thinking.
As the CCTDI is conceptualized in North America (Facione, 1990), whether it is suitable for
university students (respondents) in Ethiopia deserves careful consideration. Several steps were
taken in this study to ensure that the issue of cultural sensitivity was not overlooked in using a
'Western' instrument. Prior to the administration of the instruments, the CCTDI was submitted
to the research supervisor and two instructors in the discipline of Psychology. They were invited to
judge the appropriateness of the items in terms of their compatibility with the local values and
customs. The panel members were satisfied that the English version of the CCTDI was
compatible with the local norms and values and could also be used across cultures.
3.6. The Rating Scale for Test Instruments
A scoring sheet was developed according to which the raters were able to make more valid,
reliable, and consistent assessments. Graders were provided with a set of suggested criteria and
scoring instructions for rating a student's performance on the tests. They were encouraged to use
judgment in applying the criteria, and to add or subtract points for unspecified insights or errors.
That is, the approach adopted allowed for greater flexibility and freedom.
3.6.1. The Instructor-developed Test
The scoring sheet/checklist was adapted from the analytic descriptors of written argumentation
(Gittens, 2011; Insight Assessment, www.insightassessment.com). This Rubric for Evaluating
Written Argumentation (REWA) helped the two raters evaluate the students' academic essays on the
test locally developed by the researcher/instructor, and it addresses eight different aspects of
sound and effective writing (see App. G). The items on the list are: purpose and focus; depth of
thought; thesis; reasoning; organization; voice; grammar and vocabulary; and mechanics and
presentation. Level descriptors that describe the levels of performance range from 0 to 3. That is,
a rating scale from 0 (non-scorable) to 3 (highly developed) (3 = highly developed; 2 = developed;
1 = underdeveloped; NS (0) = substandard) was specified for each item in the score sheet,
according to which the raters assigned scores to the students' essay writing. To compute the total
score, the ratings were summed for each student and normed to a percentage of the total score
possible. For example, a student who received "developed" for all 8 aspects of performance would
score (2 x 8) = 16 out of 24, which translates to 66.7 per cent when normed.
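The norming step just described amounts to dividing the summed rubric ratings by the 24 points possible; a minimal Python sketch, illustrative only and not the scoring software actually used, is shown below.

    def norm_rewa(aspect_ratings):
        # aspect_ratings: eight rubric ratings, each 0-3, for one essay
        return sum(aspect_ratings) / (3 * len(aspect_ratings)) * 100

    print(round(norm_rewa([2] * 8), 1))   # all "developed" -> 66.7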
3.6.2. The Ennis-Weir Critical Thinking Essay Test
For the Ennis-Weir, the criteria and scoring sheet (see App. I) developed by the test authors
themselves was used in this study to assess students' ability to evaluate an argument and to
generate a written argument in response. The rubric allowed the scorers to exercise some latitude
in how they evaluated the sufficiency of the given arguments. The Ennis-Weir Critical Thinking
Essay Test is scored on a range of -9 to +29 points (see Table … for details). Performance on the
Ennis-Weir did not have an impact on a student's grade in class, and it was scored only as a validated
measure of critical thinking. The specific skills assessed in the E-W are presented in Appendix I.
Paragraph by paragraph, the student is required to evaluate the sufficiency of the argument
presented. The E-W includes an extensive and specific scoring rubric in which students receive up
to 3 points for each of paragraphs 1 to 8 and up to 5 points for paragraph 9 for correctly evaluating
and judging the reasoning presented in Mr. Raywift's editorial. Students can also lose a point for
judging an argument incorrectly or for demonstrating bad judgment in their responses. For
example, a student who scored 2 out of a possible 3 points in each of the first eight paragraphs
and 3 out of 5 points in paragraph 9 received 19 points out of the maximum 29. As with the
previous tests, this score was then normed to a percentage of the total score possible for
comparison; when translated, this became 65.52%.
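A minimal Python sketch of this totalling and norming step follows; it is illustrative only and assumes the per-paragraph scores have already been assigned by the raters.

    def norm_ennis_weir(paragraph_scores):
        # paragraph_scores: nine per-paragraph scores from the E-W scoring sheet
        raw = sum(paragraph_scores)      # can be as low as -9 and as high as +29
        return raw, raw / 29 * 100       # raw score and normed percentage

    raw, pct = norm_ennis_weir([2] * 8 + [3])
    print(raw, round(pct, 2))            # 19 65.52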
3.6.3. The California Critical Thinking Dispositions Inventory (CCTDI)
With respect to the California Critical Thinking Dispositions Inventory (CCTDI), the researcher
conducted a factor analysis on the original disposition instrument, which is composed of 75 items
under seven constructs, because the seven dispositions measured by the CCTDI were highly
correlated with each other (Rick Rudd, personal communication, February 10, 2005). Students
were then asked to rate themselves on a six-point Likert scale from 0 (Strongly Disagree) to
5 (Strongly Agree) (see Appendix …). To compute the total score, student responses were totaled
for each construct and normed to the total score possible. For example, a student who circled
"Agree" for all 12 items of the truth-seeking questions would score 36 out of 60.
3.7 The Research Hypothesis: the null hypothesis
The central hypothesis of this study (the null form) is that there is no significant difference
between the experimental and control groups in relation to the outcome measures before and after
the treatment.
In following the common practice of expressing hypotheses in null form, this researcher is aware
that such a convention incorporates several potential dangers and should not be accepted
uncritically. For example, the researcher may find that the data show no relationship or effect
according to the chosen statistical tests and criteria; in other words, the researcher cannot reject
the null hypothesis. However, such a null result may be due to explanations other than the
real experimental effects. These misleading explanations may arise because: (1) the trend
that the researcher believes in does not really exist, (2) the sample of cases and observations
included in the analysis is biased, (3) there are insufficient cases in the sample to detect the trend,
or (4) the measurement chosen has a very high or low inherent variability (Polit & Hungler, 1995).
Despite the risk, it is thought that expressing hypotheses in null form is appropriate in this study.
The decision is based on the following rationale. With regard to the hypotheses about the
relationship between the independent and the outcome measures, this researcher has some hopes
and preferences, but no evidence to assert or deny that the experimental intervention (that is,
Integrating Explicit Instruction in Critical Thinking into an Undergraduate Academic Writing
Course) would make a difference to one or some of the students‟ critical thinking skills,
dispositions, and performance in writing for academic purpose since no published research
exists(so far to this researcher‟s knowledge) for predicting the direction of the outcome. Thus, the
decision to express the hypotheses in null form in the present study is considered to be justified.
In this study, the level of confidence is set at 95%; that is, there is a 5% probability that the results
could have happened by chance. With a probability level of 0.05 (that is, p = .05), the null
hypothesis is rejected if the difference is significant (i.e., p ≤ .05), or not rejected if
the difference is not significant (i.e., p > .05). Therefore, an alpha level of 0.05 (Sig. 2-tailed) was
set for judging the significance of the output values.
3.8. The Experimental Procedure
As discussed earlier in this Chapter, sections were randomly assigned to experimental and control
groups. Tests for academic writing skills, the Ennis-Weir Critical Thinking Essay, and the CCTDI
were used to collect data at two points in time (pretest and posttest). The researcher taught two
of the three sections of academic writing students in the Department of Economics, one section
as the experimental group and the other as the control group. The experimental and control groups both
received 150 minutes of classroom instruction per week for one semester (i.e., nearly 13 weeks).
At the end of the first week of classes, pretesting began. Regular and experimental activities,
including the administration of pretest and posttest instruments, lasted for 13 weeks.
During the semester preceding the main study, the instructional methods and materials, and
assessment instruments intended for use in the research project were pretested (pilot-tested). The
primary purposes for testing these aspects were to provide the researcher with additional practice
and experience in engaging students in the instructional treatment in order to enable a smooth
transition into the actual experiment, to provide the instructor and other raters with experience in
scoring the tests, and to identify possible problems (if any) with the instruments or the
way they were administered. It was also meant to reveal any significant problems with student
reactions to both the instructional program and assessment instruments.
In response, several adjustments were made on the basis of the preliminary (pilot) study. The instructor
decided to eliminate the issue of gender due to the small number of female students in each
section. Interview questions on students' critical thinking dispositions (attitudes), and the scoring
procedures for them, were also changed from a qualitative to a quantitative format. Modifications were
made to the Critical Thinking Packet to include definitions of critical thinking and critical
thinking skills, definitions of the elements of reasoning, and information on thinking fallacies.
The instructor also decided to require only two or three selected readings from each part of the
module in the source reader instead of complete chapters, due to time limits. Some instructional
methods were also modified in response to students' frustration.
Student motivation could potentially influence the accuracy of data from these instruments. To
increase students‟ motivation to do their best on the various assessments used in this study, points
used in the calculation of final course grades were assigned for each instrument (five points for
the pretest). Toward the end of the course, the instructor generally explained the rationale for
taking the tests and students were told that data obtained from these instruments would help
faculty improve instruction for subsequent students. When students took the Ennis-Weir and the
CCTDI as posttests, they received five additional points. However, the academic writing skills
tests (both pretest and posttest) were considered part of the continuous course assessment and
scores were reported later.
The instructor met the two sections participating in the study at 3:00 on Monday and Wednesday,
and on Friday and Saturday from 1:30 to 3:30. The 3MW section met the instructor on one campus, while the
Friday and Saturday (1:30-3:30 P.M.) section met the instructor on the other campus. These
sections were assigned to these different campuses of the university by the department, not by the
instructor. However, the two sections were randomly assigned to the experimental and control
groups, by campus, by the instructor. The 3MW section became the experimental group, and the Friday
and Saturday section became the control group.
Approximately five hours of course time was spent on testing related to the study, and the rest of
the time was spent on regular course activities and experimental training. The next section
describes the instructional methods (or instructional procedures) and materials used with both
experimental and control groups.
3.9 Instructional Method and Materials
3.9.1 Experimental Group
In order to investigate the probable effects of integrating explicit instruction in critical thinking
into a regular academic writing course on university undergraduate EFL learners' abilities to think
critically and write thoughtfully in the disciplines, students were randomly assigned to
experimental and control sections. The APA Delphi Report (Facione, 1990) on Teaching for
Critical Thinking and Assessment was used as the basis for the experimental treatment in this
study. The instructor infused the techniques into the content of the academic writing course in the
experimental section by (a) teaching the techniques explicitly, (b) training students in how to use the
techniques to analyze, synthesize, and evaluate texts, (c) providing critical thinking classroom
support materials (including handouts and models) for the instructional techniques, (d) conducting
Socratic discussions according to the elements and standards set forth in the instructional
techniques, and (e) giving classroom activities/assignments, allowing time to practice each skill
using both written and oral techniques, and evaluating students' performance.
Student participants used two academic EFL writing materials (see Table 3.4 below). One of
these materials, however, was adapted by the instructor/researcher for use in the experimental
section. Students in both the experimental and control groups used the same readings. Classroom
activities and assignments were also the same (except that critical thinking activities were explicitly
incorporated into subsequent classroom activities and writing assignments for the experimental section),
and both were based on the reading documents. Each material contains multiple documents from different
sources representing divergent viewpoints, together with questions that probe factual understanding and
critical thinking.
During the week following the completion of the pre-instruction tests, the instructor distributed a
packet (package) of critical thinking skills to the experimental group students and began introducing
it, emphasizing what is in it and why it matters. The packet includes: definitions of critical
thinking and critical thinking skills; effective techniques for building critical thinking skills and
habits of mind; and Critical Thinking Classroom Support Materials (i.e., the Critical Thinking Toolbox, a
four-part chart showing the relationship between elements, standards, skills/abilities, and
dispositions (attitudes) and helping students develop critical thinking skills for better performance in
their academics and in everyday life situations), together with their definitions. The Critical Thinking
Toolbox (from which students can choose a strategy to apply to a particular question or activity)
was borrowed from Paul‟s model (Foundation for Critical Thinking, 1996) and explicitly
addressed to students in the experimental section. The toolbox (Paul's model) includes the
skills/abilities of critical thinking (i.e., interpretation, analysis, inference, evaluation, explanation,
and self-regulation, the last unique to the Delphi Report) and the elements of reasoning: Purpose of the
thinking (goal, objective), Question at issue or problem to be solved, Concepts (e.g., theories,
definitions, principles), Information (data, facts, observations), Points of View (frame of
reference, perspective), Inferences and Interpretations (conclusions, solutions), Assumptions, and
Consequences and Implications. The instructor listed these skills and elements of reasoning being
emphasized on the board, defined each, and used them in the reading and writing activities. The
instructor asked students to use the "Toolbox," from which they could choose a strategy to apply to a
particular quandary or question. For example:
1. If a student is trying to ascertain why a character in a novel is acting in a certain way, an
inferencing (skill) strategy that assists students in drawing conclusions and reading between
the lines might be the most appropriate tool to apply.
2. If a student is asked to evaluate and defend the worthiness of a novel, the student will
necessarily have to apply a variety of critical thinking strategies to this task: the student will
have to examine his/her own thinking (skill), return to the novel for evidence (an element)
that supports that thinking, and utilize an appropriate format (written or oral) to communicate
that thinking.
Paul's model was selected as one of the principal critical thinking classroom support
materials (the toolbox) because it diagrams the important elements of reasoning, the universal
intellectual standards used to assess student reasoning, the traits (dispositions) of the reasoning mind,
and abilities/skills of critical thinking all in one. Paul presents his approach to critical thinking as
a general model of reasoning that can be applied to any problem or issue requiring reasoning. It
was chosen from among several alternative models (for example, Adler, 1982; Browne and
Keeley, 1994; Halpern, 1996; King, 1994) because of its applicability to document or text
analysis, because it incorporates critical thinking standards, and because it addresses students‟
dispositions in the development of their critical thinking skills. It can also be infused into any
academic content and has the additional advantage of being useful for thinking about academic
subjects and/or everyday issues (Reed, 1998). A graphic summary of the toolbox (Paul‟s model)
is presented in Appendix A (Critical Thinking Packet).
From the very first week of the course, the instructor began emphasizing critical thinking in the
experimental section. First, the instructor made use of scaffolded, specific critical thinking
strategies (designed to progress gradually from the basic to the complex), beginning with basic
questioning strategies and then building toward higher-order critical thinking skills/abilities. That
is, by asking questions like those listed in Table 2.3 above and Appendix B (CT Tasks). The
instructor raised the quality of classroom discussions from simple information sharing and
opinion giving on the topic toward the level of analysis, interpretation, inference, explanation,
evaluation and self-regulation (see the Tasks prepared, Appendix B) and then toward writing
their assignments. Students then participated in a Socratic discussion on the question, for
example, “What is Academic Writing?” Then, the instructor presented academic writers‟
strategies that required the use of higher order thinking skills as investigated and described by
Swales (1990a) and Swales and Feak (1994): academic writers are expected to evaluate their sources
and the opinions expressed in them critically; evaluate information and the author's tone;
analyze and explain data; summarize; and paraphrase. Academic writers are also expected to
signal their own views on the topic, issue, or author‟s tone after summarizing the information
obtained from published sources such as books, articles, reports, or print news.
The students were assigned to examine the images and read the document (reading text) that followed, applying
the elements of reasoning to their reading for the next class period. Students were told that they
would receive credit based on their efforts to complete the assignments and that the class would
work in small groups to better understand the assignment. During the next class meeting, students
were put in groups of three or four to share their findings on the images/photos and the accompanying readings,
and to analyze the credibility of information related to the issue or topic. While students
were working collaboratively, the instructor checked students‟ papers and gave individual credit
where appropriate. Student groups were then called on to share their findings with the whole
class, and discussion followed on how well or poorly the assigned readings supported each other‟s
viewpoints. This activity served to help students better understand what kinds of reasoning were
being expected of them as well as to improve their comprehension of academic writing skills.
To further familiarize students in the experimental section with the critical thinking skills and
techniques for critical thinking, the instructor listed aspects of critical thinking on the board and
introduced a current topic into class discussion. Then students were asked to read, to analyze
and/or evaluate the information covered by the author of the text based on the elements of
reasoning, and to critique the credibility of the information and the rhetorical strategies and features
the author used in the essay. This activity served to increase students‟ familiarity with the toolbox
(Paul‟s model) they were being asked to use to analyze reading sources, and then write their
assignments using aspects of critical thinking in the toolbox. This illustrated the broad
applicability of the critical thinking classroom support materials.
The instructor encouraged students in the experimental section to use the standards included in
the toolbox in class discussions and in written work throughout the semester to evaluate their
thinking and be able to give someone a full look at the big picture: both “to state and to justify
that reasoning in terms of the evidential, conceptual, methodological, criteriological, and
contextual considerations upon which one‟s results were based; and to present one‟s reasoning in
the form of cogent arguments" (Facione, 1998, p. 6; www.insightassessment.com). For example, a
student who used the word 'academic' in connection with academic writing was asked to
clarify (a standard) the concept (an element). A student assigned to evaluate an "academic essay"
as an out-of-class assignment might be asked (in class) to further explain what evidence (an
element) might be relevant (a standard) to the issue, or asked to broaden (a standard) his/her
perspective and consider another point of view (an element).
To a lesser extent (both less explicitly and less frequently), students were encouraged to use
critical thinking dispositions by making the language of thinking a familiar vocabulary. For
example, the instructor modeled strong critical thinking for students by saying, "Let's be
systematic (disposition) in our analysis (skill)," and by encouraging students to be open-minded
(disposition) and to find or consider another point of view (element), which supported intellectual empathy
and open-mindedness. Further, each of these important dispositions or traits encourages a critical
thinker to assess his/her own thinking. Students were also introduced to self-assessment in the
structured controversy and the required essay. Students received copies of the grading standards
and explicit instruction on how the grading standards reflected the elements of reasoning and
intellectual standards.
Reasoning fallacies were also addressed on occasion as they appeared in the readings selected
for analysis. Students in the experimental section were regularly asked to check the credibility of
sources. Examples of faulty assumptions, questionable analogies, equivocation,
overgeneralization, emotional language, and insufficient evidence were readily found in the documents.
In the experimental class, in addition to having Socratic discussions on the given issues, a typical
class period also included lecture (for no more than 10-15 minutes for a 50 minute class period)
and some kind of student and teacher modeling activity. The lecture format of learning is a
popular approach to content delivery in higher education; however, it frequently does not
encourage active learning or critical thinking on the part of students. In the context of critical
thinking instruction, the main goal is to actively do something with and reflect on the meaning of
what has been done. Yet, it is useful for students to gain some exposure to the materials and
methods through pre-class readings and overview lectures. As part of strategy instruction, I also
used a "think aloud" technique with assigned readings, in an effort to model and reveal how I think critically
about, for example, a passage from the text. For instance: "The passage
says…, so I think…, and so I can guess…" This was done with the whole class as a group. I then asked the class to practice
inferencing with several passages or texts from a given paper.
During the remainder of the treatment, students were asked to refer to the sections of the packet
for various classroom activities and assignments or to review it before tests. Additionally,
students in the experimental group were encouraged to use information in the packet in readings
outside the academic writing course, whether for academic assignments, everyday issues within
the local community, or leisure reading.
In summary, in an effort to improve students' critical thinking and academic writing abilities,
instructor employed an integrated explicit instructional approach with the following textbooks and
other additional source readings:
1. Textbooks:
1. Teaching Academic ESL Writing: Practical Techniques in Vocabulary and Grammar
(Hinkel, 2004),
2. The Write Stuff: Thinking Through Essays (Sims, 2009),
2. Additional Source Readings as Assigned by the Instructor:
1. Two sample student argument essays (Danger Around Every Corner; Overpopulation),
2. Two sample professional argumentative/persuasive essays/term papers (Hate speech in a
College Campus; Tourism in Ethiopia),
3. Two research article summaries
4. Two newspaper articles on some “Hot issues,” (for example, analyzing newspaper
editorials and critiquing campaign speeches)
Collectively, these assessments required students to actively engage with different aspects of
critical thinking skills (for example, interpreting, analyzing, synthesizing, evaluating, inferencing, and
self-regulation) and to expand their metacognition through the completion of reflection
assignments and classroom activities.
Appendix S (see pp. 270-277) contains a summary of the course information, course goals and
objectives, critical thinking activities and assignments, assigned source readings, critical thinking
instructional techniques for the experimental group, and illustrative lesson plans for both
the experimental and control groups.
3.9.2 The Control Group
Students in the control group used the same academic writing course material(s) as the
experimental group, prepared by the university's Department of Foreign Languages and Literature
(JU). On the first day of class in the control section, the instructor introduced the
materials and methods, explaining academic writing and its various concepts
through definitions and examples.
Assigned readings for the control group were the same as those for the experimental group, but
the two groups used different instructional approaches for analyzing and evaluating reading
sources. Instead of training students to use the “Reasoning skills,” to analyze and interpret
sources, the instructor assigned students in the control class to complete the questions at the
beginning and end of the sources. The questions were well-written and appropriate for the
students‟ levels. However, to be successful in answering all of these questions, students needed to
use many of the elements of reasoning (or critical thinking skills) made explicit in the critical
thinking classroom support materials (Paul‟s model) and to draw on academic writers‟ strategies
for understanding and interpreting sources in academic papers or essays. For example, some of
the questions required students to examine divergent points of view, to clarify important concepts,
to make inferences, and to use information and evidence to formulate arguments. The essential
difference between the approaches to source analysis used in the control group and in the
experimental group was the explicitness of the critical thinking training provided to the
experimental group. Students in the experimental group were explicitly taught how to use the
techniques and the critical thinking classroom support material (i.e., Paul's model), which provided
appropriate general questions to ask about any document.
A typical class period included lecture (more than 30 minutes of a 50-minute class period)
and some kind of student activity, typically a discussion of assigned source readings.
Occasionally, more detailed (longer) lectures were given in the control section than the time spent
explaining or familiarizing students in the experimental group with the instructional techniques for
critical thinking. In the control section, class discussion focused more on factual information and was
conducted more didactically. Every effort was made to keep activities identical in the control and
experimental groups except for the critical thinking training materials. As in the experimental
group, the control group was given several assignments and regular activities that required the use of
higher-order thinking skills in students' writing for academic purposes.
Testing throughout the semester was the same in control and experimental classes. Other
assignments, including a group position paper written by participants in the small group following
the structured controversy, an essay, and daily assignments, were the same as those in the
experimental group. Tests and other assessments are described in more detail in the previous
section. Grading procedures were also the same for experimental and control groups.
Table 3.4 provides a comparative summary of the instructional methods and materials used in
each group. It should be apparent that both the experimental and control groups were exposed to a
variety of documents (or readings) in academic writing and were given assignments requiring the
use of higher order thinking. Yet, the experimental section was given explicit instruction and
training in critical thinking according to the APA Delphi Research Report in addition to the
activities and assignments required of the control group.
Table 3.4. Comparison of Instructional Methods and Materials for Experimental and
Control Groups
Methods and Materials | Experimental | Control
Academic Writing Skills Material(s) | Same | Same
Assigned readings | Same | Same
Lectures | Occasional (no more than 10-15 minutes of a 50-minute class period) | Extended throughout the course
Instruction in Critical Thinking | Approximately 2:00 hours of direct instruction (for a 2:40-hour class period per week) | None
Critical Thinking Packet | Yes | No
Critical Thinking Instructional Techniques | Explicit (direct), scaffolded, and intense, with divergent questioning at the higher levels of analysis, evaluation, synthesis, and inference, and Socratic discussion (students often challenge each other) | Traditional core subject matter instruction/implicit; convergent questions at the lower levels of knowledge, comprehension, and application were frequently used
Analysis of academic papers/essays | Frequently practiced in and out of class | Not often
Academic writers' strategies | Emphasized throughout the course | Introduced
Analysis of readings | With emphasis on logical reasoning and in-depth reflective dialog, leading to reflective writing | With focus on some factual questions and answers
Tests, grading procedures and rubrics | Same | Same
Content | Small | Large
Other Critical Thinking Classroom support materials | The use of the Critical Thinking Toolbox (Paul's model) when writing assignments, photographs/pictures for analysis, checklists for self- and peer assessment of one's own CT | None
Critical thinking training | Yes | No
Duration of the training | Same | Same
The duration of the training was also the same for both the control and experimental groups. In her
review of the issue in strategy research at large, Machon (in press), as cited in Cohn and
Macaro (2007:247), notes that the duration of the program matters: longer programs (lasting at least 10 to 15
weeks) seem to produce better outcomes than shorter ones. Accordingly, the training was
conducted for an average of 13 weeks (a semester). After 11 weeks of teaching/learning
processes, the first post-instruction test, on academic writing skills, was carried out. The
writing topic was identical to that of the pre-instruction test. After the experimental course of
12 weeks, the second (the Ennis-Weir CTET) and third (the CCTDI) posttests were administered
to the experimental and control groups. Then, after the coding and marking of the test data by
the two raters, the analysis was carried out using appropriate statistical techniques.
3.10. Method of Data Analysis
To address the specific research questions previously stated in Chapter I, a number of books on
statistics and several university statisticians were consulted, and the use of descriptive statistics and the t-test for
independent samples was suggested for the data analyses. Descriptive analysis of data for variables in
a study includes describing the results through means, standard deviations, and the range of scores
(Creswell, 2007:228). Descriptive statistics are also used to summarize achievement scores at the
beginning (pretest) and end (posttest) of the course by method of instruction. The rationale for
using a parametric test is as follows: (1) a parametric test will have a greater chance of picking
up differences between the groups which a non-parametric test may fail to detect; and (2) based on the
central limit theorem, when scores are added together (as in the outcome measures used in this
study), they tend to resemble the normal distribution, which is the most important consideration in
determining the use of a parametric test (Pallant, 2011). The t-test was selected for analysis because it
compares means while testing the significance of differences between two groups
(Creswell, 2009). Hence, descriptive statistics and t-tests were run for each research instrument
used in this study. The descriptive statistics provided insights regarding differences
in the mean scores and standard deviations reflecting the overall effects of the intervention, while the
statistical significance of the difference between the mean posttest scores of the two groups
was assessed using independent-samples t-tests.
In this chapter, sections on each instrument follow and contain statistical tables and commentary
about results from descriptive statistics and independent-samples t-test from each instrument. The
columns labeled t, df, and Sig. (2-tailed) provide the standard “answer” for the t-test. They provide
the value of t, the degrees of freedom (number of participants, minus 2, in this case), and the
significance level (often called P). Finally, to help determine the practical significance of these
results, an effect size was calculated using Cohen's d, one of the simplest and most popular
measures of effect size. Cohen's d is a member of a class of measurements called
"standardized mean differences." In essence, d is the difference between the two means divided
by the pooled standard deviation (Brian, 2008:103). Cohen's d has two advantages
over other effect-size measurements. First, its burgeoning popularity is making it the standard;
its calculation therefore enables immediate comparison with an increasingly large number of published
studies. Second, Cohen's (1992) suggestion that effect sizes of 0.20 are small, 0.50 medium,
and 0.80 large provides a simple basis for interpreting the value obtained and for comparing an
experiment's effect-size results to known
benchmarks. Cohen's d was therefore used as the preferred measure of effect size for the t-tests.
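As a minimal illustrative sketch (an assumption about implementation; the study reports d values derived from its own SPSS output), Cohen's d can be computed from two groups of scores as follows:

```python
# Illustrative sketch (not the study's procedure): Cohen's d as the difference
# between two group means divided by the pooled standard deviation.
import math

def cohens_d(group1, group2):
    n1, n2 = len(group1), len(group2)
    m1 = sum(group1) / n1
    m2 = sum(group2) / n2
    v1 = sum((x - m1) ** 2 for x in group1) / (n1 - 1)   # sample variance, group 1
    v2 = sum((x - m2) ** 2 for x in group2) / (n2 - 1)   # sample variance, group 2
    pooled_sd = math.sqrt(((n1 - 1) * v1 + (n2 - 1) * v2) / (n1 + n2 - 2))
    return (m1 - m2) / pooled_sd

# Interpretation benchmarks (Cohen, 1992): 0.20 small, 0.50 medium, 0.80 large.
```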
While Cohen‟s d is the appropriate measure of effect size for t-tests, correlation and regression
effect sizes should be determined by squaring the correlation coefficient. This squared correlation
is called the coefficient of determination (r2) (Brian, 2008, p.105). Cohen (1988) suggested here
that correlations of 0.5, 0.3, 0.1 corresponded to large, moderate, and small relationships.
According to Cohen, those values squared yield coefficients of determination of 0.25, 0.09, and
0.01 respectively. It would appear, therefore, that Cohen is suggesting that accounting for 25% of
the variability represents a large effect, 9% a moderate effect, and 1% a small effect. These
effect-size values were therefore used in interpreting the
correlations in this study.
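To illustrate (with hypothetical numbers, not study data), the coefficient of determination is simply the square of the Pearson correlation:

```python
# Illustrative sketch (assumed, not the study's actual computation): the
# coefficient of determination (r squared) from a Pearson correlation.
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.1, 2.9, 4.2, 4.8, 6.1])

r = np.corrcoef(x, y)[0, 1]
r_squared = r ** 2
# Per Cohen (1988): r = .50 (r^2 = .25) large, r = .30 (r^2 = .09) moderate,
# r = .10 (r^2 = .01) small.
print(round(r, 3), round(r_squared, 3))
```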
In an independent-samples t-test, a difference is said to be significant if the output value under
Sig. (2-tailed) is equal to or smaller than 0.05 (we then reject the null hypothesis), and
not significant if the output value under Sig. (2-tailed) is larger than 0.05 (we fail to
reject the null hypothesis). Normally, the "Equal variances assumed" row is used (Brian, 2008;
Creswell, 2009).
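The following short sketch (illustrative only; the actual analyses were run in SPSS 16.0, and the score lists below are hypothetical placeholders) shows this decision rule applied to an independent-samples t-test assuming equal variances:

```python
# Illustrative sketch (not the study's SPSS procedure): an independent-samples
# t-test assuming equal variances, with the alpha = 0.05 decision rule above.
from scipy import stats

experimental = [70, 68, 75, 72, 66, 71, 74, 69]   # hypothetical posttest scores
control = [58, 61, 55, 60, 57, 62, 59, 56]        # hypothetical posttest scores

t_value, p_two_tailed = stats.ttest_ind(experimental, control, equal_var=True)

alpha = 0.05
if p_two_tailed <= alpha:
    print(f"t = {t_value:.3f}, p = {p_two_tailed:.3f}: reject the null hypothesis")
else:
    print(f"t = {t_value:.3f}, p = {p_two_tailed:.3f}: fail to reject the null hypothesis")
```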
3.11. A Short Summary of the Pilot Study
A pilot study using a small group of students from Addis Ababa University was conducted. The
pilot study had two main objectives. The first was to check the appropriateness of
the research questions, the instruments under investigation, and the teaching material prepared for this
study, to refine them where possible, and to indicate the usefulness of undertaking the main
study. The second was to use the results of the pilot study as a springboard for the main
study. As already mentioned, Addis Ababa University was selected as the site for the pilot study
using a convenience (purposive) sampling technique.
The subjects of the pilot study were students of writing for academic purposes in the Technology
faculty. Of the two intact sections (n = 90) selected for the treatment, 45 students formed the
experimental group; males constituted 60% (n = 27) of this sample, while females constituted 40%
(n = 18). The other 45 students (males = 71.1%, n = 32; females = 28.9%,
n = 13) formed the control group. These two groups of students were considered
representative of the two types of students (experimental and control) that would be used for the
main study of this research. Sections, but not individual participants, were randomly assigned to
experimental and control conditions prior to the experimental manipulation stage. The point of
doing this, as indicated in the previous sections of this study, was to ensure that groups receiving
different treatments were reasonably equal or similar in any way that could possibly affect the
outcome (or dependent variable).
A two-group quasi-experimental pretest/posttest control group design was employed for students
in an experimental group (n = 45) and students in a control group (n = 45), who were taught using the
more traditional lecture method. Prior to the design, implementation, and evaluation of the
experiment, a literature review was carried out. The literature review examined the theory and
practice of learning critical thinking in both general and EFL education.
The pilot experiment was then conducted over a full semester by infusing the
newly prepared instructional methods in critical thinking into the teaching material (for the experimental
group), while the control group used the more traditional material. The pilot experiment was
conducted by the researcher. The question at the heart of the study was: to what extent is explicit
instruction in critical thinking responsible for enhancing students' critical thinking abilities and
learning processes in EFL contexts? The main objective of this study was to test the impact of
the instruction on three outcome measures.
Statistical results from three data collection instruments, namely, the researcher developed
Academic Essay Writing Skills Assessment, the Ennis-Weir general Critical Thinking Essay Test
(Ennis-Weir, 1985), and the California Critical Thinking Dispositions Inventory (CCTDI, Facione
& Facione, 1992) were pilot-tested by this researcher in the semester preceding the main research
project. Two intact sections of AAU technology students took the instruments as pretest and
posttest (at the start and end of the course), and the results were compared to examine differences
between the treatment and control groups, and male and female students in the experimental
group. Descriptive statistics, Paired-Samples T-test, and T-test for Independent Samples were
conducted as methods of data analyses. The Paired-samples T-test (also called a dependent t-test)
compared the means of two scores from related (same) samples (for example, scores for
experimental male and female students) while the independent-samples t-test compares the means
of two independent samples (for example, experimental and control groups).
Thus, significant differences were found between the experimental and control groups in achievement on
the Academic Essay Writing Skills assessment, the general critical thinking ability test (Ennis & Weir, 1985),
and critical thinking dispositions (as measured by the California
Critical Thinking Dispositions Inventory, CCTDI). The experimental group outperformed the control group.
However, no significant difference was found between male and female students on the critical
thinking ability test in the experimental group.
The validity and reliability of the three outcome instruments were also examined, to assess the
degree to which coders consistently assigned test ratings to subjects (n = 90) in the pilot study and
to determine whether these outcome measures were suitable (valid and reliable) for use in the main study.
In this pilot study, two raters scored the test instruments in accordance with the
rubric provided by the researcher. Inter-rater reliability was calculated in R using intra-class
correlations (the irr package), as detailed by Hallgren (2012), for absolute agreement between the two raters
on the academic essay writing skills and general critical thinking ability assessments (Ennis & Weir, 1985),
while the internal consistency reliability of the CCTDI was assessed with Cronbach's alpha (Facione & Facione, 1992).
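The study computed these intra-class correlations in R with the irr package; the sketch below is only a rough Python analogue using the pingouin library (an assumed alternative tool, with hypothetical ratings in long format), shown to make the two-rater setup concrete:

```python
# Illustrative sketch (assumed tooling; the study used R's irr package):
# intra-class correlation for two raters scoring the same essays.
# Ratings are hypothetical placeholders in long format: one row per essay per rater.
import pandas as pd
import pingouin as pg

data = pd.DataFrame({
    "essay": [1, 1, 2, 2, 3, 3, 4, 4],
    "rater": ["A", "B"] * 4,
    "score": [65, 70, 48, 52, 80, 78, 55, 60],
})

icc = pg.intraclass_corr(data=data, targets="essay", raters="rater", ratings="score")
print(icc[["Type", "ICC", "CI95%"]])  # inspect, e.g., the ICC2 row for absolute agreement
```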
The resulting ICC for the Academic Essay Writing Skills test instrument (locally developed by the
researcher) was 0.620 on the pretest and 0.837 on the posttest, the latter in the excellent range,
indicating that the coders had a high degree of agreement and suggesting that the essays were
rated similarly across coders. The high ICC suggests that a minimal amount of measurement error
was introduced by the independent coders and, therefore, that statistical power for subsequent
analyses was not substantially reduced. The Academic Essay Writing Skills instrument was,
therefore, deemed suitable (reliable) for use in the hypothesis tests of the present study. Its
content and construct validity were evaluated by the research supervisor and two other university
professors.
Moreover, the observed ICC between the two raters was 0.540 on the pretest and 0.788 on the
posttest for the Ennis-Weir Critical Thinking Essay Test instrument for critical thinking
ability (Ennis & Weir, 1985). Although the variable of interest contained a small amount of error
variance due to the subjective nature of the test and the ratings given by coders, the ratings
were judged to be adequate. This instrument (the E-W CTET) was, therefore, considered suitable
for use in the hypothesis tests of the present study (see Chapter 3 for details). The Ennis-Weir
essay test of critical thinking seemed to have content validity because content validity refers to
the ability of a test to capture a measure of the intended domain. In the case of the Ennis-Weir
Critical Thinking Essay Test, the specified domain is “critical thinking” as defined by Ennis and
Weir (1985) and a number of studies discussed above. Critical thinking, as defined by Ennis and
Weir (1985) and the APA Delphi study (Facione, 1990), is also a construct which integrates a number
of cognitive maneuvers known to be a component of this type of human reasoning process. These
maneuvers are included in the APA Delphi study report as embedded concepts. The test
instrument fully assessed the constructs of interest (i.e., the aspects/areas of critical thinking). The
test was construct valid since it suitably measured the trait or theoretical construct of critical
thinking skills that it was intended to measure according to refined theory in this study, and it
demonstrated strong correlations with other measures included and used in this study (for more
details, see correlations in the Results Chapter).
The use of the California Critical Thinking Dispositions Inventory (CCTDI) was also pilot-tested
by this researcher in the semester preceding the main research project to determine if this
instrument was reliable and valid for use in the main study. The internal consistency reliability of
the CCTDI was supported by Cronbach's alpha (as shown in Table 3.3). Cronbach's alphas for the
seven individual scales (i.e., truth-seeking, open-mindedness, analyticity, systematicity, critical
thinking self-confidence, inquisitiveness, and cognitive maturity) in the CCTDI pilot administration
ranged from .68 to .75 on the pretest (N = 90) and from .72 to .79 on the posttest (N = 90). The
Cronbach's alpha reliabilities for the overall instrument were .89 on the pretest and .92 on the
posttest. The alpha levels in these samples thus provide empirical support for the internal
consistency reliability of the CCTDI identified by the developers of the instrument (see Chapter 3,
the outcome measures, for details).
Table 3.3. Internal Consistency Reliability of the CCTDI
Scale | Pretest Mean | Pretest Cronbach's alpha | Posttest Mean | Posttest Cronbach's alpha
Truth-Seeking | 39.49 | .73 | 41.73 | .77
Open-Mindedness | 38.88 | .73 | 39.90 | .74
Analyticity | 46.16 | .69 | 46.89 | .79
Systematicity | 41.67 | .73 | 43.16 | .75
CT Self-confidence | 46.88 | .74 | 44.56 | .72
CT Inquisitiveness | 43.91 | .75 | 44.06 | .78
Cognitive Maturity | 43.77 | .68 | 43.29 | .78
CCTDI Total | 300.76 | .89 | 303.59 | .92
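As a minimal sketch of how such internal consistency coefficients are obtained (illustrative only; the response matrix below is hypothetical, and the alphas in Table 3.3 come from the study's own pilot data):

```python
# Illustrative sketch (not the study's actual computation): Cronbach's alpha
# for one CCTDI scale, computed from a respondents-by-items matrix of Likert scores.
import numpy as np

def cronbach_alpha(items):
    """items: 2-D array with shape (n_respondents, n_items)."""
    items = np.asarray(items, dtype=float)
    n_items = items.shape[1]
    item_variances = items.var(axis=0, ddof=1)        # variance of each item
    total_variance = items.sum(axis=1).var(ddof=1)    # variance of respondents' totals
    return (n_items / (n_items - 1)) * (1 - item_variances.sum() / total_variance)

responses = np.array([
    [3, 4, 3, 4],
    [2, 3, 2, 3],
    [5, 5, 4, 5],
    [1, 2, 2, 1],
    [4, 4, 5, 4],
])
print(round(cronbach_alpha(responses), 2))
```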
Content validity is based on claims that items are derived from the consensus description of
dispositions of critical thinking by the 46 experts involved in the Delphi Report. Claims of
predictive and construct validity have been questioned in a review by Callahan (1995), but she
concluded that the instrument is useful for certain purposes, if, for example, appropriate caution is
used to match items and research questions.
In sum, the key outcome of this pilot test is that both the instructional method and the
outcome measures used were judged suitable for further use in the main study.
Summary of the Research Design and Methodology
This chapter described the procedures for obtaining the research sample and selecting the
instruments. It also reported the research design and experimental procedures, as well as the
method of data analysis. A two-group quasi-experimental pretest/posttest design was employed
using two intact sections, one of which was randomly assigned to the experimental group, while
the other served as the control group. The two groups were used as the research sample of this
study. Three instruments were used as pre- and post-instruction measures. The experimental group
received instructional activities and assignments in critical thinking techniques, unlike the
control group, which received traditional instruction. These instructional techniques and critical thinking
classroom support materials were integrated into the academic EFL writing course and taught
explicitly to the experimental group. Both the experimental and control classes were taught by the
researcher. Descriptive statistics and independent-samples t-test were run for each instrument to
analyze data. Main effects and interactions were examined for significant differences, and scores
for each instrument were then correlated.
CHAPTER IV
THE RESULTS OF THE STUDY
4.0. Introduction
The purpose of this study was to assess empirically the effectiveness of integrating explicit
instruction in critical thinking techniques into an undergraduate EFL academic essay writing
course on student achievement in critical thinking skills, writing academic essays/papers, and
critical thinking dispositions. Specifically, it tested the hypothesis that current literature proposes
– “Students will be better able to demonstrate critical thinking and perform better academically
after having received specific critical thinking strategy training". The independent variable in this
study was the type of instruction: instructional techniques for critical thinking for the EG, and instruction that did not
include these for the CG. The three dependent variables were critical thinking skills, academic
essay writing performance, and students' dispositions toward critical thinking. The outcome
variables were also scores obtained on the following three outcome measures (or instruments):
Instructor/researcher developed argumentative essay writing test; the Ennis-Weir Critical
Thinking Essay Test, and the California Critical Thinking Dispositions Inventory. Moreover,
information about the relationships among these measures was computed.
This chapter reports the results of the study as they relate to the research questions and hypotheses
formulated. A description of the sample, followed by an overview of the data analysis procedures
is provided in the study. Then results from each of the three instruments are presented in turn.
Statistical analyses were run with the help of SPSS software, version 16.0 (Statistical Analysis
System, 2007).
This chapter concludes with a summary of the results on the instruments for the students in
the experimental and control groups.
4.1 Description of Sample
The means and standard deviations for the sample on each of the three pretest and posttest
instruments are presented in Table 4.1. They are presented for the total number of students who
completed all aspects of the course and the study assessments (N = 84): Experimental = 43 and
Control = 41, these students constituting the research participants. An examination of descriptive statistics and
visual analysis (see the histograms for each instrument) indicated that the distributions of sample
scores were approximately normal, with bell-shaped and reasonably symmetrical distributions
in both the experimental and control groups.
Table 4.1: Distribution of Mean Pretest and Posttest Scores by Group and Research
Instruments
Instrument | Experimental (n = 43) Pretest M (SD) | Experimental Posttest M (SD) | Control (n = 41) Pretest M (SD) | Control Posttest M (SD)
AWST | 46.27 (11.68) | 70.45 (6.87) | 45.84 (8.28) | 57.93 (10.08)
Ennis-Weir CTET | 41.62 (6.18) | 68.16 (7.78) | 42.81 (6.90) | 52.48 (6.66)
CCTDI (Total) | 283.58 (31.06) | 302.30 (23.56) | 283.46 (25.44) | 286.22 (14.93)
The results of the descriptive statistics (as illustrated in Table 4.1) show that the mean pretest scores of the
experimental group closely resemble the mean pretest scores observed in the control group,
indicating that the two groups had similar backgrounds before the experimental intervention.
Compared with the pretest, however, a very different picture emerged at the posttest between the
experimental and control groups. The experimental group had higher pretest-to-posttest gains on
all instruments than the control group, which made smaller gains from pretest to posttest.
4.2. Analysis of Data and Interpretation
4.2.1. Effect of Explicit Instruction in Critical Thinking on Student Achievement in
Writing Academic Papers /Essays
Research question and related Hypothesis:
The following research question and corresponding null hypothesis were addressed in this section
RQ1.
Will a group of undergraduate EFL academic writing students who receive explicit
instruction in critical thinking techniques perform better on a test that requires them to interpret,
analyze, and write an argumentative paper on a set of different topics than a group of students not
receiving explicit instruction in critical thinking techniques?
The purpose of this question was to examine if students who were exposed to explicit instruction
in critical thinking techniques perform better on a test that requires them to analyze, interpret and
write academic essays than a group of students not receiving explicit instruction in critical
thinking. Specifically, it tests the following null hypothesis:
Research Hypothesis:
H01: There will be no significant difference in the mean posttest Academic Writing Performance
scores as measured by an argumentative essay writing test between students who received explicit
instruction in critical thinking techniques and students who did not receive training in critical
thinking techniques.
4.2.1.1 Descriptive Statistics
Descriptive Statistics (Table 4.2) shows the distribution of Mean Pretest and Posttest scores for
academic writing skills test between the experimental and control groups. Descriptive statistics,
including Visual analyses (Fig.5.1 and Fig.5.2), suggested that scores were reasonably
symmetrical. The skew and kurtosis, which are -2.44 and 0.596 at the pretest, and -3.09 and 0.63
at the posttest respectively for these scores is within the normal bounds of normality (+1 to -1/+2
to -2) in both experimental and control groups. The less peaked, bell-shaped curve toward
experimental group sample scores were more positively skewed than the control group
distribution indicating that scores were clustered to the left at the low values, but with an
acceptable degree(Skewness = -2.44 and -3.09). Kurtosis values were less than 1(0.596 and 0.63)
in both pretest and posttests in each group. These samples can be considered normally distributed.
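For readers who wish to reproduce such normality checks, a minimal sketch (illustrative only; the scores below are hypothetical placeholders, and the study obtained its values from SPSS) is:

```python
# Illustrative sketch (assumed; not the study's SPSS output): checking approximate
# normality through the skewness and excess kurtosis of a score distribution.
from scipy import stats
import numpy as np

scores = np.array([46, 52, 61, 58, 70, 65, 55, 60, 67, 72, 49, 63])

skewness = stats.skew(scores)
kurt = stats.kurtosis(scores)   # excess kurtosis (a normal distribution gives 0)

# A common rule of thumb treats values between -2 and +2 as acceptably normal.
print(round(skewness, 2), round(kurt, 2))
```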
Table 4.2. Distribution of Mean Pretest and Posttest scores on Academic Essay Writing Skills
Test
Instrument | Group | N | Pre-test Mean | Pre-test SD | Post-test Mean | Post-test SD
AWST | Experimental | 43 | 46.2651 | 11.68295 | 70.4440 | 6.86820
AWST | Control | 41 | 45.8417 | 8.27809 | 57.9268 | 10.08024
The results of the descriptive statistics (as illustrated in Table 4.2) show that the mean pretest score of the
experimental group closely resembles the mean pretest score observed in the control
group (Experimental: M = 46.2651, Control: M = 45.8417), but compared with the pretest a very
different picture emerged at the posttest. Posttest scores were significantly higher in the
experimental group than in the control group (EG: M = 70.4440, CG: M = 57.9268). Scores increased
from pretest to posttest in both groups. The larger gain, however, was
in the experimental group. Although the control group improved its scores
and made a remarkable change from pretest to posttest, the change was not as substantial as that of the
experimental group.
Fig.4.1. Distribution of mean pretest academic writing skills test total scores between
experimental and control groups.
Fig.4.2. Distribution of mean posttest academic writing skills test total scores between
experimental and control groups.
4.2.1.2 Independent-Samples T-Test
An independent-samples t-test was used to determine whether group means in
posttest performance differed significantly between the experimental and control groups as a result of the instructional
techniques. The t-values were examined for significance at an alpha level of 0.05 (2-tailed). Thus, the mean posttest scores (Table 4.4) on the academic essay writing skills test of
the experimental and control groups are statistically summarized following the hypothesis:
1.1.2 Hypothesis Related to the Posttest of the Experiment
H01: There will be no significant difference in the posttest between the experimental and
control groups with respect to performance in EFL academic essay writing.
Table 4.4: Analysis of the Independent Samples Test for achievement in academic essay writing
skills test at the end of the experiment between experimental and control groups
Posttest
Group | N | Mean | Std. Dev. | SE Mean | t-value | df | P (Sig. 2-tailed)
E | 43 | 70.4440 | 6.86820 | 1.04739 | 6.679 | 82 | .000*
C | 41 | 57.9268 | 10.08024 | 1.57427 | | |
*Significant at p < 0.05 (2-tailed)
An independent-samples t-test comparing the mean scores of the experimental (E) and control (C)
groups found a significant difference between the means of the two groups, t(82) = 6.679, p <
.05 (2-tailed). The mean of the experimental group was significantly higher (M = 70.4440,
SD = 6.86820) than the mean of the control group (M = 57.9268, SD = 10.08024). The effect size
of the difference in outcome on the academic writing skills test was calculated at Cohen's d
= 1.46, indicating a very large effect size.
4.2.2 Effect of Explicit Instruction in Critical Thinking on Student Achievement in General
Critical Thinking Ability
Research question and related Hypothesis
The following research question and related null hypotheses were addressed in this section
RQ2.
Will a group of undergraduate EFL academic writing students who receive explicit
instruction in critical thinking techniques perform better on a general critical thinking ability test, as
measured by the Ennis-Weir Critical Thinking Essay Test, than students who do not receive explicit
instruction in critical thinking techniques?
The purpose of this question was to examine whether students who were exposed to explicit instruction
in CT performed better on a critical thinking ability test than a group of EFL undergraduate students
who did not receive explicit instruction in critical thinking.
critical thinking ability, not writing ability. One should focus on the quality of thinking in the
written responses, rather than on mode of expression. One should understand what the examinee
has written and whether it does or does not satisfy the criteria (provided in the rubric by Ennis &
Weir, 1985; Ennis, 2005). Thus, the purpose of this question was to examine the quality of
general critical thinking ability in the students‟ written responses in some major areas of critical
thinking competence that the test covers. Specifically, it tests the following null hypothesis:
Research Hypothesis:
H02: There will be no significant difference in the mean posttest critical thinking skills scores as
measured by Ennis-Weir Critical Thinking Essay Test between students who received explicit
instruction in critical thinking techniques and similar students who did not receive training in the
same way (CTT).
4.2.2.1 Descriptive Statistics:
Descriptive Statistics (Table 4.5) shows the distributions of Mean Pretest and Posttest scores on
the Ennis-Weir Critical Thinking Essay Test in the experimental and control groups. Descriptive
statistics, including visual analyses (Figures 4.3 and 4.4), suggested that scores were normally
distributed, with bell-shaped and reasonably symmetrical distributions in both the
experimental and control groups. Scores were negatively skewed for both groups
at the pretest, indicating a clustering of scores at the high end (the right-hand side of the graph). The
control group distribution was more positively skewed than the experimental group's at
the posttest, indicating that its scores clustered at the low values. A more peaked
distribution was also observed for pretest scores than for posttest scores. Kurtosis values were
computed to be 0.94 and -2.38, which is within the range of +2 to -2 accepted by the authors
as a more stringent criterion when normality is critical. Skewness values (-1.42 and
0.095) were also within the accepted range of normality.
Table 4.5. Distribution of Mean Pretest and Posttest Scores on students’ abilities to Think
Critically on Ennis-Weir Critical Thinking Essay Test.
Instrument | Group | N | Pre-test Mean | Pre-test SD | Post-test Mean | Post-test SD
E-W CTET | Experimental | 43 | 41.6174 | 6.1774 | 68.1633 | 7.7762
E-W CTET | Control | 41 | 42.8076 | 6.8952 | 52.4798 | 6.6573
The results (Table 4.5) indicate that there was little difference in the mean pretest scores of the
experimental and control groups. The control group had a slightly higher mean score at the pretest
(M = 42.8076) than the experimental group (M = 41.6174). This difference, however,
was small, indicating that the two groups were essentially equal before the treatment. Although the mean
gains from pretest to posttest were large in both the experimental and control groups, the mean
posttest scores on the Ennis-Weir were considerably higher for the experimental students (a mean
score of 68.1633) than for the control students (a mean score of 52.4798). This difference between
the two groups was large, indicating that the experimental group performed better on this
instrument than the control group after the experiment.
Fig. 4.3. Distribution of mean pretest Critical Thinking Ability Test total score in Ennis-Weir
CTET between experimental and control groups.
Fig.4.4. Distribution of mean posttest critical thinking ability test total scores in the Ennis-Weir
Critical Thinking Essay Test between Experimental and control groups.
4.2.2.2 Independent Samples T-Test
An independent-samples t-test was used to help determine whether group means differed significantly
in posttest performance as a result of the method of instruction (CTT). The t-values
were examined for significance at alpha = 0.05 (2-tailed). Thus, the mean posttest scores (Table 4.6)
on the Ennis-Weir Critical Thinking Essay Test of the experimental and control groups are
statistically summarized following the hypothesis:
2.1.2 Hypothesis Related to the Posttest of the Experiment
H02: There will be no significant difference in the mean posttest critical thinking
essay test scores between the experimental and control groups.
Table 4.6. Analysis of the Independent Samples Test for achievement in critical thinking ability
test between Experimental and Control groups at the posttest of the experiment
Posttest
Group | N | Mean | Std. Dev. | SE Mean | t-value | df | P (Sig. 2-tailed)
E | 43 | 68.1633 | 7.77621 | 1.18586 | 9.908 | 82 | .000*
C | 41 | 52.4798 | 6.65730 | 1.03970 | | |
*Significant at p < 0.05 (2-tailed)
An independent-samples t-test (as illustrated in Table 4.6) revealed a significant difference
between the experimental and control groups, t(82) = 9.908, p < 0.05 (2-tailed). The mean
posttest score of the experimental group was significantly higher (M = 68.1633, SD = 7.77621) than the
mean posttest score of the control group (M = 52.4798, SD = 6.65730). The effect size of the
difference in outcome on the Ennis-Weir Critical Thinking Essay Test was calculated at Cohen‟s
d = 2.16, indicating a very large effect that was statistically significant and also likely to be of
practical significance.
4.2.3. Effect of Explicit Instruction in Critical Thinking on Component Critical Thinking
performance and strength in the Ennis-Weir
Research Question and Related Hypothesis
The following research question and related null hypotheses were addressed in this section
RQ3: Will there be a significant difference in students‟ performance and strengths in different
components of critical thinking between the experimental group students who received training in
critical thinking techniques and the control group students who did not receive such training in
critical thinking in the Ennis-Weir Critical Thinking Essay Test?
The purpose of this question was to explore whether integrating explicit instruction in critical thinking
has a significant and predictable impact on component critical thinking performance, to
provide insight regarding the strength of critical thinking skills on the Ennis-Weir for students who
received training in the instructional technique, and to identify areas of strength and/or
weakness in the groups' scale scores between the experimental and control groups. Specifically,
it tests the following null hypotheses:
Research Hypotheses
H03: There will be no significant difference in students‟ component critical thinking
performance and strength in critical thinking skills in Ennis-Weir CTET between experimental
and control groups.
H031: There will be no significant difference at the posttest of the experiment in
students‟ critical thinking performance in different components of CT in Ennis-Weir
CTET between Experimental and Control groups.
H032: There will be no difference in the students‟ strengths in CT skills in different
components of CT in Ennis-Weir CTET between the experimental and control groups.
4.2.3.1 Descriptive Statistics
The descriptive statistics (Table 4.7) and visual displays (Figs. 4.5 & 4.6) provide information about the distribution of mean pretest and posttest scores on component critical thinking performance skills (evaluated by assessing the entire student responses on the Ennis-Weir). Inspection of the statistical data discussed in the table below and the shape of the histograms suggested that the scores on each of the variables were normally distributed (i.e., they follow the shape of the normal curve). The pretest scores of the experimental and control groups were reasonably normally distributed, with most scores occurring in the center and tapering out towards the two extremes (Fig. 4.5). The skew and kurtosis values, which are -3.20 and 0.534 respectively for these data, are within the accepted range (+1 to -1 / +2 to -2) of normality. Scores on the posttests, however, were negatively skewed (Fig. 4.6), indicating a clustering of scores at the high end (the right-hand side of the histogram). The kurtosis values were less peaked in the experimental group than in the control group. Kurtosis was computed to be less than 1 (kurtosis = 0.534), indicating that the distribution appeared roughly symmetrical, following the shape of the normal curve and within the accepted range of normal bounds in both the experimental and control groups.
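The normality screening just described can be sketched in a few lines of code. The example below is illustrative only and is not the study's own procedure: the score array is a simulated placeholder, and the cut-offs are the lenient bounds cited in the text (roughly within +1 to -1 for skewness and +2 to -2 for kurtosis).

import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
scores = rng.normal(43.0, 7.0, 84)          # placeholder for 84 pretest scores

skewness = stats.skew(scores)
excess_kurtosis = stats.kurtosis(scores)    # Fisher definition: 0 for a normal curve

print(f"skew = {skewness:.3f}, kurtosis = {excess_kurtosis:.3f}")
print("within normal bounds:", abs(skewness) <= 1 and abs(excess_kurtosis) <= 2)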
Table 4.7. Distribution of the mean pretest and posttest scores (both scale and overall) in five components of critical thinking performance in the Ennis-Weir Critical Thinking Essay Test.

                                          Pretest                Posttest
CTS                      Group   N     Mean      SD           Mean      SD
Interpretation           E       43    41.4419   7.99197      47.5349   11.90508
                         C       41    42.7317   6.20896      44.6585   6.75503
Analysis                 E       43    43.1163   8.75380      53.3953   11.51839
                         C       41    42.8780   6.97207      45.2927   8.74712
Evaluation               E       43    44.5814   6.19997      54.3023   10.70499
                         C       41    43.6341   6.72591      45.6829   6.19047
Inference                E       43    43.7674   5.36229      46.5814   9.22531
                         C       41    43.9756   4.96230      45.2927   7.32203
Explanation              E       43    42.2326   6.05466      51.9535   9.02629
                         C       41    43.4878   5.90391      43.9268   6.69847
Mean CTST Overall Score  E       43    43.03     8.81599      50.70     6.55604
                         C       41    43.36     6.21989      44.97     5.41234
The descriptive statistics (Table 4.7) also show that pretest scores in the experimental and control groups were similar: the mean pretest scores of the experimental group (both scale and overall) closely resembled the mean pretest scores observed in the control group (both scale and overall). While not making as much of a gain as the experimental group, the control group also showed a considerable improvement from pretest to posttest; there was an increase in students' gains from pretest to posttest in both groups (both scale and overall scores). Compared with the mean pretest scores, a very different picture emerged at the posttest: posttest scores were significantly higher in the experimental group than in the control group (both scale and overall scores).
Figure 4.5. Distribution of mean pretest overall scores on component critical thinking performance on the Ennis-Weir Critical Thinking Essay Test.
Figure 4.6. Distribution of mean posttest total scores on component critical thinking performance in the Ennis-Weir Critical Thinking Essay Test.
4.2.3.2 Independent Samples T-Test
Following the descriptive statistics, an independent-samples t-test was used to help determine whether the group means differed significantly in posttest performance as a result of the method of instruction (CTT). The t-values were examined for significance at alpha = 0.05 (2-tailed). The mean posttest CTST scores of the experimental and control groups (Table 4.8) are summarized below, following the null hypothesis:
3.1.2 Hypothesis Related to the Posttest of the Experiment
H031: There will be no significant differences in component critical thinking
performance at the posttest of the experiment between the experimental and control
groups.
Table 4.8. Analysis of Independent Samples t-test for achievement in component critical thinking skills (both scale and overall) between Experimental and Control Groups at the Posttest of the experiment

Skill/Attribute    Group   N    Mean      Std. Dev.   SE Mean    t-value   df   P (sig. 2-tailed)
Interpretation     E       43   47.5349   11.90508    1.81551    1.353     82   .180
                   C       41   44.6585   6.75503     1.05496
Analysis           E       43   53.3953   11.51839    1.75654    3.618     82   .001*
                   C       41   45.2927   8.74712     1.36607
Evaluation         E       43   54.3023   10.70499    1.63250    4.489     82   .000*
                   C       41   45.6829   6.19047     .96679
Inference          E       43   46.5814   9.22531     1.40685    .707      82   .482
                   C       41   45.2927   7.32203     1.14351
Explanation        E       43   51.9535   9.02629     1.37650    4.610     82   .000*
                   C       41   43.9268   6.69847     1.04613
CTST Total Mean    E       43   50.70     6.55604     4.99893    5.048     82   .000*
                   C       41   44.97     5.41234     2.50038
*Significant at p < 0.05 (2-tailed)
t-value
An independent-samples t-test (Table 4.8) indicated that the mean CTST total scores of the experimental and control groups differed significantly, t(82) = 5.048, p < 0.05 (2-tailed). The total CTST mean of the experimental group (M = 50.70, SD = 6.55604) was significantly larger than the total CTST mean of the control group (M = 44.97, SD = 5.41234). The effect size of the difference in outcome on component critical thinking performance (overall score) on the Ennis-Weir was calculated at Cohen's d = 0.95, indicating a large effect that was statistically significant.
Independent-samples t-tests were also calculated comparing the mean scores of the experimental group to the mean scores of the control group on the component critical thinking skills. The results indicated that the analysis, evaluation, and explanation critical thinking skills differed significantly between the experimental and control groups: Analysis t(82) = 3.618, p < 0.05; Evaluation t(82) = 4.489, p < 0.05; Explanation t(82) = 4.610, p < .05 (2-tailed). The mean posttest scores of the control group were significantly lower in Analysis (M = 45.2927, SD = 8.74712), Evaluation (M = 45.6829, SD = 6.19047), and Explanation (M = 43.9268, SD = 6.69847) than the mean posttest scores of the experimental group in these CT skill areas (Analysis: M = 53.3953, SD = 11.51839; Evaluation: M = 54.3023, SD = 10.70499; Explanation: M = 51.9535, SD = 9.02629). However, no statistically significant differences were observed between the groups in the Interpretation, t(82) = 1.353, p > 0.05, and Inference, t(82) = .707, p > 0.05 (2-tailed), CT skill areas. The means of the experimental group in Interpretation (M = 47.5349, SD = 11.90508) and Inference (M = 46.5814, SD = 9.22531) were not significantly different from the means of the control group in Interpretation (M = 44.6585, SD = 6.75503) and Inference (M = 45.2927, SD = 7.32203) critical thinking skills.
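The scale-by-scale comparisons in Table 4.8 can be illustrated with a short hypothetical computation. The sketch below is not the original analysis output: the scores are simulated from the reported posttest means and standard deviations, and every name in it is an assumption introduced for illustration only. It runs one independent-samples t-test per component skill and reports Cohen's d alongside each.

import numpy as np
import pandas as pd
from scipy import stats

rng = np.random.default_rng(2)
# (experimental mean, SD), (control mean, SD) taken from Table 4.8; the raw
# scores below are simulated placeholders, not the study's actual data.
skills = {
    "Interpretation": ((47.53, 11.91), (44.66, 6.76)),
    "Analysis":       ((53.40, 11.52), (45.29, 8.75)),
    "Evaluation":     ((54.30, 10.70), (45.68, 6.19)),
    "Inference":      ((46.58, 9.23),  (45.29, 7.32)),
    "Explanation":    ((51.95, 9.03),  (43.93, 6.70)),
}

rows = []
for skill, ((m_e, sd_e), (m_c, sd_c)) in skills.items():
    exp = rng.normal(m_e, sd_e, 43)   # 43 simulated experimental scores
    ctl = rng.normal(m_c, sd_c, 41)   # 41 simulated control scores
    t, p = stats.ttest_ind(exp, ctl)
    # pooled SD with n1 = 43, n2 = 41 (df = 82)
    pooled = np.sqrt((42 * exp.std(ddof=1) ** 2 + 40 * ctl.std(ddof=1) ** 2) / 82)
    rows.append({"skill": skill, "t(82)": round(t, 3), "p": round(p, 3),
                 "d": round((exp.mean() - ctl.mean()) / pooled, 2)})

print(pd.DataFrame(rows).to_string(index=False))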
3.1.3 Hypothesis Related to the strengths/weaknesses in CTS in the student
test-takers at the Posttest of the Experiment
H032: There will be no difference in the students' strengths/weaknesses of critical
thinking skills at the end of the training between the experimental and control groups,
as measured by the five recommended performance assessment levels identified by the
authors of the California Critical Thinking Skills Test (CCTST).
Table 4.10. Analysis of the frequency distribution of student strengths/weaknesses in CTST scale scores between the experimental (N=43) and control (N=41) groups at the posttest of the experiment.

Skill/Attribute   Grp   N    Below 50%   50-62%            63-69%    70-78%      79-85%    86-100%
(N (%))                                  (Not manifested)  (Weak)    (Moderate)  (Strong)  (Superior)
Interpretation    E     43   31(72)      6(13.9)           2(4.7)    4(9.3)      ------    ------
                  C     41   32(78)      9(21.9)           ------    ------      ------    ------
Analysis          E     43   17(39.3)    17(39.6)          4(9.4)    5(11.6)     ------    ------
                  C     41   26(63.4)    15(36.5)          ------    ------      ------    ------
Evaluation        E     43   14(32.3)    21(48.7)          4(9.4)    3(6.9)      1(2.3)    ------
                  C     41   30(73.2)    11(26.8)          ------    ------      ------    ------
Inference         E     43   26(60.4)    17(39.6)          ------    ------      ------    ------
                  C     41   36(87.9)    5(12.1)           ------    ------      ------    ------
Explanation       E     43   20(46.5)    18(42)            4(9.2)    1(2.3)      ------    ------
                  C     41   33(80.4)    8(19.4)           ------    ------      ------    ------
Table 4.10 shows how many of the experimental (N=43) and control (N=41) group students fall within each of the five recommended performance assessment levels identified by the California Critical Thinking Skills Test writers (see chap. 4). Looking at the minimum and maximum scores within each skill evaluated, one can see that the experimental group students show more strength in four of the five reported scale areas, namely interpretation 4(9.3%), analysis 5(11.6%), evaluation 3(6.9%), and explanation 1(2.3%), than the control group students, whose scores on each of these scales indicate that the CT skill being measured was not manifested (very weak). The recommended performance assessment level for these CTS scale scores is moderate (70 to 78%) for 13(30.1%) of the experimental group students. While difficulties remain for students with the inference CT skill, whose scores fell between 50 and 62 and were not manifested (very weak) for almost all of the students in the experimental group, the interpretation, analysis, evaluation, and explanation scores, according to the assessment levels identified, fall between 70 and 78 and are generally in the average range. There is also one individual test-taker, 1(2.3%) of the experimental group (N=43), who shows strength (81%) in evaluation; this score falls between the scores (79 and 85) that the developers of the instrument have identified as strong in critical thinking skills.
All test-takers (100%) in the control group (n=41) had scores that were very low (below 50%) or not manifested (between 50 and 62), indicating that this group was not able to manifest these skills as a result of the traditional mode of instruction. While a smaller number of test-takers in the experimental group, 13(30.1%), scored in the Moderate (70 to 78) range, and 1(2.3%) scored in the Strong (79 to 85) range, a larger number of test-takers (67.6%) in the experimental group (n=43) had scores that were very low (below 50%), not manifested (between 50 and 62%), or weak (between 63 and 69%), indicating that the majority of individuals in the experimental group were still not able to manifest these skills (or to solve their reasoning problems) even after a semester of instruction in CT techniques. Thus, by reference to the frequency distributions of CTS scores (Table 4.10), one can infer from the scores of 70 to 78% and 79 to 85% that the smaller number of students, 14(32.4%) of this group (n=43), are in the Moderate and Strong ranges, indicating that these students were able to solve their reasoning problems, or can make judgments derived from quantitative reasoning in a variety of contexts.
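A brief, hypothetical sketch of the banding used in Table 4.10 is given below. It is illustrative only: the band edges follow the levels described in the text above (below 50% very low; 50-62% not manifested; 63-69% weak; 70-78% moderate; 79-85% strong; 86-100% superior), and the score list is an invented placeholder rather than actual student data.

import pandas as pd

# Band edges and labels as described in the text above (assumed for this
# sketch, not quoted from the CCTST manual itself).
bins = [0, 49, 62, 69, 78, 85, 100]
labels = ["Very low", "Not manifested", "Weak", "Moderate", "Strong", "Superior"]

scores = pd.Series([45, 55, 61, 64, 71, 81, 77, 52])   # hypothetical scale scores
bands = pd.cut(scores, bins=bins, labels=labels, include_lowest=True)

# Frequency and percentage per band, in the same form as Table 4.10
freq = bands.value_counts().reindex(labels, fill_value=0)
print(pd.concat({"N": freq, "%": (100 * freq / len(scores)).round(1)}, axis=1))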
4.2.4 Effect of Explicit Instruction in Critical Thinking on Student Achievement in
Dispositions toward Critical Thinking
Research Question and Related Hypothesis
The following research question and related hypotheses were addressed in this section
RQ4. Will a group of undergraduate academic writing students who receive explicit instruction
in critical thinking techniques differ in their dispositions toward the use of critical thinking skills
from a group of students not receiving explicit instruction in critical thinking techniques?
The purpose of this question was to determine whether students who received explicit instruction in critical thinking techniques differed after the experiment in their attitudes or dispositions toward using their critical thinking abilities from students who did not receive similar instruction. The California Critical Thinking Dispositions Inventory (CCTDI), an instrument that includes a total of 75 items under 7 sub-scales (truth-seeking, open-mindedness, analyticity, systematicity, CT self-confidence, CT inquisitiveness, and cognitive maturity), was used to measure how strongly or weakly the groups were disposed toward using critical thinking to perform activities that challenge them. Specifically, this question tests the following null hypothesis:
Research Hypothesis:
H04: There will be no significant difference in the mean Critical Thinking Disposition (both
scale and overall) scores as measured by CCTDI between students who received explicit
instruction in critical thinking instructional techniques and students who did not receive similar
training in critical thinking techniques (CTT).
4.2.4.1 Descriptive Statistics
The descriptive statistics (Table 4.11) and visual displays (Figs. 4.6 & 4.7) provided information about the distribution of mean pretest and posttest scores for the dispositional aspects of critical thinking (as measured by the CCTDI). Inspection of the statistical data discussed in the table below and the shape of the histograms suggested that the distributions of overall scores took the form of a symmetric bell-shaped curve with the greatest frequency of scores in the middle. The pretest scores were reasonably normally distributed, with most scores occurring in the center but tapering slightly towards the right (see Fig. 4.6); the scores are slightly negatively skewed (most cases lie somewhat to the right) in both groups, but acceptably so. The skew and kurtosis values, which are -3.20 and 0.534 respectively for these data, are within the accepted range (+1 to -1 / +2 to -2) of normality. Scores on the posttest, however, were more negatively skewed than scores on the pretest (Fig. 4.7), indicating a clustering of scores at the high end (the right-hand side of the histograms), but not far from normal. The kurtosis values were less peaked in the experimental group than in the control group. Kurtosis was computed to be less than 1 (kurtosis = 0.534), indicating that the distribution appeared roughly symmetrical, following the shape of the normal curve and within the accepted range of normal bounds in both the experimental and control groups.
Table 4.11. Distribution of Mean Pretest and Posttest Scores on Dispositions toward using critical thinking Skills.

                                       Pretest Scores        Posttest Scores
Instrument       Group         N     Mean      SD          Mean      SD
CCTDI (Total)    Experimental  43    283.58    31.06079    302.30    23.5549
                 Control       41    283.46    25.44219    286.22    14.9306
The descriptive statistics (Table 4.11) also show that pretest scores in the experimental and control groups were similar (with only a 0.12 mean difference): the mean pretest score of the experimental group (M=283.58) closely resembled that of the control group (M=283.46). Both groups scored higher at the posttest than at the pretest. While not making as much of a gain as the experimental group, the control group also showed a considerable improvement from pretest to posttest (Experimental: pretest 283.58 to posttest 302.30; Control: pretest 283.46 to posttest 286.22). There was an increase in students' gains from pretest to posttest in both groups.
Fig. 4.6. Distribution of mean pretest CTD total scores between experimental and control
groups
Fig.4.7. Distribution of mean posttest CTD overall scores between experimental and control
groups
4.2.4.2 Independent-Samples T-Test
An independent-samples t-test was used to compare the means of the two independent samples to help determine whether the group means differed significantly in posttest performance as a result of the method of instruction (CTT). The t-values were examined for significance at alpha = 0.05 (2-tailed). The mean posttest scores on the CCTDI scales of the experimental and control groups (Table 4.12) are summarized as follows:
4.1.2 Hypothesis Related to the Posttest of the Experiment
H041: There will be no significant difference in the mean critical thinking dispositions
(both overall and scale) scores at the posttest between the experimental and control
groups.
Table 4.12. Analysis of Independent-samples t-test for achievement in dispositions toward using critical thinking between Experimental and Control Groups at the Posttest of the experiment.

Sub-scales            Group   N    Mean      Std. Dev.   SE Mean   t-value   df   P (sig. 2-tailed)
Truth-seeking         E       43   44.3721   7.0373      1.0731    1.331     82   .187
                      C       41   42.5122   5.6529      .8828
Open-mindedness       E       43   42.2326   4.7651      .7266     1.375     82   .173
                      C       41   40.8293   4.5820      .7155
Analyticity           E       43   42.3488   5.4636      .8332     2.974     82   .004*
                      C       41   38.9268   5.0615      .7904
Systematicity         E       43   48.2791   4.4898      .6846     4.638     82   .000*
                      C       41   43.7317   4.4945      .7019
CT Self-confidence    E       43   41.1395   2.1776      .6846     4.298     82   .000*
                      C       41   37.6341   4.8618      .7019
CT Inquisitiveness    E       43   45.1628   5.0986      .7775     3.589     82   .001*
                      C       41   41.2439   4.8978      .7649
Cognitive Maturity    E       43   41.0930   4.6486      .7089     .275      82   .784
                      C       41   41.3659   4.4483      .6947
CCTDI Total Score     E       43   302.30    23.5549     3.5920    3.717     82   .000*
                      C       41   286.22    14.9306     2.3317
*Significant at p < 0.05 (2-tailed)
t-value
An independent-samples t-test (Table 4.12) was calculated comparing the total CCTDI mean of the experimental group to the total CCTDI mean of the control group. A statistically significant difference was found between the two groups, t(82) = 3.717, p < .05 (2-tailed). The total CCTDI mean of the experimental group (M = 302.30, SD = 23.5549) was significantly larger than the total CCTDI mean of the control group (M = 286.22, SD = 14.9306). These total CCTDI means fall between the total CTD scores that the developers of the instrument identify as a positive overall disposition toward critical thinking (> 280) and a strong overall disposition toward CT (350 or over). The effect size of the difference in outcome on the CCTDI total score was calculated at Cohen's d = 0.81, indicating a large effect that was statistically significant.
The mean posttest scores on some individual scales of CTD (for example, truth-seeking, open-mindedness, and cognitive maturity), however, did not show a significant difference (p > 0.05, 2-tailed), while the mean posttest scores on most individual scales of CTD (for instance, Analyticity, Systematicity, CT Self-confidence, and CT Inquisitiveness) were found to differ significantly. The mean posttest scores of the experimental group were significantly higher on Analyticity (M = 42.3488, SD = 5.46366), Systematicity (M = 48.2791, SD = 4.48981), CT Self-Confidence (M = 41.1395, SD = 2.17761), and CT Inquisitiveness (M = 45.1628, SD = 5.09869) than the mean posttest scores of the control group on Analyticity (M = 38.9268, SD = 5.06157), Systematicity (M = 43.7317, SD = 4.49458), CT Self-Confidence (M = 37.6341, SD = 4.86187), and CT Inquisitiveness (M = 41.2439, SD = 4.89786). On these four constructs, the p-value (2-tailed) is less than the alpha of 0.05 (i.e., p < 0.05).
4.2.5. Correlations
RQ5: Will there be statistically significant positive correlations among achievements in
Academic Writing Skills, Critical Thinking Ability, and Dispositions toward Critical Thinking?
H05: There will be no significant positive correlations among achievements in Academic Writing
Skills, Critical Thinking Ability, and Dispositions toward Critical Thinking.
5. Relationships among achievement in Academic Writing Skills, Critical Thinking Ability, and
Dispositions toward Critical Thinking.
Table 4.13. Correlation Matrix for Outcome Variables

                                    AWST      Ennis-Weir   CCTDI
AWST        Pearson Correlation     1         .404**       .225
            Sig. (2-tailed)                   .007         .147
            N                       43        43           43
Ennis-Weir  Pearson Correlation     .404**    1            .118
            Sig. (2-tailed)         .007                   .449
            N                       43        43           43
CCTDI       Pearson Correlation     .225      .118         1
            Sig. (2-tailed)         .147      .449
            N                       43        43           43
**. Correlation is significant at the 0.01 level (2-tailed).
The relationships among perceived critical thinking scores (as measured by the Ennis-Weir Critical Thinking Essay Test), academic essay writing skills scores (as measured by the instructor-developed written argumentation test), and critical thinking disposition scores (as measured by the CCTDI) were investigated using the Pearson product-moment correlation coefficient. A correlation matrix showing the relationships among the posttest scores on each of the three instruments is displayed in Table 4.13. Each instrument showed a positive correlation with the other two instruments, although the strength of those relationships varied. Scores on the Ennis-Weir Critical Thinking Essay Test and the Academic Writing Skills exam showed a moderate correlation, Pearson r = .404, n = 43, p < .05 (.007 < .01); that is, high scores on the Ennis-Weir Critical Thinking ability test were associated with high scores on the Academic Essay Writing skills test. Correlations between the other instruments were small, according to the guidelines for the value of the correlation coefficient suggested by Cohen (1988, pp. 79-81): small r = .10 to .29; medium r = .30 to .49; large r = .50 to 1.0 (see Chapter 3, the methodology section, for a detailed discussion).
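The correlation analysis reported in Table 4.13 can be illustrated with a short hypothetical computation. The sketch below is not the study's own analysis: the three score columns are simulated placeholders for the n = 43 posttest scores, and the column names are assumptions introduced only for illustration. It computes the Pearson product-moment correlation and the 2-tailed significance for each pair of instruments.

import numpy as np
import pandas as pd
from scipy import stats

rng = np.random.default_rng(3)
# Simulated stand-ins for the 43 posttest scores on each instrument
df = pd.DataFrame({
    "AWST":       rng.normal(70.4, 8.0, 43),
    "Ennis_Weir": rng.normal(68.2, 7.8, 43),
    "CCTDI":      rng.normal(302.3, 23.6, 43),
})

for a, b in [("AWST", "Ennis_Weir"), ("AWST", "CCTDI"), ("Ennis_Weir", "CCTDI")]:
    r, p = stats.pearsonr(df[a], df[b])   # Pearson r and 2-tailed p-value
    print(f"{a} vs {b}: r = {r:.3f}, p = {p:.3f} (n = {len(df)})")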
Summary of Results
This chapter described statistical results for three research instruments: (1) the Researcher/Instructor Developed Academic EFL Writing Test, (2) the Ennis-Weir Critical Thinking Essay Test, and (3) the California Critical Thinking Dispositions Inventory (CCTDI) (see Appendices F, I, and J). Statistically significant differences were found between the experimental and control groups on posttest scores on the Academic Essay Writing Test and the Ennis-Weir (general critical thinking performance assessment). Yet two different results were found with the component critical thinking performance assessment: while statistically significant differences were reported in the five-skills overall scores, no significant differences were found in some individual scale scores between the experimental and control groups. Similarly, though significant differences were found on the instrument testing critical thinking dispositions in the total/overall scores, results from the CCTDI indicated that there were no significant differences between the experimental and control group students in some individual dispositional aspects. Students in the experimental group were found to use their critical thinking skills significantly better, and they were able to think of more uses of their critical thinking skills in both academic and real-world contexts, than the students in the control group who did not receive training in critical thinking. Modest to small relationships were also found among the test instruments.
CHAPTER V
Discussion of Findings, Conclusions, and Recommendations
5.0 Introduction
The primary underlying concern of this study was to investigate how higher-order cognitive skills, such as critical thinking, can best help students develop their critical thinking abilities and result in improved performance in educational contexts. More specifically, the purpose of this study was to empirically assess the impact of integrating explicit instruction in critical thinking on student achievement in writing academic essays, critical thinking ability, and critical thinking dispositions.
This chapter discusses the results of the study as they relate to the research questions and
hypotheses (see Chapter I). Following a discussion of the findings for each of the stated questions
and hypotheses and a brief summary of conclusions, this chapter reviews the limitations of the
study, addresses possible implications for practice, and makes recommendations for areas of
future research.
5.1 Discussion of Results
5.1.1 Research Question One:
The purpose of this first question (see Chapter I, Section 1.3) was to determine whether integrating explicit instruction in critical thinking results in improved performance in writing academic essays/papers. This question was addressed to find out whether students who receive explicit instruction in Critical Thinking Techniques perform better on a test that requires them to analyze, interpret, and write an argumentative essay than a group of similar students not receiving explicit instruction in instructional techniques for critical thinking. To address this question, the researcher integrated the critical thinking techniques that can be taught explicitly into an undergraduate academic EFL essay writing course for the experimental group students, using critical thinking activities and assignments that required students to use the techniques throughout the contents and activities of the academic writing course.
To test the effectiveness of the instruction, students in both groups were given an argumentative essay writing test in which they had to develop an argument or claim about a topic and support it with logical and valid reasons in order to persuade their readers to agree with their position. Scores on the essay test served as data for determining whether students who were taught to use the techniques were better able to think critically and write thoughtfully (perform well) on their academic writing skills test than students who were not trained to use the techniques. The data were then analyzed through descriptive statistics and an independent-samples t-test using student scores on the test. The magnitude of the difference between the two groups' means (experimental mean = 70.4440, control mean = 57.9268) suggests that these instructional techniques for critical thinking had an educationally and statistically significant impact on students' abilities to perform thoughtful writing. The difference was significant, t(82) = 6.679, p < 0.05 (2-tailed), and the effect size was very large (Cohen's d = 1.46).
The prominent pedagogical implications of this result also correspond with the views of the following scholars. Chaffee, McMahon, and Stout (2002) and Worrell and Profetto-McGrath (2007) asserted that applying and using critical thinking techniques and activities with different levels of language proficiency in English language classrooms can increase learners' level of thinking and can simultaneously help language learners promote their listening, speaking, reading, and writing abilities. Critical thinking techniques can equip learners with instruments that help them go beyond linguistic factors and develop the art of language learning. Research assumes that critical thinking in essay writing expands the learning experience and makes language learning more meaningful for learners - a vehicle through which they can gradually discover themselves in the process of language learning (Lipman, 2003).
The findings show that a critical thinking approach to learning could be an effective intervention to enhance or promote students' essay writing abilities (see the mean posttest results, where the experimental group significantly outperformed the control group students). Thus, the students' critical thinking and essay writing abilities were positively affected by infusing explicit instruction in critical thinking into English essay writing classroom instruction. Mirman (1988) and Scanlan (2006), as cited in this study (Chapter 2), suggest that critical thinking skills embedded in the subject matter and woven into language education can directly lead to learning a language better.
Qualitative findings also suggest strong linkages between critical thinking skills and academic writing abilities. This fits with evidence of a correlation between critical thinking skills and a far more psychologically complex type of writing that Bereiter and Scardamalia (1985, 1987, 1989) called “knowledge transforming”. Knowledge transforming necessitates thinking about an issue, obtaining the information needed for analysis, and modifying one's thinking. This type of writing leads writers to expand their knowledge base and develop new knowledge by processing new information obtained for the purpose of writing on a topic. Knowledge transforming (or producing academic writing, which requires obtaining and transforming knowledge) is considerably more cognitively complex than knowledge telling (or telling what one already knows) because writers do not merely retrieve information already available to them in memory, but derive it from reading and integrate it with what is already available.
In relation to the effectiveness of integrating explicit instruction in critical thinking into course content, more specifically writing, the findings of this study support the results of earlier studies, for example Clark (2014), which support the assertion that infusing critical thinking into course content using an integrated writing approach can have a significant and predictable impact on both critical thinking skills performance and student academic success.
In relation to the effectiveness of explicit instruction in critical thinking on academic subjects, the findings of this study also support the results of an earlier study by Coughlin (2010), who concluded that research on 21st century skills reveals that student success is more related to critical thinking than to traditional core subject matter (p. 50).
5.1.2 Research Question Two:
While thinking smart in an academic discipline is important for college-level students, of greater concern to many people is whether students transfer the skills they learn in academic settings to real-world problems. The second research question addressed this issue. As indicated in the section on research question one, the researcher integrated explicit instruction in critical thinking into the academic EFL writing skills course for the experimental section. While the subject matter that students thought about in this study was Academic Writing Skills, the elements and standards of reasoning are universal and applicable to any subject matter. The purpose of this question was to examine whether students who were exposed to explicit instruction in critical thinking perform better on a critical thinking ability test than a group of EFL undergraduate students who did not receive explicit instruction in critical thinking. The Ennis-Weir is primarily a test of critical thinking ability, not writing ability: one should focus on the quality of thinking in the written responses rather than on the mode of expression, and on whether what the examinee has written does or does not satisfy the criteria provided in the rubric (Ennis & Weir, 1985; Ennis, 2005). Thus, this second research question examined the quality of general critical thinking ability in the students' written responses in the major areas of critical thinking competence that the test covers.
To test students' abilities to apply the critical thinking skills acquired through reasoning to everyday reasoning tasks, students in both the experimental and control groups took the Ennis-Weir Critical Thinking Essay Test during the first and last weeks of the course. The Ennis-Weir is presented as a letter to the editor on a parking problem faced by a small town. Students were asked to respond to each argument made by the concerned citizen writing the letter and finally to assess whether the letter as a whole provides adequate support for the author's proposed solution. Results on the Ennis-Weir showed that students in the experimental group performed at a statistically significantly higher level than students in the control group, t(82) = 9.908, p < .05, and the findings indicated a very large effect size (Cohen's d = 2.16). While pretest means were nearly similar, posttest means increased by 26.54 points in the experimental group and by 9.67 points in the control group on the Ennis-Weir CTET. While not making as much of a gain as the experimental group, the control group also showed a statistically significant improvement from pretest to posttest; significant pretest-to-posttest gains were observed in both groups. But students who received training in critical thinking techniques performed better on a task requiring evaluation of written arguments on a contemporary issue than a group of similar students who did not receive explicit instruction in critical thinking.
While an increase in scores from pretest to posttest in both the experimental and control groups is not an unexpected phenomenon, such an increase in the mean scores of the control group (42.81 to 52.48) at a statistically significant level is unexpected, but encouraging. A possible explanation for this is that either the traditional implicit instruction used in the general education academic writing courses did help students develop critical thinking skills, or, as studies indicate, students' prior critical thinking skill significantly influenced critical thinking performance (Quitadamo, Brahler, and Crouch, unpublished results, cited in Quitadamo and Kurtz, 2007). Specifically, students with the highest prior critical thinking skills showed the largest performance gains, whereas students with low initial skill were at a comparative disadvantage. The fact that prior critical thinking skill also had a large effect on critical thinking performance in this study increases the generalizability of the observation and underscores its importance. Simply put, students who have not been explicitly taught how to think critically may not reach the same potential as peers who have been taught these skills, not because they lack the cognitive hard-wiring to perform but because they lack the tools to build their knowledge.
This is supported by the fact that the control group showed slightly higher mean pretest scores than the experimental group on the Ennis-Weir (even if the difference was not statistically significant), which probably reflects the fact that the control group also participated in activities requiring critical thinking prior to the intervention. Most researchers working in the area of critical thinking (see Chapter 2: Literature Review) agree on the important role of background knowledge. In particular, most researchers see background knowledge as essential if students are to demonstrate their critical thinking skills (Case, 2005; Kennedy et al., 1991; Willingham, 2007). As McPeck (1990) has noted, to think critically, students need something to think critically about. However, Ennis (1989) and most researchers view background knowledge as necessary but not sufficient for critical thinking.
Another possible explanation for this significant gain in critical thinking from pretest to posttest in the control group may be the use of academic writing. Writing (the specific type of writing was not identified) has been found and argued by many researchers, for example Wade (1995), Clark (2014), and Quitadamo and Kurtz (2007), to be the best medium for students to express their critical thinking, and these studies show that critical thinking is most effectively taught when critical thinking and composition are taught in an integrated manner. Wade (1995) also found that writing (again, the type of writing was not identified) promotes greater self-reflection and depth of logic compared with oral communication and may be the best medium for students to express their critical thinking skills. However, the significant increase in scores in a comparison group (the control group, in this case) found in this research study deserves further consideration, and future studies that analyze student essays in more detail would provide greater insight into how writing influences critical thinking skill.
As mentioned earlier, results on the Ennis-Weir showed that students in the experimental group performed at a statistically significantly higher level on the posttest than students in the control group, t(82) = 9.908, p < 0.05 (2-tailed), and the findings indicated a very large effect size (Cohen's d = 2.16). By way of comparison, the experimental group's mean increase is slightly smaller (by 8.09 points) than the increase found in the 2013 pilot study across two sections of civil engineering students (n = 90) receiving the Academic EFL Writing Skills course. Despite this 8.09-point smaller mean increase on the Ennis-Weir, the findings of the present study were consistent with the results of the preliminary pilot study, indicating a statistically significant difference between the experimental and control groups.
The results of this study are similar to the results of an earlier study by Reed (1998) in the U.S. on a group of community college history students (n=52), in which the experimental group performed at a statistically significantly higher level than the control group on the Ennis-Weir posttest. This study also contrasts with Reed's in that the control group's posttest mean increased by 9.67 points in the present study but decreased by 2.63 points in her study on the Ennis-Weir. A possible explanation for this is that, while the subject matter students were taught in this study was academic writing, both the experimental and control groups were shown to rely less on rote learning than in the history subject taught by Reed. Alternatively, these results on the Ennis-Weir gave this researcher an opportunity to hypothesize that a close connection between critical thinking abilities and academic writing skills, in addition to the instructional methods (critical thinking teaching techniques) and materials used, might account for this difference. However, further study will be needed.
These results can also be compared with findings from a study at Baker University (Hatcher, 1995) in which freshmen who completed a two-semester sequence in English Composition and Critical Thinking between 1991 and 1998 (n = 977) averaged an increase in mean scores of 5.3 on the Ennis-Weir. It is important to note that the Baker University study also included a comparison group, students in an introductory logic course at a state university, who showed a mean decrease of 1.4 points on the Ennis-Weir.
Along with the correlational and experimental research findings in this study, experimental and case study research (Hatcher, 1999; Tsui, 2002; Hatcher, 2006; Quitadamo and Kurtz, 2007) shows that critical thinking is most effectively taught when critical thinking and composition are taught in an integrated manner. Quitadamo and Kurtz (2007) examined this specific question experimentally in a general education biology class at a university about twice the size of USMA (United States Military Academy). They showed that students who completed written assessments had significantly higher gains in critical thinking than those who completed only multiple-choice assessments.
The findings of the present study, the data from published assessment research, and end-of-course survey data suggested that we should tackle the challenge of enhancing critical thinking using some form of writing. The challenge that remained, however, was that it was not at all clear how we could implement critical thinking through writing. As previously mentioned, even in Robert Ennis's “streamlined conception of critical thinking” (1991), critical thinking is a complex skill, or rather a set of skills. Therefore, any writing approach to enhancing critical thinking certainly could not be accomplished by adding only one or two new writing assignments to the course; tackling this issue would require a new overarching and integrated pedagogical approach specifically focused on analytical writing (Clark, 2014). This finding, therefore, reinforces the importance of providing explicit instruction in critical thinking rather than simply viewing critical thinking as an implicit goal of a course.
5.1.3 Research Question Three:
The purpose of this question was to find out whether integrating explicit instruction in critical thinking can have a significant and predictable impact on component critical thinking performance, provide insight regarding the strength of critical thinking skills in students who received training in critical thinking techniques, and identify the areas of strength and weakness in the groups' scale scores between the experimental and control groups. Scale scores, according to the California Academic Press (2013), are important for identifying areas of strength and weakness. Scale scores can also give direction to teachers and institutions in the development of programs to help students improve their critical thinking skills. For example, if the group is relatively weak in one or more skill areas (Interpretation, Analysis, Evaluation, Inference, Self-regulation, Explanation, Inductive or Deductive Reasoning skills), the CCTST scale scores can be examined to see where the group was particularly weak and where it was strong, and novel scenarios, case studies, or group problem-solving exercises can be designed to emphasize and practice those skills where weaknesses were found (pp. 40-42).
As was discussed in the previous two sections, Critical Thinking Techniques were explicitly taught in the experimental section, and students had numerous opportunities to practice these instructional techniques. Focused and integrated application of the Delphi core reasoning skills, including interpretation, analysis, evaluation, inference, explanation, and self-regulation, was a major concern of this study. Thus, these six critical thinking skills were emphasized throughout the course of the semester, although some skills not found in the instructional model of the Delphi Study Report (for instance, inductive or deductive reasoning) were perhaps not. The explicit instruction in critical thinking should also attend to the dispositional or affective components of critical thinking. However, these were not given equal emphasis, except in some situations between discussions when students were demotivated, given that unmotivated individuals are unlikely to exhibit critical thinking (Paul, 1992, p.13).
This question was built on the information obtained from the Ennis-Weir Critical Thinking Essay instrument. The Ennis-Weir is a general test of critical thinking ability in the context of argumentation (of students' abilities to think critically about an everyday reasoning task). Along a slightly different line, the focus of this third question is to examine how well students use and demonstrate the skills of critical thinking (interpretation, analysis, evaluation, inference, explanation, and self-regulation) to discuss and critique the article in their written responses. Specifically, students' written responses on the Ennis-Weir Critical Thinking Essay instrument were assessed for their abilities to: (1) understand the particular problem in the letter (interpretation), (2) identify central arguments (analysis), (3) assess the credibility of these arguments, using the logic in the previous steps, to decide what to do or believe (evaluation), (4) understand the consequences of that decision (inference), (5) communicate the process of one's own thinking clearly, concisely, accurately, and deeply to others (explanation), and (6) engage in an introspective process of evaluating one's own thinking and remaining open to changing one's own beliefs and opinions (self-regulation).
The analysis of descriptive statistics (Table 4.7) showed that pretest scores in the experimental and control groups were similar: the mean pretest scores of the experimental group (both scale and overall) closely resembled the mean pretest scores observed in the control group (both scale and overall). While not making as much of a gain as the experimental group, the control group also showed a considerable improvement from pretest to posttest; there was an increase in students' gains from pretest to posttest in both groups (both scale and overall scores). The larger gain, however, was noted in the experimental group. Although the control group improved its scores and made a remarkable change from pretest to posttest, the change was not as substantial as that of the experimental group. Thus, the substantial growth in critical thinking performance demonstrated by the experimental group at the end of the experiment could be a positive and significant sign that the instructional techniques used to teach critical thinking can help students learn and develop the component skills of critical thinking.
The analysis of descriptive statistics and the independent-samples t-test (Tables 4.7 and 4.8) indicated that, while pretest means were similar (i.e., there was no significant difference between the experimental and control groups), the posttest means of the reasoning skills overall score differed significantly between the experimental and control groups, t(82) = 5.048, p < 0.05. The mean of the experimental group (M = 50.70) was significantly higher than the mean of the control group (M = 44.97), and the finding also indicated a large effect size (Cohen's d = 0.95). By way of comparison, however, the experimental group's mean increase is slightly smaller than the mean increase found in the 2013 preliminary study across two sections of students in the academic writing skills course. A possible explanation for this is that students in the pilot test might have had a better critical thinking background than students in the current main study.
All six critical thinking skills were emphasized and practiced throughout the course of the semester, although some (for instance, analysis, evaluation, and explanation) improved significantly more than others (e.g., interpretation, inference). The significant gains in analysis, evaluation, and explanation are encouraging, while the effectiveness of explicit instruction in CT on the interpretation and inference critical thinking skills did not appear to differ significantly between the experimental and control groups. A possible explanation for the latter may be students' inadequacy in interpreting and understanding the question being asked, language comprehension issues, or difficulty drawing conclusions from reasons and evidence. A possible explanation for the larger gains is that analysis, evaluation, and explanation may have been 'new' to students: this was the first course that asked them to critically evaluate and identify the logic of the authors they read (rather than accepting the article as 'truth'), to make a final decision about what to believe or what to do, and to describe/explain the evidence and reasons of their thinking processes. In addition, these three critical thinking skills may have been less intuitive and thus easier to learn, and/or easier to demonstrate (and therefore to score) in an essay format. Thus, the newness of analysis, evaluation, and explanation may have contributed to the larger gain in these skills.
The findings of this study support the results of earlier studies in two areas. The results are similar to those of Quitadamo and Kurtz (2007) and Hofreiter (2005) in that the analysis and evaluation critical thinking skills increased significantly more in the critical thinking group than in the non-critical-thinking group (Hofreiter) and in the writing group than in the non-writing group (Quitadamo and Kurtz). This study contrasts with them, however, in that it resulted in a significant gain in the explanation skill, whereas Hofreiter (2005) and Quitadamo and Kurtz (2007) reported significant gains in the self-regulation and inference critical thinking skills. The three studies have thus produced three different significant gains across these three critical thinking skills (explanation, self-regulation, and inference), so further research is needed.
The analysis of component skills provided greater insight into the particular critical thinking skills that students changed in response to writing. Specifically, critical thinking students significantly improved their analysis, evaluation, and explanation critical thinking skills, whereas non-critical-thinking students did not. Although not statistically significantly different from the control group, critical thinking students also improved their inference and interpretation skills much more than non-critical-thinking students. These results indicate that the process of writing helps students develop their analysis, evaluation, and explanation skills more than their interpretation and inference critical thinking skills. Prior research indicates that the writing-to-learn strategy is effective because students must conceptually organize and structure their thoughts as well as their awareness of thinking processes (Langer and Applebee, 1987; Ackerman, 1993; Holliday, 1994; Rivard, 1994; Quitadamo and Kurtz, 2007). More specifically, as students begin to shape their thoughts at the point of construction, they continually analyze, review, and clarify meaning through the processes of drafting and revising, and they necessarily engage and apply analysis and inference skills (Klein, 1999; Hand and Prain, 2002). In this study, the process of writing appears to have influenced critical thinking gains. It also seems likely that writing students experienced a greater cognitive demand than non-writing students simply because the writing act required them to hypothesize, debate, and persuade (Rivard, 1994; Hand and Prain, 2002) rather than memorize, as was the case in the non-critical-thinking control courses.
Conversely, the lack of any significant change in the interpretation and inference CT skills in the CT group, and in the interpretation, analysis, evaluation, inference, and explanation critical thinking skills in the non-critical-thinking group, indicated that the traditional implicit instruction used in the general education academic writing courses did not help students develop critical thinking skills. Based on the results of this study, it could be argued that traditional academic writing instruction actually prevents the development of critical thinking skills, which presents a rather large problem when one considers how frequently these traditional methods were used in general education academic writing courses. One also has to consider that the critical thinking gains seen in the CT group might have resulted from the relative absence of traditional academic writing instruction rather than from CT alone. Additional research will be necessary to gain further insight into this question. Either way, changes to the traditional model of academic writing instruction will be necessary if the goal is to enhance the critical thinking abilities of general education academic writing students.
Even though the experimental group's overall mean score of 50.70 was significantly higher than that of its comparison group (M=44.97), by reference to the frequency distributions of CTST scale scores (Table 4.10) one can infer that only a smaller number of students, 13(30.1%) of this group (n=43), are in the Moderate (70 to 78) range, indicating that only a small number of students were able to solve their reasoning problems, or can make judgments derived from quantitative reasoning in a variety of contexts.
The interpretation of the CT overall and scale scores was made in accordance with the scoring profile developed by Insight Assessment (2013), as described in Table 3.1, Chapter 3 (Methodology Section). Comparing the CCTST overall scores of the frequency distributions in Table 4.10 to the recommended performance assessment levels (Table 3.1, Chapter 3), one can infer the following: 15(33.9%) of students did not manifest (50 to 62) critical thinking skill, a result that, according to the CCTST Test Manual (Insight Assessment, 2013), is consistent with possible insufficient test-taker effort, cognitive fatigue, or possible reading or language comprehension issues; 14(32.7%) of students displayed Weak (63 to 69%) overall skill, a result predictive of difficulties with educational and employment-related demands for reflective problem solving and reflective decision making; 13(31.1%) of students fell into the Moderate (70 to 78) range, a result that suggests the potential for skills-related challenges when engaged in reflective problem-solving and reflective decision-making associated with learning or employee development; 1(2.3%) showed Strong (79 to 85%) overall skill, a result consistent with the potential for academic success and career development; and none of the students (0 per cent) displayed Superior (86 to 100) critical thinking skill overall. The recommended performance assessments of the individual CCTST overall scores thus allow the observation that, with the exception of one individual (2.3%) who is exceptionally strong in critical thinking overall and 13(31.1%) students who demonstrate generally moderate skills (indicating that these students were able to solve their reasoning problems, or can make judgments derived from quantitative reasoning in a variety of contexts), the larger number of students (66.4%) were not able to manifest their skills (a result consistent with possible insufficient test-taker effort, cognitive fatigue, or possible reading or language comprehension issues) even after a semester-long instruction in CT.
Experimental group students show more strength in four of the five reported scale areas, namely interpretation 4(9.3%), analysis 5(11.6%), evaluation 3(6.9%), and explanation 1(2.3%), than the control group students, whose scores on each of these scales indicate that the CT skill being measured was not manifested (very weak). The recommended performance assessment level for these CTS scale scores is moderate (70 to 78%) for 13(30.1%) of the experimental group students.
5.1.4 Research Question Four :
One aspect of critical thinking that increasingly appears as an integral part of various models of critical thinking, including the Delphi Study Report and Richard Paul's model, is a person's “critical spirit,” “habit of mind,” or general disposition toward using critical thinking. While an individual may possess the skills needed for good reasoning, he or she may not choose to use them, or may use them in a self-serving way. Conversely, many theoreticians maintain that a person who is adept at critical thinking would be disposed toward using critical thinking in his or her personal, professional, and civic affairs (Reed, 1998). This fourth research question was addressed to determine whether or not students who were trained in critical thinking techniques show improvement in their dispositions toward critical thinking over the course of a semester.
As was described in the previous two sections, the Delphi Report instructional techniques for critical thinking were explicitly taught in the experimental section, and students had numerous opportunities to practice these techniques. The critical thinking skills and the elements of reasoning (see Paul's model) were emphasized most explicitly and frequently, followed by the standards. The intellectual traits of a critical thinker, the aspects of Paul's model most closely related to critical thinking dispositions, were however not given emphasis, except for some introduction and discussion in between the critical thinking skills.
To determine students' dispositions toward critical thinking, students in both groups took the California Critical Thinking Dispositions Inventory (CCTDI) at both the pre-instruction and post-instruction stages of the course. Results from statistical analyses of the scores (both overall and individual scale scores) on this instrument showed mixed mean gains with respect to significant growth between the experimental and control groups. No differences (in either overall or scale scores) were found between the two groups at the pretest of the experiment (CCTDI Total: Experimental M=283.58, Control M=283.46; CCTDI scale scores: Truth-seeking: Experimental M=43.2093, Control M=41.3171; Open-mindedness: Experimental M=39.5581, Control M=41.1220; Analyticity: Experimental M=40.3721, Control M=40.5122; Systematicity: Experimental M=41.4651, Control M=42.2439; CT self-confidence: Experimental M=38.7209, Control M=37.9757; CT Inquisitiveness: Experimental M=39.7209, Control M=39.8293; Cognitive Maturity: Experimental M=42.3023, Control M=40.4634).
However, the posttest means of the experimental group were significantly different from the
posttest means of the control group on the CCTDI overall scores (Experimental M = 302.30,
Control M = 286.22). The differences in the mean posttest scale scores were too small for
truth-seeking (EG=44.3721, CG=42.5122), open-mindedness (EG=42.2326, CG=40.8293), and cognitive
maturity (EG=41.0930, CG=41.3659) to show any significant differences, while there is good
evidence of significant growth in the mean CT dispositions toward analyticity (EG=42.3488,
CG=38.9268), systematicity (EG=48.2791, CG=43.7317), CT self-confidence (EG=41.1395,
CG=37.6341), and CT inquisitiveness (EG=45.1628, CG=41.2439), indicating that the
experimental group scored significantly higher than the control group in these dispositional
aspects.
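For readers who wish to see how such a group comparison can be computed, the sketch below illustrates an independent-samples t-test of posttest scores in Python. It is only a minimal illustration of the kind of two-group comparison reported here, not the procedure actually used in the study, and the score lists are placeholders rather than the study's data.

    # A minimal sketch (not the study's actual analysis) of an independent-samples
    # t-test comparing posttest CCTDI totals between two groups. The score lists
    # below are hypothetical placeholders, not data from this study.
    from scipy import stats

    experimental_posttest = [302, 310, 295, 318, 299, 305]  # hypothetical CCTDI totals
    control_posttest = [288, 279, 292, 284, 290, 281]       # hypothetical CCTDI totals

    t_stat, p_value = stats.ttest_ind(experimental_posttest, control_posttest)
    print(f"t = {t_stat:.3f}, p (two-tailed) = {p_value:.3f}")
    # A p-value below 0.05 would be read as a significant difference between the group means.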
The interpretation of the CCTDI overall and scale scores was made in accordance with the
scoring profile developed by Facione and Facione (1997), as described in Chapter 3 (Methodology
section). Thus, the total means at the pretest (283.58 and 283.46) fall within the range that
the developers of the instrument identify as relatively weak to strong (280 to 350) for the total
score of disposition toward critical thinking. Though the total CCTDI scores indicate a positive
overall disposition toward critical thinking (> 280), there was no significant difference in the
total CCTDI scores or on any one of the individual scales between the experimental and control
groups (p > 0.05, two-tailed).
According to Facione and Facione's (1997) scoring profile, the pretest mean scores indicate that
both the experimental and control groups show positive dispositions (> 40) toward truth-seeking,
analyticity, systematicity, and cognitive maturity, and ambivalent dispositions (31-39) toward
open-mindedness, CT self-confidence, and CT inquisitiveness. These mean scores show a striking
similarity between the experimental and control groups, indicating that there is no significant
difference between the groups; on some scales students are positively disposed (particularly keen
to use their critical thinking skills with the challenges they face) and on others ambivalently
disposed (not keen to use their critical thinking skills at the time of challenge).
The experimental group students in this study are positively disposed on both the overall CTD
(> 280) and the CTD scale (> 40) scores. This is encouraging, as the results suggest that they are
keen to use their critical thinking skills when a situation calls for it. The control group
students can be considered positively disposed (> 280) on the overall score and ambivalently
disposed (31-39) toward analyticity and CT self-confidence.
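Because the Facione and Facione (1997) profile is applied repeatedly in this discussion, the cut-offs described above can be restated compactly in code. The Python sketch below is illustrative only; the function names are the researcher's own, and the handling of scale scores falling between 39 and 40, which the profile description above does not specify, is an assumption.

    # Illustrative classification following the cut-offs described above:
    # overall CCTDI: > 280 counts as positively disposed (280-350 = relatively weak to strong);
    # scale scores:  > 40 positive, 31-39 ambivalent, below 31 treated here as negative.
    def classify_overall(total_score):
        return "positively disposed" if total_score > 280 else "not positively disposed"

    def classify_scale(scale_score):
        if scale_score > 40:
            return "positively disposed"
        if scale_score >= 31:  # assumption: scores between 39 and 40 are treated as ambivalent
            return "ambivalent"
        return "negatively disposed"

    # Examples using values reported above.
    print(classify_overall(302.30))   # experimental overall posttest mean -> positively disposed
    print(classify_scale(44.3721))    # truth-seeking posttest mean -> positively disposed
    print(classify_scale(38.7209))    # CT self-confidence pretest mean -> ambivalent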
Although an overall improvement in critical thinking disposition at the posttest is encouraging,
the mean posttest scores on some individual scales of CTD (for example, truth-seeking,
open-mindedness, and cognitive maturity) did not show a significant difference (p > 0.05,
two-tailed). This similarity in mean scores indicates that exposure to the educational
experience, whether in the form of explicit instruction in critical thinking or the traditional
method of teaching, does not appear to have made these students learn truth-seeking (being more
eager to ask challenging questions and more objective in the search for knowledge and truth),
open-mindedness (acting with tolerance toward the opinions of others and approaching issues from
different perspectives), or cognitive maturity (the tendency to see problems as complex rather
than engaging in black-and-white thinking, and the habit of making judgments in a timely way, not
prematurely). This is certainly not comforting news for the researcher or for those involved in
educating these students, as one would hope that the dispositions toward truth-seeking,
open-mindedness, and cognitive maturity would be promoted through the educational process
(whether through explicit instruction in CT or the traditional approach, as the findings of this
study indicate).
As the CCTDI is relatively new in our context (the researcher examined a number of studies), no
local studies are currently available for comparison purposes. Students' mean scores in the
current study (both overall and scale scores) were thus compared to preliminary (pilot test)
scores, and the results of this study were consistent with results from the preliminary studies
(see Chapter 4, Methodology section). Findings of the present study revealed a statistically
significant difference in overall CTD (Experimental M=302.30, Control M=286.22) between the
experimental and control groups. This means that a semester-long course employing these
instructional techniques has an effect on students' dispositions toward critical thinking (even
though the instruction in CT did not emphasize this variable). The mean total score of 302.30 in
this study is lower than Facione and Facione's (1997) means for third-year students (M=308) and
licensed students (M=317), and higher than that for first-year students (M=298.6). This total CTD
also contrasts with Reed's (1998) findings (Pretest: M=303.35, Posttest: M=303.90), which showed
no significant difference between the pretest and posttest overall CCTDI scores (in a single
group), suggesting that integrating explicit instruction in Paul's model into a history course had
no effect on her students' dispositions toward critical thinking. These means are also consistent
with findings in this researcher's pilot study in 2013 (Posttest: Experimental M=303.59, Control
M=300.76).
Despite some variations in total CCTDI mean scores across the studies surveyed, all of the total
means for the CCTDI revealed that students who received explicit instruction in CT were positively
disposed toward critical thinking, with overall CCTDI scores (> 280) falling within the range the
developers of the instrument identify as relatively weak to strong (280 to 350) dispositions
toward CT. This contrasts with Taha (2003), who found that the majority of his undergraduate
nursing students showed ambivalent dispositions on the overall CCTDI.
The mean posttest scores on most individual scales of CTD (for instance, analyticity,
systematicity, CT self-confidence, and CT inquisitiveness) were found to be significantly
different between the experimental and control groups in this study. This agrees with
Tiwari (1998), who found a significant difference for these dispositional characteristics except
CT inquisitiveness. The findings of this study are also consistent with Taha (2003), who found
significant differences in these CTD scales except CT self-confidence.
Findings also indicate that truth-seeking is the most difficult disposition to develop (Facione et
al., 1997; Rimiene, 2002). Ambivalence toward truth-seeking is common (Profetto-McGrath et al.,
2003; Coloccielo, 1997; Facione, 1995). Qualitative findings identified work culture, hierarchy,
"traditional thought", and lack of confidence to question as barriers. Findings also suggest that
the use of reasoning skills and evidence is not apparent in practice; it may even be at odds with
professional socialization (Duchscher, 2003). Other barriers identified include the personal,
professional, and political risk associated with CT, and issues relating to power relations; this
resonates with Brookfield's (1993) work on suicide. Learners could benefit from examining the
social, cultural, and psychological factors that support "traditional" forms of knowledge and
truth-seeking.
Qualitative findings suggest that research appraisal skills support CTD development. This fits
with Profetto-McGrath et al.'s (2003) linkage of CTD and research utilization and Thompson and
Rebesch's (1999) suggestion that literature review develops truth-seeking and systematicity.
Critical thinking tools, e.g., thinking, reflection, and appraisal frameworks, may enhance
abilities and consequently dispositions.
The discussion so far has highlighted some of the similarities between the findings of this study
and those of earlier ones. In addition, some noticeable differences have also been observed. It is
not surprising, however, that some dispositional aspects of critical thinking did not
significantly increase over the course of the semester, as these were not the focus of instruction.
Although the behavior of thinking critically may act to change a student's disposition, one
assumes this change happens gradually and may not be detected in one semester. Given more
time, a change might be detectable; the literature does suggest that cognitive maturity is linked
to developmental growth and that students may naturally improve in this disposition as a result of
attending college (Lai, 2005).
5.1.5. Relationships among Achievement on the Three Instruments
Three instruments were used as outcome measures in this study: the instructor-developed
academic argumentation essay writing test, the Ennis-Weir Critical Thinking Essay Test, and the
California Critical Thinking Dispositions Inventory. These three instruments were used to test for
three different types of outcomes anticipated as a result of the course materials and instructional
methods. Thus, the academic essay writing test was intended to test students' ability to
analyze, interpret, and write academic papers thoughtfully; the Ennis-Weir is discipline-neutral
and tests for general reasoning abilities; the CCTDI is also discipline-neutral and is designed to
test for beliefs and attitudes that dispose one toward critical thinking.
Relationships do, however, exist among the outcomes of the instruments, and correlation analysis
was conducted to determine the strength and direction of these relationships. Findings in this
study indicate that each instrument was positively related to each of the other two instruments,
but the strength of that relationship varied. The strongest relationship (r = .404) was found
between the academic writing test and the Ennis-Weir general critical thinking ability exam. This
means that about 16% shared variance was found between the two variables, representing a moderate
effect. This moderate relationship might be expected, since the ability to write thoughtfully
requires students to evaluate, analyze, interpret, and synthesize information critically, skills
that form the core, academic survival-level abilities students need to be successful in their
university work. These two instruments can thus both be used to measure critical thinking
abilities. The ability to think critically is probably the major factor underlying the
relationship between achievement on these two instruments.
The relationship between the academic essay writing test and the CCTDI was r = .225, indicating a
small, positive relationship with about 5% shared variance, which represents a small effect
between the two variables. Each of these two instruments relates in one way or another to the
general ability to think critically.
The Ennis-Weir and the CCTDI also have a small, positive relationship (r = .118), with 1.39 per
cent shared variance between them. Thus, this 1.39% of the variability represents a small effect.
Each of these two instruments relates to a major component of critical thinking – the
Ennis-Weir tests mainly for reasoning skills and the CCTDI for critical thinking dispositions.
Experts find that having dispositions toward critical thinking is as crucial to being considered a
good critical thinker as is the possession of the requisite cognitive skills.
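The shared-variance figures quoted in this section are simply the squared correlation coefficients expressed as percentages. The short Python sketch below reproduces that arithmetic for the three instrument pairs; the coefficients are those reported above, and the dictionary labels are merely descriptive.

    # Shared variance is the squared correlation coefficient (r squared), shown
    # here as a percentage; the coefficients are those reported in the text.
    correlations = {
        ("Academic writing test", "Ennis-Weir"): 0.404,
        ("Academic writing test", "CCTDI"): 0.225,
        ("Ennis-Weir", "CCTDI"): 0.118,
    }

    for (first, second), r in correlations.items():
        shared_variance = (r ** 2) * 100
        print(f"{first} vs {second}: r = {r:.3f}, shared variance = {shared_variance:.2f}%")
    # 0.404 -> about 16.3%, 0.225 -> about 5.1%, 0.118 -> about 1.39%,
    # matching the percentages quoted above.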
The weakness of the relationship between two of the instruments in this study may reflect the
absence of special training for the trainer before developing the instructional materials and
methods used to provide explicit instruction in critical thinking. It is also important to note
that interventions in which educators received special training in teaching critical thinking have
shown the largest effect sizes, compared to studies in which course curricula were simply aligned
to critical thinking standards or critical thinking was simply included as an instructional
objective. Thus, successful interventions may require professional development for teachers
specifically focused on teaching critical thinking (Abrami et al., 2008).
5.2. Summary of Findings
The desire to empirically examine the effectiveness of explicit instruction in critical thinking in
promoting students' critical thinking and academic essay writing skills led this researcher to the
present study. The question at the heart of the study was: to what extent is explicit instruction
in critical thinking responsible for enhancing students' critical thinking abilities and promoting
students' learning to write academic essays in EFL contexts? The main objective of this study was
to test the impact of the instruction on three outcome measures.
A two-group quasi-experimental pretest/posttest design was employed with 43 students in an
experimental group, who were explicitly taught to use critical thinking instructional techniques
and critical thinking classroom support materials (integrated into the academic writing course
material), and 41 students in a control group, who were taught with the traditional lecture method.
Prior to the design, implementation, and evaluation of the experiment, a literature review was
carried out. The literature review examined the theory and practice of learning critical
thinking in both general and EFL education.
The major findings of this study are thus summarized as follows:
• The findings show that integrating explicit instruction in critical thinking into academic
writing course material resulted in improved student academic writing skills.
• The findings also show that explicit training in critical thinking does appear to play a
significant role in developing college students' general reasoning abilities (or general critical
thinking abilities).
• The findings of this study indicate that explicit instruction in critical thinking had a
significant effect on students' core reasoning skills in general, and on the analysis, evaluation,
and explanation critical thinking skills in particular. However, it was also noted that explicit
instruction in critical thinking techniques made no significant impact on the interpretation and
inference critical thinking skills.
• The findings of this study show that the experimental group students have more strength in four
of the five reported scale areas – interpretation, analysis, evaluation, and explanation – than the
control group students, whose scores on each of these scales indicate that the critical thinking
skill being measured was not manifested (very weak). The recommended performance assessment levels
for these component critical thinking scale scores are moderate (70 to 78%) for 13 (30.1%) of the
experimental group students.
• In relation to CTD (beliefs and attitudes that dispose one toward critical thinking), the
findings show that the critical thinking approach to learning had a significant effect on promoting
students' overall dispositions and some individual dispositional aspects (analyticity,
systematicity, CT self-confidence, and CT inquisitiveness). However, the critical thinking approach
to learning was found to have had no significant effect on three dispositional aspects –
truth-seeking, open-mindedness, and cognitive maturity – in this study.
• Though the strength of the relationship among the outcome instruments varied, findings in this
study indicate that each instrument was positively related to each of the other two instruments.
5.3. Implications for Practice
This study was conducted in a naturalistic educational setting with many of the variables typically
found in a university undergraduate course, including a regular faculty member with a heavy
teaching schedule and students who initially enroll and then later drop a course for a variety of
reasons. Despite these challenges, the findings revealed large effect sizes on the instruments
testing students' learning to write thoughtful academic essays and general critical thinking
skills. Finding practical and significant results on such instruments – indicating that explicit
instruction in critical thinking (using instructional techniques) can improve both students'
abilities to think within a discipline (learning to write academic essays in English) and their
abilities to think critically (for general, discipline-neutral use) – provides a powerful incentive
to look more closely at the possible consequences of integrating these instructional techniques
more widely into educational curricula. Indeed, the findings of this study concerning the
effectiveness of the instructional techniques and materials for critical thinking in improving
students' abilities to think critically hold important implications for several groups of people,
including educators, business leaders, and society.
From the viewpoint of educators, future employers, and society in general, training students to
think critically is among the principal tasks of the educational system. Critical thinking
abilities such as analyzing complex issues and situations and generating solutions, making
connections and transferring insights to new contexts, and developing standards for decision
making are necessary for success in society. If educators truly want their students to have
high-level thinking abilities, and if society really needs its citizens to be able to think
critically, they must influence faculty and institutions to integrate explicit instruction in
critical thinking into all levels of schooling and all academic areas.
For educators, understanding both the nature of learning to think critically and the methods of
instruction through which this can be done is essential. There is little evidence that most
students will improve in their abilities to think critically simply by attending classes – even if
the teacher or instructor is a good critical thinker and uses critical thinking in planning his or
her lessons. There is, on the other hand, much evidence, including this study, to show that if we
want our students to think critically, we must explicitly teach them how to do so. In the present study,
training in critical thinking was both direct and intense. Similarly, to improve as critical thinkers,
students must be taught components of core critical thinking skills (the Delphi Study Report,
1990) explicitly and thoroughly, and they should be provided with frequent practice in using the
instructional techniques and materials. These instructional techniques and materials need to be
deeply integrated into course content, not just introduced or used a few times during a semester.
Implicit modeling of critical thinking, combined with a few scattered lessons providing critical
thinking practice (as emphasized by a number of studies), is not likely to be effective for most
students. The most essential implication of this study may be the importance of recognizing the
need for explicit and intense training for critical thinking.
A further implication of this study is that educators might also reasonably consider that the
challenge involved in learning to think critically could have a positive or negative impact on
students' dispositions (attitudes or motivation) toward using critical thinking. This study
indicates that this consideration is valid. One of the findings from this study shows that
students' overall critical thinking dispositions toward learning to think critically and using
critical thinking skills wherever necessary, as measured by the California Critical Thinking
Dispositions Inventory (CCTDI), did appear to differ significantly from the dispositions of
students taught through a more traditional approach to learning. Though some dispositional aspects
such as truth-seeking, open-mindedness, and cognitive maturity did not vary between the
experimental and control groups, results from the CCTDI indicate that the two groups differed
significantly in overall attitudes. This holds an important implication for educators and
instructors: explicit instruction in critical thinking can also lead to attitudinal or motivational
learning.
5.4. Limitations
1. The results of this study depend on integrating explicit instructional techniques in
critical thinking into one particular academic EFL writing course. Integrating these
techniques into other course content might produce different results. Further research
is clearly needed to explore the generalizability of these findings.
2. The level of instructor training required to successfully integrate the instructional
techniques in critical thinking and the critical thinking classroom support materials into
the course content was another limitation of this study. The instructor for this study
learnt to use the techniques and materials from videos and handbooks. While it may be
possible to learn to use the techniques from videos and handbooks only, instructors who
receive and participate in intensive training, such as professional development workshops,
might find different results.
3. The use of only three outcome measures to assess the effectiveness of CT is another
limitation. While these measures offer an important insight into the effectiveness of a
critical thinking approach to learning, they reveal little about the process of this
learning experience. In programme evaluation, both the outcome and the process are
important.
4. A further limitation of this study was that the time span of 13 weeks may not be long
enough for significant changes to take place, as one's critical thinking and attitudes
toward thinking and learning take time to develop and continue developing throughout life.
5. This finding reinforces the importance of providing explicit instruction in critical thinking
rather than simply viewing critical thinking as an implicit goal of a course. However, the
absence of special training for the researcher in teaching critical thinking before the
experimental treatment may have affected the effectiveness of the instruction and students'
results. Studies have found that interventions in which educators received special training
in teaching critical thinking had the largest effect sizes, compared to studies in which
course curricula were simply aligned to critical thinking standards or critical thinking was
simply included as an instructional objective. Thus, successful interventions may require
professional development for teachers specifically focused on teaching critical thinking
(Abrami et al., 2008). In order to develop in learners the ability to think critically,
teachers themselves must possess this dispositional and cognitive capacity.
5.5. Recommendations
• Students in this study who received training in critical thinking techniques and materials for
critical thinking improved their abilities to write thoughtfully, their general abilities to think
critically, and their dispositions toward critical thinking. Whether these results will continue
over time and transfer to other settings is an open question. One possible area for research is a
follow-up study on students who participated in this study, to see whether they are using the
critical thinking abilities they gained and whether they are more likely to apply them in everyday
situations when compared to students in the control group.
• This study focused only on undergraduate-level students and on a particular subject area.
Although the findings of this study indicate significant benefits from integrating the
instructional techniques and materials into the curriculum, carefully conducted empirical studies
should be done at different grade levels and in a variety of subject areas.
• Because learning to think critically was one of the dependent variables in this study, the
Ennis-Weir Critical Thinking Essay Test (Ennis & Weir, 1985; Ennis, 2005) was used to measure the
outcome in students' abilities to reason on everyday reasoning tasks. Other critical thinking
assessment instruments, for example the California Critical Thinking Skills Test (Facione, 1992),
a multiple-choice critical thinking instrument, might be used. It would also be important to test
for changes in other subject areas using instruments appropriate for that content.
• It is at least conceivable that the increase in critical thinking ability may not be completely
attributed only to the changes that occurred in the sample undergraduate Academic Writing Skills
(EnLA 1012) students at AAU this year. However, there is no question that this new approach
kick-started the critical thinking ability of the students who completed the course in a way that
far exceeded the previous two semesters (the pilot and main studies). This is evident from
objective evaluation of critical thinking ability with the Ennis-Weir test, the CCTDI, and the
instructor-developed AWST. Moreover, the results and approach from this research can readily be
applied to other areas at AAU. Thus, future research should also continue to explore the potential
impact this pedagogical approach has on different disciplines for professional development in the
country.
References
Abrami, P.C., Bernard, R. M., Borokhovski, E., Wade, A., Surkes, M. A., Tamim, R., & Zhang,
Dai. (2008). Instructional interventions affecting critical thinking skills and dispositions: A
stage 1 meta-analysis. Review of Educational Research, 78(4), 1102-1134.
Abrams, Z. I. (2005). Asynchronous CMC, Collaboration and the Development of Critical
Thinking in a Graduate Seminar in Applied Linguistics. Canadian Journal of Learning and
Technology. 31 (2).
Ackerman, J. M. (1993). The promise of writing to learn. In J.I. Quitadamo and J.M. Kurtz,
2007. Learning to improve : Using writing to increase critical thinking performance. Life
Sciences Education, 6(2), 140-154.
Adler, M. J. (1982). The paideia proposal: An educational manifesto. New York: Macmillan.
Alcon, E.G. (1993). High Cognitive Question in NNS group discussion: do they facilitate
comprehension and production of the foreign language? RELC Journal 24, 73-85.
Allwright, R. (1979). Language learning through communication practice. In C.J. Brumfit and K.
Johnson (Eds.), The Communicative Approach to Language Teaching (pp. 167-182). Oxford: Oxford
University Press.
Angelo, T. A., and Cross, K. P. (1993). Classroom assessment techniques: A handbook for
college Teachers(2nd ed.). San Francisco: Jossey Bass.
Applebee, A.N. (1984). Writing and reasoning. In J. I. Quitadamo and J.M. Kurtz, 2007. Learning
to improve: Using writing to increase critical thinking performance. Life Sciences Education,
6(2), 140-154.
Astleitner, H. (2002). Teaching critical thinking. Journal of Instructional Psychology, 4, 39-50.
Atkins,J. Gebremedhin, S. and Hailemichael, A.(1996) College English. Department of Foreign
Languages and Literature. Addis Ababa. AAU Press.
Ayaduray, J. and Jacobs, G.M. (1997) “Can Learner Strategy Instruction Succeed? The Case of
Higher Order Questions and Elaborated Responses. System Journal, Vol. 25, No. 4,
pp. 561-570.
Bailin, S. (2002).Critical thinking and science education. Science & Education, 11(4), 361-375.
Bailin, S., Case, R., Coombs, J. R., & Daniels, L.B. (1999). Conceptualizing critical thinking.
Journal of Curriculum Studies, 31(3), 285-302.
Baker, L. and Brown, A.L. (1984). Metacognitive skills and reading. In P.D. Pearson (Ed.),
Handbook of Reading Research (pp. 353-394). New York: Longman.
Baker, M., Rudd, R., and Pomeroy, C. (2000). Critical and Creative Thinking. Relationships
between Critical and Creative Thinking. Journal of Education. USA.
Bangert-Drowns, R.L., and Bankert, E. (1990). Meta-analysis of effects of explicit instruction for
critical thinking.Paper presented at the meeting of the American Educational Research
Association, Boston, MA. (ERIC Document Reproduction Service No. ED 328 614).
Baron, J. (2008). Thinking and Deciding (4thed). Cambridge: Cambridge University Press.
Baron, R.A. (2001). Psychology (5thed). Boston: Pearson Prentice Hall.
Baxter-Magolda, M.B. (1992). Knowing and reasoning in college: gender-related patterns in
students‟ intellectual development.San Francisco: Jossey-Bass.
Bean, J.C. Johnson, J. and Ramage, J.D. (2003). The Allyn and Bacon Guide to Writing. (3rded).
New York: Longman.
Belenky, M. F., Clinchy, B. M., Goldberger, N. R., and Tarule, J. M. (1986). Women‟s ways of
knowing: The development of self-voice and mind. New York: Basic Books.
Bereiter, C and Scardamalia, M. (1983). “Does Learning to Write have to be Difficult?” In
Freedman, a, I, Pringle and J. Yaldea (eds). Learning to Write First /Second
Language. London: Longman.
Bereiter, C and Scardamalia, M. (1985). Cognitive coping strategies and the problem of “inert
knowledge.” In S.Chipman, J.Segal, & R. Glaser (Eds.), Thinking and learning skills:
Research and open questions. Vol.2, pp.65-80. Hillsdale, NJ: Lawrence Erlbaum
Associates.
Bereiter, C and Scardamalia, M. (1987). The psychology of written composition. Hillsdale, NJ:
Lawrence Erlbaum Associates.
Bereiter, C and Scardamalia, M. (1989). Intentional learning as a goal of instruction. In L.
Resnick (Ed.), Knowing, learning, and instruction (pp.361-391). Hillsdale, NJ:
Lawrence Erlbaum Associates.
Bernasconi, L. (2008). The jewels of ERWC instruction. California English, 14(1), 16-19.
Retrieved from http://www.cateweb.org/California-english/index.html
Beyer, B. K. (1985). Teaching critical thinking: A direct approach. Social Education, 49(4), 297-303.
Beyer, B. (2008). How to teach thinking skills in social studies and history. Social Studies,
99(5), 196-201. Retrieved from http://www.socialstudies.org/
Biber, D. (1988). Variation across speech and writing. Cambridge: Cambridge University Press.
Biber, D., Johansson, S., Leech, G., Conrade, S., & Finegan, F. (1999). Longman grammar of
spoken and written English. Harlow, Essex: Pearson.
Bloom, B.S. (1956). Taxonomy of Educational Objectives: classification of Educational Goals.
David McKay. New York.
Bloom, B. (Ed.). (1956). Taxonomy of educational objectives. handbook I: Cognitive domain.
New York: David Mckay Company
Bolander, D. O. (1987). New Webster's dictionary of quotations and famous phrases. New York:
Berkeley Books.
Bok, D. (2006). Our underachieving colleges. Princeton, NJ: Princeton University.
Bond, M. H. (1991). Beyond the Chinese face. Insights from psychology, Oxford University
Press. Hong Kong.
Bonk, C. J., & Smith, G. S. (1998). Alternative instructional strategies for creative and critical
thinking in the accounting curriculum. Journal of Accounting Education, 16(2), 261-293.
Bransford, J. D., Sherwood, R. D., and Sturdevant, T. (1987). Teaching thinking and problem
solving. In J. Baron and R. Sternberg (Eds.), Teaching thinking skills: Theory and
Practice (162-181). New York: W. H. Freeman Co.
Brookfield, S. (2006). The skillful teacher: On technique, trust, and responsiveness in the
classroom. San Francisco: Jossy-Bass.
Brooks, J., & Brooks, M. (1993). The case for constructivist classrooms. Alexandria, VA:
Association of Supervision and Curriculum Development.
Brown, A.L. and Palinesar, A. S. (1982). Inducting Strategic Learning from text by means of
informed, self-control training. Topics in Learning and Learning Disabilities 2, 1-17.
Brown, H. D. (1993). Principles of Language Learning and Teaching (3rded). New York: Prentice
Hall Regents.
Brown, H.D. (1987). Principles of Language Learning and Teaching (2nd ed). Englewood Cliffs,
NJ: Prentice-Hall.
Browne, M.N., and Keeley, S.M. (2007). Asking the Right Questions: A Guide to Critical
Thinking (8th ed.). New Jersey. Pearson Prentice Hall.
Browne, M. N., and Keeley, S. M. (1988). Do college students know how to “think critically”
when they graduate? Research Serving Teaching. 1(9), 2-3. Center for Teaching and Learning:
Southeast Missouri State University, Cape Girardeau, MO. (ERIC Document
Reproduction Service No. ED 298 442)
Browne, M. N., and Keeley, S. M. (1994). Asking the right questions: A guide to critical
thinking(4th edition). Englewood Cliffs, NJ: Prentice-Hall.
Bruning, R. H., Schraw, G. J., Norby, M. M., and Ronning, R. R.(2004).Cognitive psychology
and instruction(4thed). Upper Saddle River, New Jersey: Pearson Prentice Hall.
Byrne, D. (1988). Teaching Writing Skills. Longman: Longman Group DK. Limited.
Cam, P. (1995). Thinking Together. Primary English Teaching Association, Sydhey.
Campbell, D., and Stanley, J. (1963). Experimental and quasi-experimental designs for research.
In N. I ., Gage(Ed.), Handbook of research on teaching(pp. 1-76) Chicago: Rand McNal.
Case, R. (2005). Moving critical thinking to the main stage. Education Canada, 45(2), 45-49.
Chaffee, J. McMahon, C. and Stout, B. (2002). Critical Thinking, Thoughtful Writing. A Rhetoric
with Readings (2nd ed). New York City University Press.
Champagne, A. and Kouba, V. (1999). Written product as performance measures. In J. Mintzes, J.
Wandersee and J. Novak(eds), New York: Academic Press.
Chang, Y., & Swales, J. (1999). Informal elements in English academic writing : Threats or
opportunities for advanced non-native speakers. In C, Candlin & K. Hyland (Eds.).
Writing text, process and practices (pp.145-167). London: Longman.
Chi, M. T. H., Feltovich, P. J., and Glaser, R. (1981). Categorization and representation of physics
knowledge by experts and novices. Cognitive Science 5. 121-152.
Clinchy, B. (1994). On critical thinking and connected knowing. In K.S. Walters(Ed.). Rethinking reason: New perspectives in critical thinking. Albany: State University of New York
Press.
Christison, M.A. (2002). Brain Based Research and Language Teaching. English Teaching
Forum. Vol. 40/2, pp. 2-7.
Cohen, A.D. and Macaro, E. (2007). Language Learners Strategies. Oxford: Oxford University
Press.
Cohen, J. (1960). A coefficient of agreement for nominal scales. Educational and Psychological
Measurement, 20(1), 37-46
Collins, P. (1991). The modals of obligation and necessity in Australian English. In K. Aijmer &
B. Altenberg (Eds.). English corpus linguistics (pp.145-165). New York:Longman.
Cottrell, S. (2005). Critical thinking. Developing effective analysis and argument. New York:
Palgrave Macmillan.
Creswell, J. (2008). Educational research: Planning, conducting, and evaluating quantitative and
qualitative research.New Jersey: Pearson: Merrill Prentice Hall.
Cresswell, J.W. (2009). Research Design: Qualitative, Quantitative, and Mixed Methods
Approaches (3rded) USA: Sage.
Cross, D. R., & Paris, S. G. (1988). Developmental and instructional analyses of children‟s
metacognition and reading comprehension. Journal of Educational Psychology,
80(2), 131-142.
Curriculum Planning Division (1991). English Language Syllabus (Secondary). Ministry of
Education Singapore.
Cuban, L. (1984). Policy and Research Dilemmas in the Teaching of Reasoning: Unplanned
Designs Review of Educational Research. 54(4), 655-681.
Dawit, M. (2008). Prospective and in-service Teachers‟ Thinking about Teaching and Learning:
A Metaphorical Analysis. Ethiopian Journal of Education, Volume xxviii(1), 49-72
Dewey, J. (1909). Moral Principles in Education. Boston: Houghton Mifflin Company.
Dewey, J. (1956). Experience and Education. New York: Macmillan.
DeRoma, V. M., Martin, K. M., and Kessler, M. L. (2003). The relationship between tolerance for
ambiguity and need for course structure. Journal of Instructional Psychology, 30, 104-109.
Derry, S., Levin, J. R., Schauble, L. (1995). Stimulating Statistical thinking through situated
simulations. Teaching of Psychology, 22, 51-57.
Dickerson, P. S. (2005). Nurturing critical thinkers. Journal of Continuing Education in Nursing,
36, 68-72.
Diller, K.C (1981) Individual Differences and Universals in Language Learning Aptitude,
Rowley, Mass: Newburry house.
Dillenbourg, P., Baker, M., Blaye, A., & O‟Malley, C. (1996). The evolution of research on
collaborative learning. In E. Spada & P. Reiman(Eds.), Learning in humans and
machine: Towards an interdisciplinary learning science (pp.189-211). Oxford, England; Elsevier.
Doughty, C. and Pica, T. (1986). Information gap tasks: do they facilitate second language
acquisition? TESOL Quarterly 20, 305-325.
Dudley-Evans, T., & St. John, M.J. (1998). Developments in English for specific purposes.
Cambridge: Cambridge University Press.
Ehrman, M.E., and Oxford R.L. (1995). Cognition Plus: Correlates of language learning success.
Modern language Journal 79, 67-89.
Elder, L., & Paul, R. (2009). Critical thinking: Strategies for improving student learning, Part III.
Journal of Developmental Education, 3, 40.
Ellis, R. (2001). Investigating form-focused instruction. In R. Ellis(Ed.). Form-focused instruction
and second language learning(Language Learning 51: Supplement 1)(pp.1-46). Ann
Arbor: University of Michigan/Blackwell.
Ennis, R.H. (2003). “Critical Thinking Assessment” in Fasko-Critical Thinking and Reasoning:
Current Research, Theory, and Practice.
Ennis, R. H. (1985). A logical basis for measuring critical thinking skills. Educational
Leadership, 43(2), 44-48.
Ennis, R. H. (1989). Critical thinking and subject specificity: Clarification and needed research.
Educational Researcher, 18(3), 4-10.
Ennis, R. H. (1987). A taxonomy of critical thinking dispositions and abilities. In J. Baron and
R. Sternberg (Eds.), Teaching thinking skills: Theory and practice (9-26). New York: W. H.
Freeman Co.
Ennis, R. H. (1996). Critical thinking. Upper Saddle River, NJ: Prentice Hall.
Ennis, R. H., and Weir, E. (1985). The Ennis-Weir critical thinking essay test. Pacific Grove, CA:
Midwest.
Facione, P. A. (1984). Toward a theory of critical thinking. Liberal Education. 70(3), 253-261.
Facione, P. A. (1986). Testing college-level critical thinking. Liberal Education.72(3), 221-231.
Facione, P. A. (1990). Critical thinking: A statement of expert consensus for purposes of
educational assessment and instruction. Research findings and recommendations. American
Philosophical Association, Newark, DE. (ERIC Document Reproduction Service No. ED 315423)
Facione, P. A. (1992). The California Critical Thinking Skills Test. Millbrae, CA: California
Academic Press.
Facione, P. A. (2000). The disposition toward critical thinking: Its character, measurement, and
relation to critical thinking skill. Informal Logic, 20(1), 61-84.
Facione, P. A., and Facione, N. C.(1992). The California Critical Thinking Dispositions
Inventory. Millbrae, CA: California Academic Press.
Foundation for Critical Thinking. (1996). Critical thinking workshop handbook. Santa Rosa, CA:
Author.
Facione, P. A. (2007). Critical Thinking: What it is and why it Counts. From
http://web.en.wikipedia.org/wiki/critical-thinking. Retrieved April, 18, 2011.
Fairclough, N. (2001). Language and power. Pearson Publishers.
Fisher, A. (2001). Critical Thinking. An Introduction. Cambridge: Cambridge University Press.
Fisher. A. and Scriven, M. (1997). Critical Thinking: Its Definition and Assessment, Center for
Research in Critical Thinking. UK: Edge Press (US).
Flavell, J. H. (1979). Metacognition and cognitive monitoring: A new area of cognitive-developmental inquiry. American Psychologist, 34(10), 906-911.
Forester, N. and Steadman, J.M. (1952). Thinking. Cambridge: Cambridge University Press.
Foundation for Critical Thinking (1996). Retrieved on June 7, 2013, from
http://www.criticalthinking.org/resources/articles/content-thinking.shtml
Freire, P. (1973). Education for critical consciousness. New York: The Seabury Press.
Gass, S. M. and Varonis, E.M. (1985). Task Variation and Native, non-native negotiation of
meaning. In input in second language Acquisition, eds S.M. Glass and C.G. Malden,
p.p. 149-162. Newburry House: Rowley, MA.
Gass, S.M. and Varonis, E.M. (1985). Task Variation and native-non-native negotiation of
meaning. In input in second Language Acquisition, eds S.M. Gass and C. G. Madden,
pp. 149-162. Newburry House, Rowley, MA.
Gay, L., Mills, G., & Airasian, P. (2006). Educational research: Competencies for analysis and
applications. New Jersey: Pearson Education, Inc.
Geremew Lemu (1999). A Study on the Academic Writing Requirements. Four Departments in
Focus in Addis Ababa University (Unpublished Ph.D Thesis) Addis Ababa
University.
Glaser, E. M (1941). An Experiment in the Development of Critical Thinking. New York, Bureau
of Publications, Teachers College, Columbia University.
Glaser, R. (1992). Expert knowledge and processes of thinking. In D. F. Halpern (Ed.), Enhancing
thinking skills in the sciences and mathematics(pp.63-76). Hillsdale, NJ: Erlbaum.
Hale, G., Taylor, C., Bridgeman, B., Carson., J., Kroll, B., & Kantor, R. (1996). A study of writing
tasks assigned in academic degree programs (Research Report 54). Princeton, NJ:
Educational Testing Service.
Hallgren, K. A. (2012). Computing Inter-Rater Reliability for Observational Data: An Overview
and Tutorial. Tutorials in Quantitative Methods for Psychology, Vol. 8(1), p. 23-34.
Halonen, J. S. (1995). Demystifying critical thinking. Teaching of Psychology, Vol. 22, no. 1, pp.
75- 81.
Halpern, D. F. (1993). Assessing the effectiveness of critical thinking instruction. The Journal of
General Education. 42(4), 238-254.
Halpern, D. F. (1996). Thought and knowledge : An introduction to critical thinking.(3rd ed.).
Mahwah, NJ: Erlbaum.
Halpern, D. F. (1998). Teaching critical thinking for transfer across domains. American
psychologist. 53(4), 449-455.
Halpern, D. F. (1999). Teaching for Critical Thinking: Helping college students develop the skills
and dispositions of a critical thinker. New directions for teaching and learning. 80(1), 69-73.
Halpern, D. F. (2001). Assessing the effectiveness of critical thinking instruction. The Journal of
General Education, 50(4), 270-286.
Halpern, D. F. (2007). The nature and nurture of critical thinking. Critical thinking in psychology
(pp.1-14). New York: Cambridge University.
Hand, B. and Prain, V. (2002). Teachers implementing writing-to- learn strategies in junior
secondary science: A case study. Science Education, 86/6, 737-755.
Harrigan, A. and Vincenti, V. (2004). Developing higher-order thinking through an intercultural
assignment. A scholarship of teaching inquiry project. College Teaching, 52, 113-120.
Hayes, K., and Devitt, A. (2008). Classroom discussions with student-led feedback: a useful
activity to enhance development of critical thinking skills. Journal of Food Science Education,
7(4), 65-68. Retrieved April 10, 2013 from http://www.ift.org/knowledge-center/read-iftpublications/journal-of-food.science-education.aspx
Hedge, T. (1988). Writing. Oxford: Oxford University Press.
Helms, J. E. (Ed.). (1990). Black and white racial identity: Theory, research, and practice.
Westport, CT: Greenwood Press.
Johnson, R. H. (1996). The Rise of Informal Logic. Newport News, VA: Vale Press.
Healy, J. (1990). Endangered minds why our children don‟t think. New York: Simon & Schuster.
Hennessey, M. G. (1999). Probing the dimensions of metacognition: Implications for conceptual
change teaching-learning. Paper presented at the annual meeting of the National
Association for Research in Science Teaching. Boston, MA.
Heyman, G. D. (2008). Children‟s critical thinking when learning from others. Current Directions
in Psychological Science, 17(5), 344-347.
Hillocks, G., Jr. (1986). Research on Written Composition: New Directions for teaching. USA:
ERIC.
Hinkel, E . (2004). Teaching Academic ESL Writing: Practical Techniques in Vocabulary and
Grammar. Mahwah. NJ: Lawrence Erlbaum Associates.
Horrowitz, D. (1986). What professors actually require: Academic tasks for the ESL classroom.
TESOL Quarterly, 20(4), 445-462.
Hoye, L. (1997). Adverbs and modality in English. London: Longman.
Hummel, J. E., and Holyoak, K. J. (1997). Distributed representations of structure: A theory of
analogical access and mapping. Psychological Reviews, 104, 427-466.
Hunston, S., Francis, G. (2000). Pattern grammar. Amsterdam: John Benjamins.
Hyland, K. (1998). Hedging in scientific research articles. Amsterdam/Philadelphia: John
Benjamin Publishing Company.
Italo Beriso (1999) “A Comparison of the Effectiveness of Teacher Versus Peer Feedback on
Addis Ababa University Students Writing Revisions” (Unpublished Ph.D. Thesis).
Addis Ababa University.
Jenkins, R. (2009). Our students need more practice in actual thinking. The Chronicle of Higher
Education, 55(29) B18.
Jensen, E. (2005). Teaching with the Brain in mind (2nd Ed.). Alexandria, VA: Association of
Supervision and Curriculum Development.
Johns, A.M, (1990). “L1 Composition Theories: Implications for Development Theories of L2”
Second Language Writing. Cambridge: CUP.
_________.(1990).”Composition Theories : Implications for Developing Theories of Second
Language” Second Language Writing .Cambridge: CUP
Johns, A. (1981). Necessary English: A faculty survey. TESOL Quarterly, 15(1), 51-57.
Johns, A. (1997). Text, role, and context: Developing academic literacies. Cambridge: CUP.
Jordan, R. (1997). English for academic purposes. Cambridge:CUP.
Jungst, S. E., Thompson, J.R., and Atchison, G. J. (2003). Academic controversy: Fostering
constructive conflict in natural resources education. Journal of Natural Resources and
Life Sciences Education, 32, 36-42.
Keeley, S. M., and Browne, M. N. (1986). How college seniors operationalize critical thinking
behavior. College Student Journal, 20, 389-95.
Keeley, S. M., Browne, M. N., and Kreutzer, J. S. (1982). A comparison of freshmen and seniors
on general and specific essay tests of critical thinking. Research in Higher Education. 17(2),
139 - 154.
Kelly-Riley, D., Brown, G., Condon, B., and Law, R. (2007).Washington State University critical
thinking project. Retrieved April 27, 2012 from http://wsuctproject.ctlt.wsu.edu/ctm.htm.
Kennedy, M., Fisher, M. B., & Ennis, R. H. (1991). Critical thinking: Literature review and
needed research. In L. Idol & B.F. Jones (Eds.). Educational values and cognitive instruction:
Implication for reform(pp.11-40). Hillsdale, New Jersey: Lawrence Erlbaum &
Associates.
Kennedy, G. (1991). Between and through: The company they keep and the functions they serve.
In K. Aljmer and B. Altenberg(Eds.). English corpus linguistics (pp.95-110). New York:
Longman.
Klaczynski, P. A .(2001). Framing effects on adolescent task representations, analytic and
heuristic processing, and decision making: Implications for the normative/descriptive gap. Journal
of Applied Development Psychology, 22, 289-309.
King, A. (1990). Enhancing peer interaction and learning in the classroom through reciprocal
questioning. American Educational Research Journal. 27(4), 664-687.
King, A. (1994). Inquiry as a tool in critical thinking. In D. F. Halpern (Ed.), Changing college
classrooms: New teaching and learning strategies for an increasingly complex world (13-38). San
Francisco: Jossey-Bass.
King, P., and Kitchener, K. (1994). Developing reflective judgment: Understanding and
promoting intellectual growth and critical thinking in adolescents and adults. San Francisco,
CA: Jossey-Bass.
King, A. (1990). Enhancing Peer Interaction and learning in the classroom through reciprocal
questioning. American Educational Research Journal 27, 664-687.
Kjellmer, G. (1991). A mint phrases. In K. Aijmer & B. Altenberg(Eds.). English corpus
linguistics (pp.111-127). New York: Longman.
Kosonen, P., and Winne, P. H .(1995). Effects of teaching statistical laws on reasoning about
everyday problems. Journal of Educational Psychology, 87, 33-46.
Krashen, S.D. (1985). The Input Hypothesis: Issues and Implications. London: Longman.
Kuhn, D. (1992). Thinking as argument. Harvard Educational Review, 62(2), 155-178.
Kuhn, D. (1999). A developmental model of critical thinking. Educational Researcher, 28(2), 16-26.
Kuhn, D., & Dean, D. (2004). A bridge between cognitive psychology and educational practice.
Theory into Practice, 43(4), 268-273.
Kurfiss, J. G. (1988). Critical thinking: Theory, Research, practice, and possibilities. ASHE-ERIC
Higher Education Report No.2. Washington DC: George Washington University.
Kabilan, K.M. (2000). Creative and critical thinking in language classroom. Internet TESL
Journal, 6/6. http://iteslj.org/Techniques/Kabilan-CriticalThinking.html
Kelly, G. J. and Chen, C. (1999). The sound of music: constructing science as sociocultural
practices through oral and written discourse. Journal of Research in Science Teaching, 36/8, 883915.
Keys, C. W. (1999). Revitalizing instruction in scientific genres: connecting knowledge
production with writing to learn in science. Science Education, 83/2, 115-130.
Klein, P. D. (2004). Constructing scientific explanations through writing. Life Science
Education, 32/3, 191-231.
Kurfiss J. G. and ASHE. (1988). Critical thinking: theory, research, practice, and possibilities.
Washington, DC: George Washington University.
Langer, J. A. and Applebee, A.N. (1987). How writing shapes thinking: a study of teaching and
learning. NCTE Research Report no. 22. Urbana, IL: National Council of Teachers of
English.
Lehman, D. R., and Nisbett, R. E. (1990). A longitudinal study of the effects of undergraduate
training on reasoning. Developmental Psychology, 26, 431-442.
Leki, I. (1999). Academic writing: techniques and tasks (3rd ed.). New York: CUP.
Leki, I., & Carson, J. (1997). “Completely Different Worlds”: EAP and the writing experiences of
ESL students in university courses. TESOL Quarterly, 31(1), 39-70.
Limbach, B., & Waugh, W., & Duron, R. (2006). Critical thinking framework for any Discipline.
International Journal of Teaching and Learning in Higher Education, 17(2), 160-166.
Lipman, M. (2003). Thinking in education. West Nyack, NY, USA: Cambridge University Press.
Lewis, A., and Smith, D. (1993). Defining higher order thinking. Theory into Practice, 32(3), 131-137.
Lipman, M. (1988). Critical thinking: what can it be? Analytic Teaching. 8, 5-12.
Lipman, M. (1988). Critical thinking – What can it be? Educational Leadership, 46(1), 38-43.
Long, M.H. (1983). Linguistics and Conversational adjustments to non-native speakers. Studies in
second language acquisition 5, 177-193.
Long, M.H. and Porter, P.A. (1985). Group Work, interlanguage talk and second language
acquisition. TESOL Quarterly 19, 207-228.
Lunsford, A., & Ruszkiewicz, J. (2001). Everything's an argument. Boston: Bedford/St. Martin's.
Maimon, E.P. Peritz, J.H. and Yancey, K.B. (2009). A Writer‟s Resource: A Handbook for
Writing and Research (2nded). USA: McGraw Hill.
Martinez, M.E. (2006). What is metacognition? Phi Delta Kappan, 87(9), 696-699.
Marzano, R. J. (1991). Fostering thinking across the curriculum through knowledge restructuring.
Journal of Reading, 34/7, 518-525.
Marzano, R.J. (2007). The art and science of teaching. Alexandria, VA: Association of
Supervision and Curriculum Development.
Marzano, R. J., Brandt, R. S., Hughs, C. S., Jones, B.F., Presseisen, B.Z., Rankin, S. C., Suhor, C.
(1988). Dimensions of thinking : a framework for curriculum and instruction. Alexandria,
VA: Association for supervision and Curriculum Development.
Mason, M. (2008). Educational Philosophy and Theory : Critical Thinking and Learning. USA.
Blackwell Publishing.
Matheny, G. (2009). The knowledge vs. skills debate: A false dichotomy? Leadership, 39-40.
McDonough, S. (1985). "Academic Writing Practice". ELT Journal, Vol. 39, No. 4, pp. 244-247.
McNamara, T. (2000). Language Testing. Oxford: Oxford University Press.
McMillan, J. H. (1987). Enhancing college students‟ critical thinking: A review of studies.
Research in Higher Education. 26(1), 3-29.
McPeck, J. E. (1981). Critical thinking and education. New York: St. Martin‟s Press.
McPeck, J. E. (1990). Critical thinking and subject specificity: A reply to Ennis. Educational
Researcher, 19(4), 10-12.
Mendelmen, L. (2007). Critical thinking and reading. Journal of Adolescent and Adult Literacy,
51(4), 300-304. Retrieved April, 10, 2013 from
http://www.reading.org/General/Publications/Journals/jaal.aspx
Mertler, C. & Charles, C. (2008). Introduction to Educational Research. Boston: Pearson
Education.
Mesher, D. (2005). Mission: Critical. A project of the Institute for Teaching and Learning.
Retrieved June 17, 2013 from http://www2.sjsu.edu/depts/itl/.
Miller, C. D., Finley, J., and McKinley, D. L. (1990). Learning approaches and motives: Male and
female differences and
implications for learning assistances programs. Journal of
College Student Development. 31,147-154.
Mirman, J. and Tishman, S. (1988). Infusing thinking through connections. Educational
Leadership, 45/7, 64-65.
Ministry of Education. (1994). New Education and Training Policy. Addis Ababa.
Murphy, L. L., Conoley, J. C., and Impara, J. C. (Eds.). (1994). Tests in print iv. Lincoln, NE:
University of Nebraska Press.
National Education Goals Panel. (1991). The national education goals report: Building a nation of
learners. Washington, DC: U.S. Government Printing Office .
Naiman, N., Frohlick, M., Stern, H. and Todesco, A. (1978). The Good Language Learner.
Research in Education in System Journal vol. 23(3), pp. 359-386. USA: University of
Alabama.
Nelson, C. E. (1994). Critical thinking and collaborative learning. New Directions for Teaching
and Learning, 1994(59), 45-58.
Newmann, F. M. (1990a). Higher order thinking in teaching social studies: a rationale for the
assessment of classroom thoughtfulness. Journal of Curriculum Studies, 22(1), 41-45.
Newmann, F. M. (1990b). Qualities of thoughtful social studies classes: An empirical profile.
Journal of Curriculum Studies. 22(3), 253-275.
Newmann, F. M. (1991). Higher order thinking in the teaching of social studies: Connections
between theory and practice. In J. F. Voss, D. N. Perkins, and J. W. Segal(Eds.).
Informal reasoning and education (pp. 381-400). Hillsdale, NJ: Erlbaum.
Nisbett, R. E. (Ed.), (1993). Rules for reasoning. Hillsdale. NJ: Erlbaum.
Norris, S. P. (1991). Assessment: Using verbal reports of thinking to improve multiple-choice test
validity. In J. F. Voss, D. N. Perkins, and J. W. Segal (Eds.). Informal reasoning and
education(451- 472). Hillsdale, N. J. : Erlbaum.
Norris, S. P., and Ennis, R. H. (1989). Evaluating critical thinking. Pacific Grove, CA: Midwest
Publications.
Norris, J., & Ortega, L. (2000). Effectiveness of L2 instruction: A research synthesis and
quantitative meta-analysis. Language Learning, 50(3), 417-528.
Numela, R. and Rosengren, T. (1986). What‟s Happening in Student‟s Brains may redefine
teaching. Educational Leadership. English Teaching Forum. Vol. 40/2, pp. 2-7.
O‟Malley, J.M. and Chamot, A.U. (1990). Learning Strategies in Second Language Acquisition.
Cambridge: Cambridge University Press.
Ostler, S. (1980). A survey of needs of advanced ESL. TESOL Quarterly, 14(4), 489-502.
Oxford, R.L. (1990). Language Learning Strategies: What Every Teacher Should Know. New
York: Newburgy House /Harper and Row. Now Boston: Heinle and Heinle.
Oxford, R.L. (1992/1993). Language Learning Strategies in a Nutshell. TESOL Journal, 2(2), pp. 18-22.
Partington, A. (1996). Patterns and meanings. Amsterdam: Jon Benjamins.
Patry, J. L. (1996). „Teaching critical thinking.‟ Journal of Instructional Psychology, 4/4, 58-94.
Paul, R. W. (1992). Critical thinking: What, Why, and How? New Directions for Community
Colleges, 1992(77), 3-24.
Paul, R. and Elder, L. (1997). The elements and standards of reasoning: Helping students assess
their thinking.
Paul, R., and Elder, L. (2009a). Close reading, substantive writing, and critical thinking:
foundational skills essential to the educated mind. Gifted Education International, 25(3),
286-295. Retrieved June 23, 2014 from http://www.gifted-children.com.au/gifted_and_talented_international
Paul, R., and Elder, L. (2009b). Critical thinking: ethical reasoning and fair- minded thinking, part
I. Journal of Developmental Education, 33(1), 38-39. Retrieved June, 20, 2014 from
http://www.ncde.appstate.edu/publications/jde/
Paul, R., and Elder, L. (2008b). Critical thinking: strategies for improving student learning, part
II. Journal of Developmental Education, 32(2), 34-35. Retrieved September, 21, 2011 from
http://www.ncde.appstate.edu/publications/jde/
Paul R., and Elder, L. (2006). Critical Thinking Tools for Taking Charge of your Learning and
your Life. New Jersey: Prentice Hall Publishing.
Paul, R. W., & Elder, L. (2006). Critical thinking: The nature of critical and creative thought.
Journal of Developmental Education, 30(2), 34-35.
Paul, R. and Elder, L. (2002). Critical Thinking: Tools for taking Charge of your Professional and
Personal Life. Published by Financial Times Prentice Hall.
Paul, R. and Elder, L. (2008). The Miniature Guide to Critical Thinking Concepts and Tools. Dillon
Beach: Foundation for Critical Thinking Press. From http://web.en.wikipedia.org/wiki/critical-thinking. Retrieved April 18, 2011.
Paul, R. W., and Elder, L. (2007). Defining Critical Thinking.
http://www.criticalthinking.org/aboutCT/define-critical-thinking.cfm. Retrieved March 10, 2011.
Paul, R. W. (1993). Critical thinking: What every person needs to survive in a rapidly changing
world (J. Willsen and A, J. A. Binker, Eds.). Santa Rosa, CA: Foundation for Critical
Thinking.
Paul, R., and Elder, L. (1997). Critical thinking: Implications for instruction of the stage theory,
Journal of Developmental Education. 20(3), 34-35.
Paul, R. W., Elder, L., and Bartell, T. (1997). California teacher preparation for instruction in
critical
thinking: Research findings and policy recommendations. Sacramento, CA: California
Commission of Teacher Credentialing.
Paul, R. W., and Nosich, G. M. (1992). A model for the national assessment of higher order
thinking. Santa Rosa, CA: Foundation for Critical Thinking. (ERIC Document
Reproduction Service No, ED 353296).
Paul, R., and Fisher, A. and Nosich, G. (1993). Workshop on critical thinking strategies.
Foundation for Critical Thinking, Sonoma State University, CA.
Perkins, D. N. (1989). Reasoning as it is and could be: An empirical perspective. In D. M.
Topping, D. S. Cromwell, and N. Kobayaski (Eds.), Thinking across cultures: Third
international conference on thinking (pp. 175-194). Hillsdale, NJ: Erlbaum.
Perkins, D. N., Farady, M., and Bushey, B. (1991). Everyday reasoning and the roots of
intelligence. In J. F. Voss, D. N. Perkins, and J. W. Segal (Eds.), Informal reasoning and
education (pp. 83-105). Hillsdale, NJ: Erlbaum.
Perkins, D. N., Jay, E., and Tishman, S. (1993). Beyond abilities: A dispositional theory of
thinking. The Merrill-Palmer Quarterly 39(1), 1-21
Perkins, D. N., and Grotzer, T. A. (1997). Teaching intelligence. American Psychologist, 52, 1125-1133.
Perry, W. (1970). Forms of intellectual and ethical development in the college years: A scheme.
New York: Holt, Rinehart.
Petri, G. (2002). Teaching critical thinking. Journal of Instructional Psychology, 16/4, 10-12.
Pica, T., Kanagy, R., and Falodun, J. (1993). Choosing and using communication tasks for second
language instruction. Handbook of Second Language Acquisition, Blackwell Publishing
Ltd, pp. 765-766.
Pica, T., Young, R. and Doughty, C. (1987). The Impact of Interaction on Communication. TESOL
Quarterly 21, 737-758.
Price, B. and Harrington, A. (2010). Thinking and Writing for Nursing Students. Great Britain
Library: Learning Matters Ltd.
Powell, J.K. and Tassoni J. (2009). College Composition. USA: Miami University Press.
Pullen, A. (1992). Improving Critical Thinking Skills of English Students at Marlboro High
School through Literature and Composition Instruction. Unpublished PhD thesis, Nova
University.
Quellmalz, E. S. (1987). In J. B. Baron and R. J. Sternberg (Eds.), Teaching thinking skills:
Theory and Practice (pp. 86-105). New York: W. H. Freeman.
Raghunathan, A. (2001). How to Improve Your Thinking Skills. From
http://www.psychology4all.com. Retrieved March 10, 2011.
Raimes, A. (1983). Techniques in Teaching Writing. Oxford: Oxford University Press.
Renouf, A., & Sinclair, J. (1991). Collocational frameworks in English. In K. Aijmer & B.
Altenberg (Eds.), English corpus linguistics (pp. 128-143). New York: Longman.
Redfield, D.L. and E.W. Rousseau. (1981). A Meta-analysis of Experimental Research on
Teacher Questioning Behavior. Review of Educational Research. English Teaching
Forum. Vol. 40 No 2, pp. 2-7.
Reid, J. (1993). Teaching ESL Writing. USA: Regents Prentice Hall.
Rubin, J. (1975). What the 'good language learner' can teach us. TESOL Quarterly 9, 41-51.
Reed, J. H. (1998). Effect of a Model for Critical Thinking on Student Achievement in Primary
Source Document Analysis and Interpretation, Argumentative Reasoning, Critical
Thinking Dispositions, and History Content in a Community College History Course.
Resnick, L. B. (1987). Education and learning to think. Washington DC: National Academy
Press.
Richards, J. (2002). Accuracy and fluency revisited. In E. Hinkel & S. Fotos (Eds.), New
perspectives on grammar teaching in second language classrooms (pp. 35-60). Mahwah,
NJ: Lawrence Erlbaum Associates.
Rosenfeld, M., Leung, S., & Oltman, P. (2001). The reading, writing, speaking, and listening
tasks important for academic success at undergraduate and graduate levels (MS 21). Princeton,
NJ: ETS.
Rivard, L. P. (1994). A review of writing to learn in science: Implications for practice and
research. Journal of Research in Science Teaching, 31/9, 969-983.
Sadker, M. & Sadker, D. (2003). Teachers, schools, and society. 6th Ed. New York: McGraw-Hill.
Santos, T. (1988). Professors' reactions to the academic writing of nonnative-speaking students.
TESOL Quarterly, 22, 69-90.
Scanlan, J.S. (2006). The effects of Richard Paul's universal elements and standards of reasoning
on twelfth grade composition. Unpublished M.A. thesis, School of Education, Alliant
International University, US.
Schraw, G., Crippen, K. J., & Hartley, K. (2006). Promoting self-regulation in science education:
Metacognition as part of a broader perspective on learning. Research in Science
Education, 36(1-2), 111-139.
Scriven, M., and Paul, R.W. (1987). Critical Thinking as Defined by the National Council for
Excellence in Critical Thinking.
Sears, A., and Parsons, J. (1991). Toward critical thinking as an ethic. Theory and Research in
Social Education, 19, 45- 68.
Seime, Kebede. (2001). Students' Perceptions of the Conditions that Reduce or Enhance the
Pedagogic Value of Instructors' and Students' Questions, Volume xxi(2), 1-19.
Siddiqui, S. (2007). Rethinking education in Pakistan. Paramount publishing Enterprise, Karachi.
Siegel, H. (1988). Educating reason: Rationality, critical thinking, and Education. New York:
Routledge.
Silva, T. (1990). Second language composition instruction: developments, issues and directions
in EFL. Second Language Writing. Cambridge: CUP.
Sims, M. (2009). The Write Stuff: Thinking Through Essays. USA: Pearson Prentice Hall.
Skinner, C. (1959). Educational Psychology (4th ed.). USA: Prentice Hall.
Slavin, R. (2009). Educational psychology: theory and practice (9th ed.). Upper Saddle River, NJ:
Pearson.
Slavin, R. (2007). Educational research in an age of accountability. Boston: Pearson Education.
Stern, H.H (1983). Fundamentals of Language Teaching. Oxford: Oxford University Press.
Sternberg, R. J. (1987). Teaching intelligence: The application of cognitive psychology to the
improvement of intellectual skills. In J. B. Baron and R. J. Sternberg (Eds.). Teaching
thinking skills: Theory and practice(pp. 182- 218). New York: W. H. Freeman.
Sternberg, R. J. (1986). Critical thinking: Its nature, measurement, and improvement. National
Institute of Education. Retrieved from http://eric.ed.gov/PDFS/ED272882.pdf.
Stevick, E.W. (1976). Memory, Meaning, and Method: some Psychological Perspectives on
Language Learning. Rowley, MA: Newbury House.
Sumner, W. (1940). Folkways: A Study of the Sociological Importance of Usages, Manners,
Customs, Mores and Morals. New York: Ginn.
Suter, W. (2006). Introduction to educational research: A critical thinking approach. Thousand
Oaks, CA: Sage Publications, Inc.
Swales, J. (1990a). Genre analysis. Cambridge: CUP
Swales, J., & Feak, C. (1994). Academic writing for graduate students. Ann Arbor, MI:
University of Michigan Press.
Swartz, R. J. (1991). Structured teaching for critical thinking and reasoning in standard subject
area instruction. In J. F. Voss, D. N. Perkins, and J. W. Segal (Eds.). Informal reasoning and
education (415-450). Hillsdale, N. J.: Erlbaum.
Taube, K. T. (1997). Critical thinking ability and disposition as factors of performance on a
written critical thinking test. The Journal of General Education, 46(2), 129-164.
Thayer-Bacon, B. J. (2000). Transforming critical thinking: Thinking constructively. New York,
NY: Teachers College Press.
Their, A., Oldakowski, T., & Sloan, D. (2010). Using blogs to teach strategies for inquiry into the
construction of lived and text worlds. Journal of Media Literacy Education, 2(1), 23-36.
Retrieved from http://jmle.org/index.php/JMLE/index
Tishman, S., Perkins, D., and Jay, E. (1995). The thinking classroom: Learning and teaching in a
culture of thinking. Boston: Allyn and Bacon.
Thadphoothon, J. (2002). Enhancing critical thinking in language learning through computer-mediated
collaborative learning: A preliminary investigation. Proceedings of the
International Conference on Computers in Education.
Tsui, L. (2002). Fostering critical thinking through effective pedagogy: Evidence from four
institutional case studies. Journal of Higher Education, 73, 740-763.
Underbakke, M., Borg, J. M., and Peterson, D. (1993). Researching and developing the
knowledge base for teaching higher order thinking. Theory into Practice, 32(3), 138-146.
Valiga, T. M. (2003). Guest editorial. Teaching thinking: Is it worth the effort? Journal of Nursing
Education, 42, 479-480
Van Gelder, T. (2005). Teaching critical thinking. Some lessons from cognitive science. College
Teaching, 53(1), 41-48.
Vann, R., Lorenz, F., & Meyer, D. (1991). Error gravity: Response to errors in the written
discourse of nonnative speakers of English. In L. Hamp-Lyons(ed.). Assessing second language
writing (pp.181-196). Norwood, NJ: Ablex.
Vann, R., Meyer, D., & Lorenz, F. (1984). Error gravity: A study of faculty opinion of ESL
errors. TESOL Quarterly, 18(3), 427-440.
Vygotsky, L.S. (1987). The collected Works of L.S. Vygotsky, volume 1. Problems of General
Psychology. Including the volume Thinking and Speech. R.W. Reiber and A.S. Carton
(eds). New York: Plenum Press.
Wade, C. (1995). Using writing to develop and assess critical thinking. Teaching of Psychology,
22(1), 24-28.
Wal, A. V. D. (1999). Critical thinking as a core skill: issues and discussion paper. Paper
presented at HERDSA Annual International Conference, Melbourne.
Wallace, C. (2005). Critical reading in language education. Palgrave Macmillan.
Warren, W.J., Memory, D. M., Bolinger, K. (2004). Improving critical thinking skills in the
United States survey course: An activity for teaching the Vietnam War. History Teacher,
37, 193- 209.
Weinstein, M. (1995). Critical thinking : Expanding the paradigm. Inquiry Critical Thinking
across the Disciplines. 15(1), 23-39.
Wellington, C.B. and Wellington, J. (1960). Teaching for Critical Thinking: With Emphasis on
Secondary Education. New York: McGraw-Hill Book Company, Inc.
Wenden, A. (1991). Learner Strategies for Learner autonomy. Prentice – Hall International.
London.
Wenden, A. (1997). Designing Learner Training: the Curricular Questions. In Language
Classrooms of Tomorrow: Issues and Responses, ed G.M. Jacobs, pp. 238-262.
SEAMEO Regional Language Centre, Singapore.
Werner, P. H. (1991). The Ennis-Weir Critical Thinking Essay Test: An instrument for testing
and teaching. Journal of Reading. 34(1), 494-495.
Wilks, S. (1995). Critical and Creative Thinking: Strategies for Classroom Inquiry. Eleanor
Curtain, New South Wales, Australia, Armadale.
Williams, M. and Burdon, K.L. (1997). Psychology for Second Language Learners: A Social
Constructivist Approach. UK: Cambridge University Press.
Willingham, D. T. (2007). Critical thinking: Why is it so hard to teach? American Educator, 8-19.
Worrell, J. A., and Profetto-McGrath, J. (2007). Critical thinking as an outcome of context-based
learning among post-RN students: A literature review. Nurse Education Today, 27, 420-426.
Zamel, V. (1983). The composing processes of advanced ESL students: six case studies. TESOL
Quarterly, 17(1), 165-187.
Appendices
Appendix A
Critical Thinking and Academic Essay Writing Skills Packet
This academic writing course emphasizes learning to think critically about
academic writing in all course work including assignments, class discussions, exams and essays.
As you learn elements and standards of reasoning, and core critical thinking skills, it is
appropriate for you to use them in every aspect of this course as well as in other academic and
everyday situations requiring good reasoning. If you put serious effort into learning and practicing
these aspects of critical thinking, you will improve in your abilities and dispositions (attitudes)
toward thinking critically about readings, textbooks, essays, and exams, and you will become a
better critical thinker in every aspect of life.
This packet contains a set of Critical Thinking Instructional Support Materials intended to
assist you in building critical thinking skills through practice. The packet includes at least the
following materials:
1. Definitions of critical thinking:
 General critical thinking
 Core critical thinking skills (or mental skills/aspects), and a chart showing the core
critical thinking skills,
2. A chart showing the elements of reasoning and universal intellectual standards,
3. Definitions of the elements of reasoning,
4. The elements of reasoning in reading and writing and questions to ask,
5. "Helping Students Assess Their Thinking" points to guide your reasoning and to evaluate the
thinking of others,
6. Explanations of universal intellectual standards through questions you can ask yourself about
your own thinking or that of others,
7. A chart showing the relationship between elements, standards, traits (or dispositions), and
critical thinking abilities/skills,
8. A description of the intellectual traits or dispositions important for a critical thinker,
9. A chart showing how people are positively or negatively disposed toward using the dispositional
aspects of critical thinking,
10. Some common reasoning fallacies.
How to Use This Packet
Refer to the chart on elements and standards (p. 226/236) often as you assess the reasoning of
others (e.g., source readings), or your own reasoning (e.g., assignments and essays). Use the
explanations of elements (p.227) and standards (p.231) as often as needed to make sure you
understand the various aspects of reasoning. As the elements and standards become more familiar
to you, begin to examine how your attitudes compare to ideal intellectual traits (p.233) and check
to see if you are developing the abilities you need to be a good critical thinker. Be sure to use
strategies by academic writers (pp. 219-222) to analyze and to critique every academically written
document you read. The handout on fallacies (p. 235) explains some common reasoning errors to
look out for in arguments made by others and to avoid in your own reasoning.
Use your developing critical thinking abilities as often as possible in academic writing class, in
other course work and in everyday decision making and evaluations of relationships.
Appendix A (Continued)
Selected Definitions of Critical Thinking (General):
You might think of Critical Thinking as:
 Thinking about your thinking while you are thinking in order to improve your thinking
(Richard Paul, 1993).
More formally, Critical Thinking is:
 Reasonable, reflective thinking that is focused on deciding what to believe or do
(Robert Ennis, 1985, Retired Professor of Philosophy of Education at the University of
Illinois and co-author of the Cornell Critical Thinking Test).
 The ability and disposition to improve one's thinking by systematically subjecting it to
intellectual self-assessment (Richard Paul, Director of the Center for Critical Thinking
and Moral Critique at Sonoma State University, CA, and author of Critical Thinking:
What Every Person Needs to Survive in a Rapidly Changing World, 1993).
 A rational response to questions that cannot be answered definitively and for which all
the relevant information may not be available. It is defined here as an investigation
whose purpose is to explore a situation, phenomenon, question, or information and that
can therefore be convincingly justified (Joanne Kurfiss, Developmental Psychologist
and teaching consultant at the University of Delaware, in Critical Thinking: Theory,
Research, Practice and Possibilities, 1988).
 Thinking that is purposeful, reasoned, and goal directed. It is the kind of thinking
involved in solving problems, formulating inferences, calculating likelihoods, and
making decisions (Diane Halpern, Psychologist at California State University, in
Thought and Knowledge: An Introduction to Critical Thinking, 1996).
 The mental processes, strategies, and representations people use to solve problems,
make decisions, and learn new concepts (Sternberg, 1986).
 The use of those cognitive skills or strategies that increase the probability of a
desirable outcome (Halpern, 1998).
 Purposeful, self-regulatory judgement which results in interpretation, analysis,
evaluation, and inference, as well as explanation of the evidential, conceptual,
methodological, criteriological, or contextual considerations upon which that judgment
is based (from the APA Delphi Report, Critical Thinking: A Statement of Expert
Consensus for the Purpose of Educational Assessment and Instruction, Facione, 1990).
Appendix A (Continued)
Definitions and Examples of Core Critical Thinking Skills
A person engaged in critical thinking uses a core set of cognitive skills – Interpretation, Analysis,
Evaluation, Inference, Explanation, and Self-regulation – to form logical reasoning and judgments
(Facione, 1995, p. 3).
As to the cognitive skills, here is what the experts include as being at the very core of critical
thinking: interpretation, analysis, evaluation, inference, explanation, and self-regulation. (We will
get to the dispositions in just a second.) Did any of these words or ideas come up when you tried
to characterize the cognitive skills – mental abilities – involved in critical thinking?
Quoting from the consensus statement of the national panel of experts:
1. Interpretation is "to comprehend and express the meaning or significance of a wide
variety of experiences, situations, data, events, judgments, conventions, beliefs, rules,
procedures, or criteria." Interpretation includes the sub-skills of categorization, decoding
significance, and clarifying meaning. Can you think of examples of interpretation?
 Recognizing a problem and describing it without bias;
 Reading a person's intentions in the expression on her face;
 Distinguishing a main idea from subordinate ideas in a text;
 Constructing a tentative categorization or way of organizing something you are studying;
 Paraphrasing someone's ideas in your own words; or,
 Clarifying what a sign, chart or graph means;
 Identifying an author's purpose, theme, or point of view.
2. Analysis is "to identify the intended and actual inferential relationships among
statements, questions, concepts, descriptions, or other forms of representation intended to
express belief, judgment, experiences, reasons, information, or opinion." The experts
include examining ideas, detecting arguments, and analyzing arguments as sub-skills of
analysis. Again, can you come up with some examples of analysis?
 Identifying the similarities and differences between two approaches to the solution of a given problem;
 Picking out the main claim made in a newspaper editorial and tracing back the various reasons the editor offers in support of that claim, or
 Identifying unstated assumptions;
 Constructing a way to represent a main conclusion and the various reasons given to support or criticize it;
 Sketching the relationship of sentences or paragraphs to each other and to the main purpose of the passage;
 Graphically organizing this essay, in your own way, knowing that its purpose is to give a preliminary idea about what critical thinking means.
3. The experts define evaluation as meaning "to assess the credibility of statements or other
representations which are accounts or descriptions of a person's perception, experience,
situation, judgment, belief, or opinion; and to assess the logical strength of the actual or
intended inferential relationships among statements, descriptions, questions, or other
forms of representation." Here are some examples:
 Judging an author's or speaker's credibility;
 Comparing the strengths and weaknesses of alternative interpretations;
 Determining the credibility of a source of information,
 Judging if two statements contradict each other, or
 Judging if the evidence at hand supports the conclusion being drawn,
 Recognizing the factors which make a person a credible witness regarding a given event or a credible authority with regard to a given topic,
 Judging if an argument's conclusion follows either with certainty or with a high level of confidence from its premises,
 Judging the logical strength of arguments based on hypothetical situations,
 Judging if a given argument is relevant or applicable or has implications for the situation at hand.
Do the people you regard as strong critical thinkers have the three cognitive skills described so
far? Are they good at interpretation, analysis, and evaluation? What about the next three? And
your examples of weak critical thinkers: are they lacking in these cognitive skills? All, or just
some?
4. To the experts inference means "to identify and secure elements needed to draw
reasonable conclusions; to form conjectures and hypotheses; to consider relevant
information and to educe (develop) the consequences flowing from data, statements,
principles, evidence, judgments, beliefs, opinions, concepts, descriptions, questions, or
other forms of representation." As sub-skills of inference, the experts list querying
evidence, conjecturing alternatives, and drawing conclusions. Can you think of some
examples of inference? You might suggest things like:
 Seeing the implications of the position someone is advocating, or drawing out or constructing meaning from the elements in a reading, or
 Predicting what will happen next based on what is known about the forces at work in a given situation, or
 Formulating a synthesis of related ideas into a coherent perspective,
 After judging that it would be useful to you to resolve a given uncertainty, developing a workable plan to gather that information, or
 When faced with a problem, developing a set of options for addressing it,
 Conducting a controlled experiment scientifically and applying the proper statistical methods to attempt to confirm or disconfirm an empirical hypothesis.
Beyond being able to interpret, analyze, evaluate, and infer, strong critical thinkers can do two
more things. They can explain what they think and how they arrived at that judgment. And, they
can apply their powers of critical thinking to themselves and improve on their previous opinions.
These two skills are called "explanation" and "self-regulation."
5. The experts define explanation as being able to present in a cogent and coherent way the
results of one's reasoning. This means to be able to give someone a full look at the big
picture: both "to state and to justify that reasoning in terms of the evidential, conceptual,
methodological, criteriological, and contextual considerations upon which one's results
were based; and to present one's reasoning in the form of cogent arguments." The
sub-skills under explanation are describing methods and results, justifying procedures,
proposing and defending with good reasons one's causal and conceptual explanations of
events or points of view, and presenting full and well-reasoned arguments in the context
of seeking the best understandings possible. Here are some more examples:
 To construct a chart which organizes one's findings,
 To write down for future reference your current thinking on some important and complex matter,
 To cite the standards and contextual factors used to judge the quality of an interpretation of a text,
 To state research results and describe the methods and criteria used to achieve those results,
 To appeal to established criteria as a way of showing the reasonableness of a given judgment,
 To design a graphic display which accurately represents the subordinate and super-ordinate relationship among concepts or ideas,
 To cite the evidence that led you to accept or reject an author's position on an issue,
 To list the factors that were considered in assigning a final course grade.
Maybe the most remarkable cognitive skill of all, however, is this next one. This one is
remarkable because it allows strong critical thinkers to improve their own thinking. In a sense
this is critical thinking applied to itself. Because of that some people want to call this
"meta-cognition," meaning it raises thinking to another level. But "another level" really does not fully
capture it, because at that next level up what self-regulation does is look back at all the
dimensions of critical thinking and double check itself. Self-regulation is like a recursive function
in mathematical terms, which means it can apply to everything, including itself. You can monitor
and correct an interpretation you offered. You can examine and correct an inference you have
drawn. You can review and reformulate one of your own explanations. You can even examine and
correct your ability to examine and correct yourself. How? It is as simple as stepping back and
saying to yourself, "How am I doing? Have I missed anything important? Let me double check
before I go further."
6. The experts define self-regulation to mean "self-consciously to monitor one's cognitive
activities, the elements used in those activities, and the results educed (developed; elicited;
inferred; deduced), particularly by applying skills in analysis and evaluation to one's
own inferential judgments with a view toward questioning, confirming, validating, or
correcting either one's reasoning or one's results." The two sub-skills here are
self-examination and self-correction. Examples? Easy –
 To examine your views on a controversial issue with sensitivity to the possible influences of your personal biases or self-interest,
 To check yourself when listening to a speaker in order to be sure you are understanding what the person is really saying without introducing your own ideas,
 To monitor how well you seem to be understanding or comprehending what you are reading or experiencing,
 To remind yourself to separate your personal opinions and assumptions from those of the author of a passage or text,
 To double check yourself by recalculating the figures,
 To vary your reading speed and method mindful of the type of material and your purpose for reading,
 To reconsider your interpretation or judgment in view of further analysis of the facts of the case,
 To revise your answers in view of the errors you discovered in your work,
 To change your conclusion in view of the realization that you had misjudged the importance of certain factors when coming to your earlier decision.
Appendix A (Continued)
A chart showing the core critical thinking skills
Source: The APA Delphi Report (Facione, 1990)/(2013 Update)
Appendix A (Continued)
Critical Thinking Terms in Reading and Writing
Here are the critical thinking terms to help you in your reading and writing. Some of these terms
have been covered in the module chapters, and some of them are new but will also help you in
using your critical thinking skills.
Analysis: Analysis involves breaking down an idea and working out the meaning of the individual
parts and how they relate to the whole. For example if you were asked to analyze a paragraph or
a poem, you would go through each line or sentence and figure out what each individual part is
saying; then you'd look at the overall meaning and the connections between the parts. Think back
to the Thinking Critically opener for this chapter: The picture may not have been completely clear
until you carefully analyzed each part of the image and put together all the evidence.
Argument: In most college writing, you are making some type of argument and presenting a
conclusion about a topic using reason and evidence to convince your readers of your point.
Arguments in writing can be casual and entertaining (such as arguing for the best place in town
to go for a first date), or they can be more formal and structured (such as arguing for the need for
a new science building on your campus).
Assumptions: An assumption is a belief or claim that you take for granted
or that society, particular people, or an author you are reading takes for granted without
providing or asking for evidence or proof to support the idea. Almost everything you believe and
do is based on assumptions; for instance, you assume the sun will rise each morning and set each
evening. Some, however, are more individual assumptions that you take for granted but that not
everyone would agree with. It is important to learn to separate the assumptions that have a basis
in fact from ones that don't. For instance, if you based an argument for a new school playground
on the assumption that children like to play, that's a good assumption. However, if you based an
argument for building it on the assumption that the school has extra money to spend for a new
playground, you would have to research your assumption to make sure it is true. When reading
other people's writing, look carefully for the assumptions, the ideas they take for granted, and
consider whether these are an undeniable truth.
Bias: Bias is a particular viewpoint that you or an author has about an idea or a topic. All ideas
or opinions reflect a bias. Sometimes you (or an author) are conscious of the biases in your ideas,
and sometimes you are not. Having biases is not necessarily a bad thing (it is inevitable), but
when one's biases are founded on misinformation or unrealistic assumptions they can get in the
way of good critical thinking.
Conclusion: A conclusion is the end result of an argument. It is the main point you make in your
paper and should be the logical result of the reasons you provide to support your argument. For
example, if you had to write a paper on the subject of canceling or adding more funding to your
campus basketball team, your choice the opinion you reach on the debate would be your
conclusion, and you would back up your conclusion with reasons and support. When you read an
author's argument, you are looking for their conclusion about the topic they have chosen and how
well they have developed it using reasons, examples, and details as support.
Evaluation: Evaluation is looking at the strength of your reasoning, support, and conclusions (or
those of another writer) and how well those ideas are developed and explained. For example, if
you were writing an argument paper taking a stand on an issue such as gun control, you would
want to evaluate the arguments you put forth and how well you supported them with examples,
reasons, and details. Also, you would need to consider the counterarguments (what people who
argue for a different stand might say against your conclusion on the issue) and evaluate how well
those arguments are constructed.
Imply/implication: To imply means to hint that something is so, to say it indirectly. For instance,
if your aunt visits you and says, "My, aren't you looking filled out these days!" she may be
implying, or hinting, that you need to go on a diet.
Inference: Inference involves tapping into your ability to read between the lines and figure out,
or infer, what someone means based on clues in what they say or write. For instance, in the
example above, your aunt has implied that you are getting fat, and you, in receiving those clues
from her language, have inferred her meaning.
Interpretation: Interpretation involves decoding an idea so you understand its meaning. When
you interpret an author's idea, you decode it using your own words. You need to interpret and
understand an author's ideas before you can analyze their meanings and evaluate them.
Opinion: Your opinion is what you (or another writer) believe about an idea, question, or topic.
Opinion involves thinking about an idea or question and coming to your own conclusions about
it. An opinion is based on weighing information and deciding where you stand on a question.
Point of View: Point of view in critical thinking refers to the perspective you are coming from in
your reasoning and writing (or the perspective of the author you are reading). Be aware of your
own point of view and the biases, assumptions, and opinions that make up that point of view, and
be prepared to think of potential points of view that differ from yours (or from the views of the
author you are reading).
Purpose: The term purpose refers to the reason you are writing a piece in the first place. What
have you (or the author you are reading) set out to explain or prove to your readers? Sometimes
the purpose of your writing is directly stated, as in a thesis statement, and sometimes it is implied
by the arguments and reasons you provide throughout your writing.
Synthesis: Synthesis involves pulling together your ideas, and sometimes the ideas of others, in
order to make or support an argument. Often, in writing, synthesis involves pulling together ideas
from different authors that connect on a particular subject or argument to give a bigger picture.
For instance, if you were writing an essay that compared two or more readings on a similar
theme, you would synthesize the ideas that overlap to help develop your purpose in that piece of
writing—like putting pieces from different puzzles together to make a new image (Sims, 2009).
Appendix A (Continued)
A Chart Showing the Elements of Reasoning and Universal Intellectual
Standards:
A Critical Thinker always considers the Elements of Reasoning
With sensitivity to Universal Intellectual Standards
( Clear > Accurate > Relevant > Deep > Broad )
Source: Foundation for Critical Thinking, 1996, Sonoma, California, in Sims, 2009: IG.4
Appendix A (continued)
Definitions of the Elements of Reasoning (Aspects of CT)
Point of View (Perspective): Human thought is relational and selective. It is impossible to
understand any person, event, or phenomenon from every vantage point simultaneously. Critical
thinking requires that this fact be taken into account when analyzing and assessing thinking. This
is not to say that human thought is incapable of truth and objectivity, but only that human truth,
objectivity, and insight are virtually always limited and partial, virtually never total and absolute.
The hard sciences are themselves a good example of this point, since qualitative realities are
systematically ignored in favour of quantifiable realities.
Purpose: The intention, aim, or end in view of a document, discussion, activity, relationship, etc.
Question or Problem: A matter, situation, or person that is perplexing or difficult to figure out,
handle, or resolve. Problems and questions can be divided into many types, including
monological (problems that can be solved by reasoning exclusively within one discipline, point of
view, or frame of reference) and multilogical (problems that can be analyzed and approached
from more than one, often from conflicting points of view or frames of reference).
Evidence: The data (facts, figures, or information) on which a judgment or conclusion might be
based or by which proof or probability might be established. Critical thinkers distinguish the
evidence or raw data upon which they base their interpretations or conclusions from those
interpretations themselves; uncritical thinkers treat their conclusions as something given to them
in experience, as something they directly observe in the world. As a result, they find it difficult to
see why anyone might disagree with their conclusions.
Assumption: A statement accepted or supposed as true without proof or demonstration; an
unstated premise or belief. All human thought and experience is based on assumptions. Our
thought must begin with something we take to be true in a particular context. We are typically
unaware of what we assume and therefore rarely question our assumptions. Much of what is
wrong with human thought can be found in the uncritical or unexamined assumptions that
underlie it. For example, we often experience the world in such a way as to assume that we are
observing things just as they are, as though we were seeing the world without the filter of a point
of view. People we disagree with, of course, we recognize as having a point of view. One of the
key dispositions of critical thinking is the on-going sense that as humans we always think within a
perspective, that we virtually never experience things totally and absolutistically. There is a
connection, therefore, between thinking so as to be aware of our assumptions and being
intellectually humble.
Concept: An idea or thought, especially a generalized idea of a thing or of a class of things.
Humans think within concepts or ideas. We can never achieve command over our thoughts unless
we learn how to achieve command over our concepts or ideas. Thus, we must learn how to
identify the concepts or ideas we are using, contrast them with alternative concepts or ideas, and
clarify what we include or exclude by means of them. For example, most people say they believe
strongly in democracy, but few can clarify with examples what that word does or does not imply.
Inference: An inference is a step of the mind, an intellectual act by which one concludes that
something is so in light of something else‘s being so, or seeming to be so. If you come at me with
a knife in your hand, I would probably infer that you mean to do me harm. Inferences can be
strong or weak, justified or unjustified. Inferences are based on assumptions.
Implication: A claim or truth which follows from other claims or truths. One of the most
important skills of critical thinking is the ability to distinguish what is actually implied by
a statement or situation from what may be carelessly inferred by people. Critical thinkers try to
monitor their inferences to keep them in line with what is actually implied by what they know.
When speaking, critical thinkers try to use words that imply only what they can legitimately
justify. They recognize that there are established word usages which generate established
implications. To say of an act that it is murder, for example, is to imply that it is intentional and
unjustified.
Source: Paul (1995). Critical Thinking: How to Prepare Students for a Rapidly Changing World. Santa Rosa, CA:
Foundation for Critical Thinking
Appendix A (Continued)
Applying the Elements of Reasoning to Reading and
Writing and Questions to Ask
Below are the specific components of the critical thinking wheel broken down and paired with
some questions we can ask our students to make sure they are addressing these elements of
reasoning as they write and read (Sims, 2009).
1. Purpose of the Thinking, Goal. In an essay, a purpose is expressed in the thesis, and the
purpose should be clear and consistent from the beginning to the end of the essay.
When writing and reading, ask yourself, "What is my purpose or goal in this
assignment?" Or, if you are assessing another writer's work, ask, "What is the
author's purpose or goal?"
2. Question at Issue or Problem to be Solved. When you write, there should always be at least one
Question at Issue or Problem to be solved.
Ask yourself, "What am I trying to achieve through this assignment?" "What implied
or direct questions am I addressing?" "How will I answer or address this question or
problem?" Or, if you are assessing another writer's work, ask, "What problem or issue
is the author trying to resolve?"
3. Information. In an essay, provide information to support your reasoning and point of view. Draw
on experiences, data, evidence, or other material to support your reasoning.
Ask yourself, "What evidence or examples can I provide to back up my ideas?" Or, if
you are assessing another writer's work, ask, "What evidence or examples is the
author using to support his or her reasoning?"
4. Inferences, Interpretations, and Conclusions. Reasoning proceeds by steps in which you reason
as follows: "Because this is so, that also is so (or probably so)," or "Since this, therefore that." Any
errors in such inferences are possible sources of problems. Check to make sure your inferences, or
the inferences an author has made, are clear and logical.
Also, ask yourself, "Have I clearly interpreted the information and examples in my writing or
the writing of the author I have read?" "Did I show how I reached my conclusions (or did the
author show how he or she reached his or her conclusions)?"
5. Concepts. All reasoning is based on ideas or concepts. Any errors in the concepts or ideas are
possible sources of problems in your reasoning. Check carefully for errors in your reasoning or in the
reasoning of authors you read.
6. Assumptions. All reasoning must begin somewhere, and you must take some things for granted.
Any errors in your assumptions are possible sources of problems in your reasoning. You need to be
able to recognize and articulate your assumptions. They should be clear, justifiable, and consistent.
In the writing of others, double-check the accuracy and consistency of their assumptions.
7. Implications and Consequences. Be sure to trace all the implications and consequences of your
reasoning or the reasoning of the author you are reading.
8. Point of View or Frame of Reference. Whenever you reason, you must do so within a specific
point of view or frame of reference. Any errors in that point of view or frame of reference are
possible sources of problems in your reasoning. Your point of view may be too narrow, may be based
on false or misleading information, or may contain contradictions. Be sure your point of view (or
that of the author you are reading) is fair, clearly stated, and consistently adhered to (Sims, 2009).
Appendix A (Continued)
Helping Students Assess Their Critical Thinking
A critical thinker, according to Paul (1995), always addresses the following elements
of reasoning to assess his/her thinking:
1. All reasoning has a PURPOSE.
 Take time to state your purpose clearly.
 Distinguish your purpose from related purposes.
 Check periodically to be sure you are still on target.
 Choose significant and realistic purposes.
2. All reasoning is an attempt to FIGURE something out, to settle some QUESTION, solve
some PROBLEM.
 Take time to state the question at issue clearly and precisely.
 Express the question in several ways to clarify its meaning and scope.
 Break the question into sub-questions.
 Identify if the question has one right answer, is a matter of mere opinion, or
requires reasoning from more than one point of view.
3. All reasoning is based on ASSUMPTIONS.
 Clearly identify your assumptions and determine whether they are justifiable.
 Consider how your assumptions are shaping your point of view.
4. All reasoning is done from some POINT OF VIEW.
 Identify your point of view.
 Seek other points of view and identify their strengths as well as weaknesses.
 Strive to be fair-minded in evaluating all points of view.
5. All reasoning is based on DATA, INFORMATION, and EVIDENCE.
 Restrict your claims to those supported by the data you have.
 Search for information that opposes your position as well as information that
supports it.
 Make sure that all information used is clear, accurate, and relevant to the
question at issue.
 Make sure you have gathered sufficient information.
6. All reasoning is expressed through, and shaped by, CONCEPTS and IDEAS.
 Identify key concepts and explain them clearly.
 Consider alternative concepts or alternative definitions to concepts.
 Make sure you are using concepts with care and precision.
7. All reasoning contains INFERENCES or INTERPRETATIONS by which we draw
CONCLUSIONS and give meaning to data.
 Infer only what the evidence implies.
 Check inferences for their consistency with each other.
 Identify assumptions which lead you to your inferences.
8. All reasoning leads somewhere or has IMPLICATIONS and CONSEQUENCES.
 Trace the implications and consequences that follow from your reasoning.
 Search for negative as well as positive implications.
 Consider all possible consequences.
Paul, R. (1995). Critical Thinking: How to Prepare Students for a Rapidly Changing World. Santa
Rosa, CA: Foundation for Critical Thinking.
Appendix A (Continued)
Universal Intellectual Standards and Questions that can be used to Assess the
Quality of Critical Thinking
While there are a number of universal standards, the following are the most significant ones used
in the present study. The following standards, definitions, and examples are from the Foundation
for Critical Thinking (www.criticalthinking.org) and are used to get students to apply the universal
standards and go deeper into their own thinking and analysis skills:
Clarity: Could you elaborate further on that point? Could you express that point in another way?
Could you give me an illustration? Could you give me an example? Clarity is a gateway
standard. If a statement is unclear, we cannot determine whether it is accurate or relevant. In
fact, we cannot tell anything about it because we don‘t yet know what it is saying. For example,
the question ―what can be done about the education system in America?‖ is unclear. In order to
adequately address the question, we would need to have a clearer understanding of what the
person asking the question is considering the ―problem‖ to be. A clearer question might be
―What can educators do to ensure that students learn the skills and abilities which help them
function successfully on the job and in their daily decision-making?‖
Accuracy: Is that really true? How could we check that? How could we find out if that is true? A
statement can be clear but not accurate, as in "Most dogs are over 300 pounds in weight."
Precision: Could you give more details? Could you be more specific? A statement can be both
clear and accurate, but not precise, as in "Jack is overweight."(We don't know how overweight
Jack is, one pound or 500 pounds).
Relevance: How is that connected to the question? How does that bear on the issue? A statement
can be clear, accurate, and precise, but not relevant to the question at issue. For example,
students often think that the amount of effort they put into a course should be used in raising their
course grade. Often, however, ―effort‖ does not measure the quality of student learning, and
when that is so, effort is irrelevant to their appropriate grade.
Depth: How does your answer address the complexities in the question? How are you taking into
account the problems in the question? Is that dealing with the most significant factors? A statement
can be clear, accurate, precise, and relevant, but superficial (that is, lack depth). For example, the
statement "Just say No" which is often used to discourage children and teens from using drugs, is
clear, accurate, precise, and relevant. Nevertheless, it lacks depth because it treats an extremely
complex issue, the pervasive problem of drug use among young people, superficially. It fails to deal
with the complexities of the issue.
Breadth: Do we need to consider another point of view? Is there another way to look at the question?
What would this look like from a conservative standpoint? What would this look like from the point of
view of…..? A line of reasoning may be clear, accurate, precise, relevant, and deep, but lack breadth
(as in an argument from either the conservative or liberal standpoints which gets deeply into an issue,
but only recognizes the insights of one side of the question.)
Logic: Does this really make sense? Does that follow from what you said? How does that follow? But
before you implied this and now you are saying that, I don‘t see how both can be true. When we think,
we bring a variety of thoughts together into some order. When the combination of thoughts is mutually
supportive and makes sense, the thinking is logical. When the combination of thoughts is not mutually
supporting (or is contradictory in some sense), or does not ―make sense,‖ then the combination is ―not
logical.‖(Sims, 2009)
Appendix A (Continued)
THE CRITICAL THINKING CHECKLIST
The critical thinking skills defined in this chapter can help you get into the habit of analyzing and
evaluating the ideas and techniques you and other writers use to present arguments. Throughout
this course material, you will see critical thinking questions based on the concepts covered here.
Be sure to use the general Critical Thinking Checklist below to evaluate your critical thinking
process or the process of another writer.
1. What is the purpose of this piece of writing? Is it clear?
______________________________________________________________________
2. What ideas and background information are provided to support the purpose of this
piece of writing?
______________________________________________________________________
3. What evidence and examples are used to explain and develop the ideas that support the
argument made in this piece of writing? Are the evidence and examples provided
sufficient?
________________________________________________________________________
4. Are there unfounded assumptions or unreasonable biases?
_________________________________________________________________________
5. Are all of the conclusions, implications, and consequences of the argument (the results of
the argument taken to their furthest extreme) considered?
________________________________________________________________________
6. Is the point of view clear and consistent, and have other points of view been considered?
________________________________________________________________________
7. Using these critical thinking tools (mentioned above), analyze the overall structure of this
essay and the strength of the author's argument, ideas, and support. Was the author
successful in accomplishing the purpose? Why or why not?
_________________________________________________________________________
Appendix A (Continued)
Definitions of the Aspects of Critical Thinking Dispositions(Traits)
The APA Delphi Study Consensus definition of the seven aspects (or dispositional characteristics)
of the overall disposition toward critical thinking according to Facione (1997), Insight
Assessment 2006, accessed on-line at http://www.insightassessment.com/test-cctdi2.html, are as
follows:
Truth-Seeking: truth-seeking is the habit of always desiring the best possible understanding of
any given situation; it is following reasons and evidence wherever they may lead, even if they
lead one to question cherished beliefs. Truth-seekers ask hard, sometimes even frightening
questions; they do not ignore relevant details; they strive not to let bias or preconception color
their search for knowledge and truth. The opposite of truth-seeking is bias which ignores good
reasons and relevant evidence in order not to have to face difficult ideas.
Open-Mindedness: Open-mindedness is the tendency to allow others to voice views with which
one may not agree. Open-minded people act with tolerance toward the opinions of others,
knowing that often we all hold beliefs which make sense only from our own perspectives. Open-mindedness, as used here, is important for harmony in a pluralistic and complex society where
people approach issues from different religious, political, social, family, cultural, and personal
backgrounds. The opposite of open-mindedness is closed-mindedness and intolerance for the
ideas of others.
Analyticity: Analyticity is the tendency to be alert to what happens next. This is the habit of striving
to anticipate both the good and the bad potential consequences or outcomes of situations,
choices, proposals, and plans. The opposite of analyticity is being heedless of consequences, not
attending to what happens next when one makes choices or accepts ideas uncritically.
Systematicity: Systematicity is the tendency or habit of striving to approach problems in a
disciplined, orderly, and systematic way. The habit of being disorganized is the opposite
characteristic to systematicity. The person who is strong in systematicity may or may not actually
know or use a given strategy or any particular pattern in problem solving, but they have the
mental desire and tendency to approach questions and issues in such an organized way.
Critical Thinking Self-Confidence: Reasoning self-confidence is the tendency to trust the use of
reason and reflective thinking to solve problems. This habit can apply to individuals or to
groups; as can the other dispositional characteristics measured by the CCTDI. We as a family,
team, office, community, or society can have the habit of being trustful of reasoned judgment as
the means of solving our problems and reaching our goals. The opposite is the tendency to be
mistrustful of reason, to consistently devalue or be hostile to the use of careful reason and
reflection as a means to solving problems or discovering what to do or what to believe.
Critical Thinking Inquisitiveness: Inquisitiveness is intellectual curiosity. It is the tendency to
want to know things, even if they are not immediately or obviously useful at the moment. It is
being curious and eager to acquire new knowledge and to learn the explanations for things even
when the applications of that new learning are not immediately apparent. The opposite of
inquisitiveness is indifference.
Cognitive Maturity (or Maturity of Judgment): Cognitive maturity is the tendency to see
problems as complex, rather than black and white. It is the habit of making a judgment in a timely
way, not prematurely, and not with undue delay. It is the tendency of standing firm in one‘s
judgment when there is reason to do so, but changing one‘s mind when that is the appropriate
thing to do. It is prudence in making, suspending, or revising judgment. It is being aware that
multiple solutions may be acceptable while appreciating the need to reach closure in certain
circumstances even in the absence of complete knowledge. The opposite of cognitive maturity is
characterized by imprudence, black-and-white thinking, failing to come to closure in a timely
way, stubbornly refusing to change one's mind when reasons and evidence would indicate one is
mistaken, or foolishly revising one‘s opinions willy-nilly without substantial reason for doing so.
Appendix A (Continued)
A Chart Showing the Relationship between Elements, Standards, Traits
(Dispositions), and Critical Thinking Abilities
(Richard Paul’s Model for Critical Thinking)
TRAITS: Independent Thinking, Intellectual Empathy, Intellectual Humility, Intellectual Courage,
Intellectual Integrity, Intellectual Perseverance, Faith in Reason, Intellectual Curiosity,
Intellectual Civility, Intellectual Responsibility
ABILITIES:
Process: Identifying, Analyzing, Synthesizing, Evaluating, Reviewing, Considering, Reasoning
Object: Purposes, Problems, Interpretations, Concepts, Assumptions, Points of view
Standard: Clearly, Accurately, Precisely, Deeply, Thoughtfully, Fairly
Source: A Model for Critical Thinking (Paul, 1996)
Appendix A (continued)
Some Common Logical/or Reasoning Fallacies
Here are some of the most common logical fallacies, or errors in reasoning, that students commit
in argument essays (they are also common in speeches and debates). Check to make sure you
haven‘t damaged your credibility by including any of these fallacies in your paper (Sims, 2009).
Ad hominem fallacy: Ad hominem is a Latin phrase that means ―to the man‖ and involves
attacking a specific person or group of people attached to an issue or point of view that
opposes one‘s own position instead of arguing against their claims, reasoning, or evidence.
Example: Those tree-hugging hippies should stop interfering with progress.
Post hoc fallacy: Post hoc is a Latin phrase that means ―after this.‖ This fallacy involves
thinking that because one event happened first it is the cause of another event that followed
it. It is an unsupported claim that something is the result of something else. One should not
make a cause/effect analysis without providing evidence to support the connection between
two events.
Example: I decided to wear my striped shirt the day I aced my chemistry exam, so if I
wear it again today, I should do great in my English test.
Hasty generalization fallacy: In this fallacy, the writer jumps to a conclusion without providing
the evidence and reasoning that led to that conclusion.
Example: I‘m certain the students‘ lack of motivation is related to changes in modern
music
Begging the question fallacy: This fallacy involves stating and repeating claims but never giving
support or evidence to develop them.
Example: Good parenting would have prevented all of these social problems. All of our
society‘s major problems are a direct result of bad parenting.
Equivocation: Equivocation means using vague words or phrases that mislead the reader. A
writer may also use euphemisms (words or phrases used to soften the effect of a more direct
word) to avoid addressing the severity of an issue or to soften harsher truths. For instance,
the words ―passed away‖ or ―moved on‖ are euphemisms for ―dead,‖ and ―victims of
friendly fire‖ is a euphemism for soldiers killed by their own side.
Example: There were several casualties as a result of friendly fire.
Red Herring fallacy: This fallacy occurs when a writer uses details, examples, or other language
that distract the reader from the real argument.
Example: I hear that many soldiers suffer from depression after returning home from Iraq.
Has anyone looked into why these soldiers signed up for the military in the first place?
False dilemma fallacy: This fallacy is also known as the either/or fallacy. The writer presents
only two sides to a complex issue that may have many sides. "You're either with us or against
us" is a classic false dilemma: one may actually be somewhere in the middle.
Example: If you don't support our request for more library funding, then you are anti-student
success.
Appendix B
A SEQUENCE OF CRITICAL THINKING TASKS (designed based on Numrich’s criteria)
TABLE 1: OBSERVING TASKS (focus on the students' world/pre-text tasks)
Target CT skills practiced: Looking; Listening; Noticing; Naming
Prompts:
1. Look at the picture above. At first glance, what do you think this image is? What do you see?
2. Now take a closer look, paying attention to each detail of the photograph. What is your overall reaction to this image?
3. Draw a picture of a place that impresses you as somewhere to stay or live. Describe it to your friend, explaining why and how much you are interested in it.
4. Is the photographer appealing to logos (logic), or pathos (emotion), or both? How do you know?
5. Is there a particular social or historical context for this picture? If so, what effect does that have on your reaction to or understanding of the message?
6. What thesis or argument is conveyed through the image?
TABLE 2: IDENTIFYING ASSUMPTIONS TASKS (focus on the students' world/pre-text tasks)
Target CT skills practiced: Sharing backgrounds; Expressing opinions; Clarifying values
Prompts:
1. Look again at the picture(s) you may have seen during the observing task. Working in small groups or as a whole class, share what each of you discovered in the picture. Why do you think pictures such as this one are created?
2. Have you ever……………? Share your background on the topic/text.
3. Why do you think………? Express your opinions on the text.
4. What are the challenges or advantages of…..? Clarify your current thinking or values on the topic.
5. What do you know about……? Share your ideas about it.
TABLE 3: UNDERSTANDING AND ORGANIZING TASKS (focus on the text)
Target CT skills practiced: Interpreting; Summarizing; Distinguishing details; Ordering; Comparing and contrasting; Explaining cause and effect
Prompts:
1. Look again at an image you may have seen during the observing task. Demonstrate your comprehension of it.
2. What are the main ideas and details in the story? Identify them and put them in order from first to last. Classify or categorize the details, and compare and contrast the information you have found in the text.
3. Look more closely at the text. Tell your partner or group what you remember from the story. Are there relationships of ideas, such as sequences of events, similarities and differences, and cause and effect?
4. Write a short, one-paragraph summary of the main ideas and details in the story you have read.
TABLE 4: INTERPRETING TASKS (focus on the text)
Target CT skills practiced: Making inferences; Interpreting meaning; Hypothesizing; Theorizing from the evidence
Prompts:
1. Who is the writer?
2. Who do you think the main intended audience is for this article, ________ or ________? How do you know?
3. What is the thesis? Is the quality and quantity of the support for the thesis adequate?
4. What did the writer mean when he/she said ……..? How did he/she say it?
5. What is the organizational pattern? Is it appropriate? Are there relationships among ideas, and between main ideas and details?
6. What is the writer's tone? Is the tone appropriate for the subject and the audience?
7. Does the writer have biases? How do you know?
TABLE 5: INQUIRING FURTHER TASKS (focus beyond the text)
Target CT skills practiced: Surveying the public; Interviewing a specialist; Researching
Prompts:
1. Read another story, perhaps one close to this (the story you have already read). Tell your partner/group/class about it.
2. Interview a classmate about…….. Write a narrative, descriptive, or argumentative essay about that experience.
3. Research and report on the benefits or drawbacks of ________ versus _______ in essay writing.
4. Study or consult expert(s) on the topic, perhaps one different from this one. Then report on it in writing.
TABLE 6: ANALYZING AND EVALUATING TASKS (focus beyond the text)
Target CT skills practiced: Synthesizing information; Critiquing; Reflecting on restated ideas; Making logical conclusions; Re-evaluating assumptions
Prompts:
1. What knowledge do you already have about the story? Do you have personal or reading experience that supports or does not support the ideas in this story/material?
2. Read the article summarizing the story's main ideas. Based on what you know or have experienced about it, write a short essay of 200-250 words.
3. Analyze the rhetorical strategies (e.g., purpose, audience, genre, angle of vision, and use of reasoning and evidence) used in the text.
4. Is the text unified, coherent, and well organized?
5. Do you agree or disagree with the writer's ideas? Why or why not?
6. Compare and contrast the writer's experience with the topic with your own or with that of the person you interviewed (inquiring further task above) on the same topic. Then write 3-5 points/tips for people who want to experience it.
7. Look at the strength of your reasoning, support, and conclusions (or those of another writer). How well are those ideas developed and explained?
TABLE 7: MAKING DECISIONS TASKS (focus beyond the text)
Target CT skills practiced: Proposing solutions; Problem solving; Taking action; Participating
Prompts:
1. Identify a problem in/at your home, dormitory, university library, college/department, community, or workplace (if there is one), and write an essay in which you try to persuade the concerned body to recognize the problem and remedy it.
2. Talk with your partner/family/advisor about your low achievements in English courses. Set an achievable goal, and pursue it. Then report it in writing to a partner or advisor.
3. Learn about students' rights on your university campus, in your community, etc., and advocate for change where it is needed.
Tasks designed for the EG students based on Paul's Model for Critical Thinking (1996) and Numrich's sequence of critical thinking tasks as presented in John Beaumont (2010), A Sequence of Critical Thinking Tasks.
Appendix C
The General Essay Checklist

Overall Depth of Critical Thinking, Ideas, and Arguments
1. Is the overall thinking and development of ideas and arguments done well in this draft? Has the author considered his or her assumptions and the implications of his or her thesis? Can the essay go deeper in its analysis? Explain in the lines below:
________________________________________________________________________________________________________
Circle one of the following:   Yes   No   Needs Work

Title
2. Did the author provide a title for the essay that is interesting and provides a clue to the essay's topic or purpose?
Circle one of the following:   Yes   No   Needs Work
3. Is the title formatted correctly (centered, no bold, no quotation marks, no underline, no larger font, and a colon used between title and subtitle if there are both)?
Circle one of the following:   Yes   No   Needs Work

Introductory Paragraph(s)
4. Does the essay have an interesting opening line and attention grabber?
Circle one of the following:   Yes   No   Needs Work
5. Does the author provide a general background and setup for the topic and purpose?
Circle one of the following:   Yes   No   Needs Work
6. Does the essay have a thesis statement that explains the purpose of the essay, how the paper will develop that purpose, and what conclusion the writer reached about the topic (the "so what")?
Circle one of the following:   Yes   No   Needs Work

Body Paragraphs
7. Does the author include an analytical topic sentence for each body paragraph that follows the plan for development set out in the introduction?
Circle one of the following:   Yes   No   Needs Work
8. Does the author provide adequate support for each topic sentence (examples, details, and analysis)? Review the Critical Thinking Skills in Chapter 1 for suggestions for deeper analysis.
Circle one of the following:   Yes   No   Needs Work
9. Does the author have a concluding sentence for each body paragraph that reiterates the topic sentence and goal for that body paragraph?
Circle one of the following:   Yes   No   Needs Work
10. Does the author include a transition between each body paragraph (either at the end of one or the beginning of the next) that helps lead the reader smoothly from one support idea to the next?
Circle one of the following:   Yes   No   Needs Work

Concluding Paragraph
11. Does the concluding paragraph sum up the main ideas and purpose of the essay and re-emphasize the thesis statement?
Circle one of the following:   Yes   No   Needs Work
12. Does the concluding paragraph avoid adding any new ideas or contradictions?
Circle one of the following:   Yes   No   Needs Work

Grammar / Editing / Format
List the grammar errors you think were evident in this draft. Focus on repeated error patterns that the author should look for as he or she revises:

Other comments for the author
_______________________________________________________________________________________________
(Sims, 2009: IG-13-15)
Appendix D
Self-editing Sheet

Format:
My essay is correctly formatted (title centered, first line of every paragraph indented, margins on both sides, double-spaced).   Yes   No

Mechanics:
I checked punctuation, capitalization, and spelling.   Yes   No

Content and Organization:
My essay has all three parts: introduction, body, and conclusion.   Yes   No

Introduction:
Type of introduction (funnel, historical background, surprising statistic, dramatic story, etc.): ______________   Yes   No
The introduction ends with my thesis statement.   Yes   No

Body:
The body has ___________ paragraphs.
The topics of the body paragraphs are as follows:
1. ________________  2. _________________  3. ________________  4. ________________
(If there are more or fewer paragraphs, add or delete lines.)

Unity:
Each paragraph discusses only one main idea, and there are no sentences that are "off the topic."   Yes   No

Coherence:
Each paragraph has coherence. My essay flows smoothly from beginning to end.   Yes   No
I repeat key nouns.   Yes   No
I use transition signals to show relationships among ideas.   Yes   No
I use transitions to link paragraphs.   Yes   No

Conclusion:
The conclusion (A) summarizes the main points or (B) paraphrases the thesis statement. (Circle one: A / B.)

Grammar and Sentence Structure:
I checked my essay for ____________ errors. (verb tense, article, etc.)   Number found and corrected: _____________
I checked my essay for ____________ errors.   Number found and corrected: _____________
I checked my essay for ____________ errors.   Number found and corrected: _____________

(Adapted from Oshima and Hogue, 2006: 321)
Appendix E
Peer-editing Sheet

1. What kind of introduction does this essay have? (funnel, dramatic, etc.) _______________
   How many sentences does it contain? ________________________________
   Does it capture your interest?   Yes   No
   Where is the thesis statement placed? _______________________________
2. How many paragraphs are there in the body? Number: _______________________
   The topics of the body paragraphs are as follows:
   1. __________________  2. _________________  3. _________________  4. ________________
   (If there are more or fewer paragraphs, add or delete lines.)
3. What kind of supporting details does the writer use in each body paragraph?
   1. ______________  2. _____________  3. ______________  4. _______________
4. Check each paragraph for unity. Is any sentence unnecessary or "off the topic"?   Yes   No
   If your answer is yes, write a comment about it (them):
   ________________________________________________________________________________________________________
5. Check each paragraph for coherence. Does each one flow smoothly from beginning to end?   Yes   No
   What key nouns are repeated? _____________________________
   What transition signals can you find? _____________________________
6. What expressions does the writer use to link paragraphs? If there is none, write "none." (If there are more or fewer paragraphs, add or delete lines.)
   To introduce the first body paragraph _____________________________________
   Between paragraphs 2 and 3 ___________________________________________
   Between paragraphs 3 and 4 ___________________________________________
   Between paragraphs 4 and 5 ___________________________________________
   To introduce the conclusion ____________________________________________
7. What kind of conclusion does this essay have? A summary of the main points or a paraphrase of the thesis statement?
   _______________________________________________________
   Does the writer make a final comment?   Yes   No
   What is it? _____________________________________________________________
   Is this an effective ending (one that you will remember)?   Yes   No
8. In your opinion, what is the best feature of this essay? In other words, what is the writer's best writing skill?
   ________________________________________________________________________________________________________

(Adapted from Oshima and Hogue, 2006: 322)
Appendix F
Researcher/Instructor-developed Academic Writing Skills Essay Test (Pre- and Post-test)
Directions: Write a short argumentative essay of 5 paragraphs (about 350-400 words) arguing for or
against one of the two theses (topics) given below. Consider the following in your writing: Focus and
purpose; Depth of thought; Thesis; Reasoning; Organization; Voice; Grammar and Vocabulary;
Mechanics and Presentation. Be aware that your essay will be evaluated in terms of these aspects of
writing.
Theses:
1. Preparatory schools in Ethiopia are adequately preparing students for college/university
academic writing skills.
2. Preparatory schools in Ethiopia are not adequately preparing students for college/university
academic writing skills.
Appendix F (Cont’d)
Rubric for Evaluating Written Argumentation: Critical Thinking Resources
Aspects of
Sound &
Effective
Writing
Purpose and
Focus
Depth of
Thought
Thesis
Reasoning
Highly Developed
Developed
Underdeveloped
Substandard
The writer has made
insightful and mature
decisions about focus,
organization & content
to communicate clearly
&
effectively.
The
purpose & focus of the
writing are clear to the
reader
&
the
organization & content
are
well
chosen,
sophisticated
and/or
persuasive.
The writer has made
good decisions about
focus, content &
organization
to
communicate clearly
& effectively. The
purpose and focus of
the writing are clear
to the reader & the
organization
&
content achieve the
purpose as well.
The writer’s decisions
about organization, or
content
sometimes
interfere with clear, and
effective communication
the purpose of the
writing is not fully
achieved.
The writer’s decisions
about
focus,
organization,
or
content interfere with
communication. The
purpose of the writing
is not achieved.
The
information
presented reveals the
writer’s assimilation &
understanding of the
material. The writer is
convincingly aware of
implications beyond the
immediate subject.
The
information
presented reveals the
writer
appreciates
and understands the
material. The writer
seems
aware
of
implications beyond
the
immediate
subject.
The
information
presented reveals that
the writer has only
partially assimilated or
understood the material.
The writer shows some
awareness
of
implications beyond the
immediate subject.
Has a highly developed,
defendable
assertion
that provides focus and
direction to the essay.
Uses sources to support,
extend, and inform, but
not substitute for the
writer’s
own
development of ideas.
Has
a
clear
recognizable
assertion
that
provides focus &
direction
to
the
essay. Uses sources
to
support
and
inform writer’s own
development
of
ideas.
Offers
solid
reasoning. Most key
assumptions
are
recognized, or made
explicit.
Most
inferences
are
accurate.
Most
examples are on
point.
Uses relevant sources
but lacks variety of
sources
and/or
the
skillful combination of
sources necessary to
support
a
central
assertion.
The
information
presented reveals the
writer’s
lack
of
assimilation
&
understanding of the
material. The writer’s
assertions
lack
awareness
of
implications beyond
the immediate subject.
Lacks
a
clear,
recognizable assertion
and/or lacks adequate
sources.
Offers some supporting
evidence.
The
case
includes some examples
that are too general, not
interpreted,
or
not
clearly
relevant
to
thesis.
Offers
simplistic,
underdeveloped,
fallacious, circular, or
irrelevant arguments,
includes
exaggerations, faulty
reasoning,
factual
errors,
biased
statements,
etc(see
Holistic
Critical
Thinking
Scoring
Rubric.)
Sequencing of ideas
within paragraphs &
Sentence
structure
and/or word choice
Ineffective
sentence
structure, word choice,
Substantial & wellreasoned development
of
ideas.
all
key
assumptions are made
explicit.
Credible
evidence is germane,
and accurately analyzed
&fair
mindedly
interpreted.
Displays
strong critical thinking
skills & habits of mind
(see Holistic Critical
Thinking Scoring Rubric.)
Sequencing of ideas
within paragraphs &
245
Organization
Voice
Grammar and
Vocabulary
Mechanics
and
presentation
transitions
between
paragraphs
flows
smoothly & coherently
throughout the paper.
The writer shows clear
effort to assist the
reader in following the
logic of the ideas
expressed.
transitions between
paragraphs make the
writer’s
points
coherent & easy to
follow.
sometimes
interfere
with
clarity
and
coherence. Needs to
improve sequencing of
ideas within paragraphs
and transitions b/n
paragraphs to make the
writing easy to follow.
transitions
and/or
sequencing of ideas
make reading and
understanding
difficult.
The writer’s tone or
control of language
consistently reflects a
confident or authorcentral
“voice”
or
“personality.” The writer
shows clear discernment
of
&
effective
engagement of intended
audience.
The writer’s tone or
control of language
generally reflects a
confident
or
authoritative central
“voice”
or
“personality.”
The
writer
shows
appropriate
and
consistent awareness
of intended audience.
A central ”voice” or
“personality” is evident,
though inconsistent in
minor ways. The writer
shows
little
or
inconsistent awareness
of a particular audience.
The writer’s tone or
general control of
language is so lacking
in consistency that
little central “voice” or
“personality”
is
evident. The writer
lacks awareness of a
particular audience.
Sentence structure is
complex & powerful.
The writer has used
vivid,
purposefully
crafted
&
varied
sentence
styles
&
lengths. The writer
displays a broad range
of
vocabulary
with
effective, accurate, and
contextually appropriate
word usage.
Sentences
are
effective & varied in
style
&
length.
Grammar & usage
errors are minimal
and do not distract
the reader from
understanding
the
intended meaning.
The writer displays a
satisfactory range of
vocabulary
&
accurate
&
appropriate
word
usage.
Sentences show errors in
structure. the writer
uses limited variety in
sentence
style
and
length.
The
writer
displays a limited range
of vocabulary. Errors of
diction and usage are
evident but do not
interfere
significantly
with readability.
Sentence structure is
simple, with practically
no variety in sentence
style
and
length.
Frequent errors in
sentence
structure
interfere
with
readability. The writer
displays an extremely
limited
vocabulary.
Diction and syntax
errors
make
communication
confusing
or
unintelligible.
Written response is
virtually
free
of
punctuation, spelling, or
capitalization errors. The
writer
utilizes
an
appropriate & attractive
format. Presentation &
style (citations) for the
assignment.
Written
response
contains
only
occasional
punctuation, spelling,
or
capitalization
errors. The writer
utilizes
an
appropriate format,
presentation,
and
style (citations) for
the assignment.
Written
response
contains
many
punctuations, spelling,
or capitalization errors.
errors interfere with
meanings
in
some
places. The writer makes
some errors in format,
presentation, and style
(citations)
for
the
assignment
Written
response
contains many severe
punctuation, spelling,
or capitalization errors
that
hide
communication. The
writer
utilizes
inappropriate format,
presentation, or style
(citations) for the
assignment or the
formatting is absent.
Source: Gittens, C. A. (2011) and Measured Reasons LLC, Santa Clara, CA. Critical Thinking Resources.
Offered to customers of Insight Assessment as one of the classroom support materials for education enhancement
and improvement projects in critical thinking: www.insightassessment.com
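The raw essay scores in Appendices I and J report, for each student, a score on each of the eight rubric aspects above, a total out of 24 points, and a percentage. The following is a minimal illustrative sketch of the arithmetic this implies; it is added here for clarity and is not part of the original scoring materials. It assumes each aspect is awarded 0-3 points (half points occur in the data) and that the percentage is simply the 24-point total expressed out of 100:

# Illustrative sketch only (assumptions noted above); not the author's scoring script.
REWA_ASPECTS = [
    "Purpose and Focus", "Depth of Thought", "Thesis", "Reasoning",
    "Organization", "Voice", "Grammar and Vocabulary",
    "Mechanics and Presentation",
]

def rewa_total_and_percent(aspect_scores):
    """aspect_scores: dict mapping each REWA aspect to a 0-3 score (half points allowed)."""
    if set(aspect_scores) != set(REWA_ASPECTS):
        raise ValueError("a score is needed for each of the eight REWA aspects")
    total = sum(aspect_scores.values())        # out of 24 points
    # Appendix I/J figures appear to be total/24*100, shown to two decimals
    # (rounding conventions vary slightly in the reported values).
    percent = round(total / 24 * 100, 2)
    return total, percent

# Hypothetical aspect scores summing to 14 points give 58.33%, one of the
# totals that appears in Appendix I.
example = dict(zip(REWA_ASPECTS, [2, 2, 1.5, 1.5, 2, 2, 1.5, 1.5]))
print(rewa_total_and_percent(example))         # -> (14.0, 58.33)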
Appendix G
The Moorburg Letter
230 Sycamore Street
Moorburg
April 10
Directions: Read the following letter by Mr. Raywift. Evaluate the author's arguments in each
paragraph of the letter, then generate a written argument in response to Mr. Raywift's letter
paragraph by paragraph. Each paragraph in the letter presents reasoning that may or may not be
sound, and you are expected to recognize which it is. Use your critical thinking skills to discuss
and critique the letter. Time allotted: 40 minutes.
Dear Editor:
Overnight parking on all streets in Moorburg should be eliminated. To achieve this goal,
parking should be prohibited from 2 a.m. to 6 a.m. There are a number of reasons why any
intelligent citizen should agree.
1. For one thing, to park overnight is to have a garage in the streets. Now it is illegal
for anyone to have a garage in the city streets. Clearly, then, it should be against the law
to park overnight in the streets.
2. Three important streets, Lincoln Avenue, Marquand Avenue, and West Main Street,
are very narrow. With cars parked on the streets, there really isn't room for the heavy
traffic that passes over them in the afternoon rush hour. When driving home in the
afternoon after work, it takes me thirty-five minutes to make a trip that takes ten
minutes during the uncrowded time. If there were no cars parked on the side of these
streets, they could handle considerably more traffic.
3. Traffic on some streets is also bad in the morning when factory workers are on their
way to the 6 a.m. shift. If there were no cars parked on these streets between 2 a.m. and
6 a.m., then there would be more room for this traffic.
4. Furthermore, there can be no doubt that, in general, overnight parking on the streets is
undesirable. It is definitely bad and should be opposed.
5. If parking is prohibited from 2 a.m. to 6 a.m., then accidents between parked and
moving vehicles will be nearly eliminated during this period. All intelligent citizens
would regard the near elimination of accidents in any period as highly desirable. So, we
should be in favor of prohibiting parking from 2 a.m. to 6 a.m.
6. Last month, the Chief of Police, Burgess Jones, ran an experiment which proves that
parking should be prohibited from 2 a.m. to 6 a.m. On one of our busiest streets,
Marquand Avenue, he placed experimental signs for one day. The signs prohibited
parking from 2 a.m. to 6 a.m. During the four-hour period, there was not one accident
on Marquand. Everyone knows, of course, that there have been over four hundred
accidents on Marquand during the past year.
7. The opponents of my suggestions have said that conditions are safe enough now.
These people don't know what "safe" really means. Conditions are not safe if there's
even the slightest possible chance for an accident. That's what "safe" means. So,
conditions are not safe the way they are now.
8. Finally, let me point out that the Director of the National Traffic Safety Council,
Kenneth O. Taylor, has strongly recommended that overnight street parking be
prevented on busy streets in cities the size of Moorburg. The National Association of
Police Chiefs has made the same recommendation. Both suggest that prohibiting
parking from 2 a.m. to 6 a.m. is the best way to prevent overnight parking.
9. I invite those who disagree, as well as those who agree with me, to react to my letter
through the editor of this paper. Let's get this issue out in the open.
Sincerely,
Robert R. Raywift
Appendix G (Continued)
Criteria and Scoring Sheet for the Ennis-Weir
Student's Name ________________   Total Score ____   Graded By _________
- Credit given (maximum is 3 points per paragraph except #9).
- See the Manual for interpretation and qualification of these criteria.

Table 2. Critical Thinking Skills Assessed in the Ennis-Weir Critical Thinking Essay Test

Paragraph 1 (-1 to 3): Recognition of misuse of analogy, and/or recognition of a shift in meaning, and/or claim that an incorrect definition has been stipulated.
Paragraph 2 (-1 to 3): Recognition of irrelevance and avoiding introducing irrelevant material.
Paragraph 3 (-1 to 3): Recognition that Paragraph Three is OK. (Neglecting the busy-streets limitation is not penalized here.) [A]
Paragraph 4 (-1 to 3): Recognition of circularity, and/or recognition that no reason is offered. (Subtract one point from credit for interpreting "undesirable" as "not desired.")
Paragraph 5 (-1 to 3): Recognition that there may be other ways of preventing accidents, and/or recognition that other things might be more desirable, and/or recognition that there probably is not much traffic at that time, and/or recognition that other types of accidents are unaffected, and/or recognition that no evidence has been given that such accidents occur. (Other possibilities.)
Paragraph 6 (-1 to 3): Recognition of poor experiment, lack of control, or inadequate sampling, and/or "only one case," and/or "post hoc fallacy." (Other possible explanations.)
Paragraph 7 (-1 to 3): Recognition of winning an argument by definition, and/or recognition that a word has been made useless for empirical assertion, and/or claim that an incorrect definition has been asserted.
Paragraph 8 (-1 to 3): Recognition that Paragraph Eight is OK. (Neglecting the busy-streets limitation is not penalized here.) [A] [C]
Paragraph 9 (-1 to 5): One point for just condemning the overall argument; another point for reviewing or summarizing the responses to the other paragraphs in some reasonable way; two points for recognizing (anywhere) the error of concluding about all streets on the basis of reasons that relate only to busy streets; [A] and one point for noting (anywhere) that Raywift has attempted to push people around with his emotive language. Total possible: 5 points.
Total: -9 to 29.

A score of -1, 0, 1, 2, or 3 will be given for each of the first eight numbered paragraphs: [B]
-1  judges incorrectly (good or bad) [C]
-1  shows bad judgment in justifying
 0  makes no response [D]
+1  judges correctly (good or bad), but does not justify [C]
+2  justifies semi-adequately
+3  justifies adequately
For Paragraph Nine, the range is -1 to +5.

[A] Do not penalize for failure to note the busy-streets limitation in Paragraphs Three or Eight. If it is not noted at least somewhere, do not give the allotted 2 points in Paragraph Nine. If the limitation is noted in Paragraphs Three or Eight, credit should be granted at Paragraph Nine.
[B] These criteria are guidelines. The grader should use judgment in awarding points, subtracting for unspecified errors and adding for unspecified insights.
[C] Sometimes, something judged one way here will be judged another way by the test taker, and so well defended that a positive score (sometimes even +3) is warranted. The grader must use judgment. For example, a good argument could be mounted against Paragraph Eight.
[D] If the examinee makes a response, but the argument of the paragraph is not judged either good or bad and no reasons are given, count it as "no response."
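The Ennis-Weir totals in Appendices K and L follow directly from the credits above: paragraphs 1-8 each contribute -1 to +3, paragraph 9 contributes -1 to +5, and the sum (-9 to 29) is also reported as a percentage. The sketch below is an illustration added for clarity, not part of the test manual; the percentage conversion total/29 x 100 is inferred from the reported figures (e.g., a total of 12 is reported as 41.38), with small rounding differences in places:

# Illustrative sketch only (assumptions noted above).
def ennis_weir_total(paragraph_scores):
    """paragraph_scores: a list of nine scores, one per numbered paragraph of the letter."""
    if len(paragraph_scores) != 9:
        raise ValueError("nine paragraph scores are required")
    for number, score in enumerate(paragraph_scores, start=1):
        upper = 5 if number == 9 else 3      # Paragraph 9 is scored -1 to +5
        if not -1 <= score <= upper:
            raise ValueError(f"paragraph {number}: score must be between -1 and {upper}")
    total = sum(paragraph_scores)            # ranges from -9 to 29
    percent = round(total / 29 * 100, 2)     # percentage as reported in Appendices K and L
    return total, percent

# Hypothetical example: +1 on each of the first eight paragraphs and +4 on
# Paragraph 9 gives a total of 12, reported as 41.38%.
print(ennis_weir_total([1, 1, 1, 1, 1, 1, 1, 1, 4]))   # -> (12, 41.38)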
Appendix H
ADDIS ABABA UNIVERSITY
COLLEGE OF HUMANITIES, LANGUAGE STUDIES, JOURNALISM
AND COMMUNICATION, GRADUATE PROGRAMME, DEPARTMENT
OF FOREIGN LANGUAGES AND LITERATURE
( QUESTIONNAIRE FOR STUDENTS )
The purpose of this questionnaire is to obtain relevant information about your dispositions (i.e.,
attitudes/internal motivation) and how much and how often you use them in learning and in other
real-life situations at school and outside.
Thus, you are kindly requested to give your responses honestly. Your responses will be used only
for research purposes.
Thank you for your co-operation.
Questionnaire: CTD
DIRECTIONS
Indicate how much you agree or disagree with each numbered statement by filling in the
appropriate place provided following each statement. Read the two examples first.
EXAMPLE A: The best things in life are free.
EXAMPLE B: I'm always doing more than my share of the work.
If your response to EXAMPLE A is STRONGLY DISAGREE, circle 0 (zero) under SD.
If your response to EXAMPLE B is LESS STRONGLY AGREE, circle 4 (four) under LSA.
SA = Strongly Agree (5)
LSA = Less Strongly Agree (4)
A = Agree (3)
U = Undecided (2)
D = Disagree (1)
SD = Strongly Disagree (0; carries no points)

(On the questionnaire, each of the 75 statements below is followed by the response scale 5 4 3 2 1 0, corresponding to SA, LSA, A, U, D, and SD; respondents circle one number per statement.)

1. It's never easy to decide between competing points of view.
2. Even if the evidence is against me, I will hold firm to my beliefs.
3. Many questions are just too frightening to ask.
4. I'm proud that I can think with great precision.
5. If there are four reasons in favor and one against, I would go with the four.
6. It concerns me that I might have biases of which I am not aware.
7. Men and women are equally logical.
8. You are not entitled to your opinion if you are obviously mistaken.
9. Everyone always argues from their own self-interest, including me.
10. Being open-minded about different world views is less important than people think.
11. You could describe me as logical.
12. Getting a clear idea about the problem at hand is the first priority.
13. There is no way to know whether one solution is better than another.
14. Learn everything you can, you never know when it could come in handy.
15. I must have grounds for all my beliefs.
16. My trouble is that I am easily distracted.
17. I can talk about my problems for hours and hours without solving anything.
18. I am good at developing orderly plans to address complex problems.
19. I am known for approaching complex problems in an orderly way.
20. It is easy for me to organize my thoughts.
21. My peers call on me to make judgments because I decide things fairly.
22. I pride myself on coming up with creative alternatives.
23. Others look to me to establish reasonable standards to apply to decisions.
24. Tests that require thinking, not just memorization, are better for me.
25. It bothers me when people rely on weak arguments to defend good ideas.
26. Studying new things all my life would be wonderful.
27. Most college courses are uninteresting and not worth taking.
28. I look forward to learning challenging things.
29. The best argument for an idea is how you feel about it at the moment.
30. Being inquisitive is one of my strong points.
31. The truth always depends on your point of view.
32. We can never really learn the truth about most things.
33. The best way to solve problems is to ask someone else for the answers.
34. Complex problems are fun to try to figure out.
35. Considering all the alternatives is a luxury I can't afford.
36. Advice is worth exactly what you pay for it.
37. Others admire my intellectual curiosity and inquisitiveness.
38. Open-mindedness has limits when it comes to right and wrong.
39. When faced with a big decision, I first seek all the information I can.
40. Being open-minded means you don't know what's true and what's not.
41. Banks should make checking accounts a lot easier to understand.
42. It's important to me to understand what other people think about things.
43. Reading is something I avoid, if possible.
44. People say I rush into decisions too quickly.
45. Compulsory subjects in university waste time.
46. When I have to deal with something really complex, it's panic time.
47. People from another country should study our culture instead of us always trying to understand theirs.
48. People think I procrastinate about making decisions.
49. People need reasons if they are going to disagree with another's opinion.
50. Being impartial is impossible when I'm discussing my own opinions.
51. Frankly, I am trying to be less judgmental.
52. Frequently I find myself evaluating other people's arguments.
53. I believe what I want to believe.
54. It's just not that important to keep trying to solve difficult problems.
55. I shouldn't be forced to defend my own opinions.
56. It makes a lot of sense to study what people from another country think.
57. I look for facts that support my views, not facts that disagree.
58. I take pride in my ability to understand the opinions of others.
59. Analogies are about as useful as a sailboat on a freeway.
60. I really enjoy trying to figure out how things work.
61. Others look to me to keep working on a problem when the going gets tough.
62. My opinion about controversial topics depends a lot on who I talk to last.
63. No matter what the topic, I am eager to know more about it.
64. Life has taught me not to be too logical.
65. Things are as they appear to be.
66. If I have to work on a problem, I can put other things out of my mind.
67. Others look to me to decide when the problem is solved.
68. I know what I think, so why should I pretend to ponder my choice.
69. Powerful people determine the right answer.
70. It's impossible to know what standards to apply to most questions.
71. Others are entitled to their opinions, but I don't need to hear them.
72. To get people to agree with me, I would give any reason that worked.
73. I pretend to be logical, but I am not.
74. It is important to me to keep careful records of my personal finances.
75. I always focus on the question before I attempt to answer it.
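Appendices O and P report these 75 responses as seven dispositional scale scores (abbreviated T-S, O-M, Analy., Syste., CTSC., CT Inq., and C-M, with 12, 10, 11, 12, 9, 11, and 10 items respectively) plus an overall total. The sketch below is only an illustration of the arithmetic implied by those item counts: it assumes each scale score is the sum of its items' 0-5 ratings and that the overall score is the sum of the seven scales (maximum 375). The full scale names used here are the standard CCTDI labels and are an assumption, as is the omission of any reverse-scoring of negatively worded items, which the published CCTDI applies but which the appendices do not show:

# Illustrative sketch only (assumptions noted above); not the CCTDI's official scoring key.
CCTDI_SCALES = {
    # Full names assumed from the standard CCTDI; Appendix O abbreviates them
    # T-S, O-M, Analy., Syste., CTSC., CT Inq., and C-M.
    "Truth-Seeking": 12,
    "Open-Mindedness": 10,
    "Analyticity": 11,
    "Systematicity": 12,
    "CT Self-Confidence": 9,
    "Inquisitiveness": 11,
    "Cognitive Maturity": 10,
}

def cctdi_scores(ratings_by_scale):
    """ratings_by_scale: dict mapping each scale name to its list of 0-5 item ratings."""
    scale_totals = {}
    for scale, n_items in CCTDI_SCALES.items():
        ratings = ratings_by_scale[scale]
        if len(ratings) != n_items or any(not 0 <= r <= 5 for r in ratings):
            raise ValueError(f"{scale}: expected {n_items} ratings between 0 and 5")
        scale_totals[scale] = sum(ratings)
    overall = sum(scale_totals.values())   # reported as TOTAL in Appendices O and P
    return scale_totals, overall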
Appendix I
1. Instructor-developed Pre-instruction Essay Writing Test Scores (Experimental Group)
Rubric for Evaluating Written Argumentation (REWA)
Group
Purpose
and
Focus
Depth
of
thought
Thesis
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1.5
1.5
1.5
1.5
2
2
0
2
2
1
2
1
2
0
1.5
1
2
2
1
1.5
2
3
0
1
0
1
1
3
2
1
1.5
1
3
2
1
2
2
1
1.5
2
1
2
3
1
1.5
1.5
1.5
2
1
0
1
1
2
2
2
1.5
0
1
2
1.5
1
2
0
2
2
1
2
0
2
0
2
1
2
2
0
1.5
1
1
1.5
2
2
1
2
1
1
1.5
2
1.5
1.5
1.5
1.5
2
2
0
2
2
1
1
2
1.5
2
2
2
2
0
1.5
2
2
0
3
0
2
1
3
0
3
2
2
1
2
0
2
2
2
1
2
2
0
1
No
Full
name
1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
20
21
22
23
24
25
26
27
28
29
30
31
32
33
34
35
36
37
38
39
40
41
42
43
Reason
ing
1.5
1
1
1
1
1
1
1
2
0
1
1
1
0
0
1
1
1
0
0
1.5
1
0
0
0
1
1.5
1
1
0
1.5
1
2
0
1.5
1
1
0
1.5
1
2
1
2
Organ.
Voice
Grammar
and
Vocabulary
Mechanics
and
Presentation
Total
Score
24 points
100%
1
2
2
2
2
2
1.5
2
2
1
2
2
3
1.5
2
1.5
3
3
0
0
2
1.5
1
0
1
3
3
1.5
2
0
3
2
2
1.5
2
3
1
1
1.5
1.5
0
1
2
1.5
1.5
1
2
1.5
2
0
1.5
2
1
1
1.5
1
2
2
1.5
0
2
0
1
2
1
2
0
1.5
1
1
1
1.5
1
1
1.5
1
2
1
2
1
1
0
1
1
1
2
2
1
1
3
1
2
2
1.5
3
2
1.5
1.5
1.5
1.5
1
1.5
1.5
1
2
0
1
1
1
0
1.5
1
2
1.5
2
3
2
3
2
2
1.5
2
1.5
1
1
2
2.5
1.5
2
1
1
2
1.5
1.5
1
1
1
1
1
1
2
1
1.5
3
1
1.5
1
0
1
1
1.5
0
1
1
1.5
2
1.5
3
1.5
1
1
1
2
1.5
1
1.5
1
1
2
1.5
0
1
11.5
11
11.5
14
12.5
13
7.5
10
15
10
11.5
12
13
8
12.5
11.5
12.5
13
5
5
13.5
13
5
7
5
12.5
11.5
14.5
12.5
11.5
14
11.5
13.5
12.5
9.5
14.5
12
9
8.5
13.5
11
7.5
14.5
47.91
45.83
47.91
58.33
52.08
54.13
31.25
41.67
62.50
41.67
47.91
50.00
54.17
33.33
52.08
47.91
52.08
54.13
20.83
20.83
56.25
54.13
20.83
29.17
20.83
52.08
47.91
60.42
52.08
47.91
58.33
47.91
56.25
52.08
39.58
60.42
50.00
37.50
35.42
56.25
45.83
31.25
60.42
Appendix I (cont’d)
1. Instructor-developed Pre-instruction Essay writing Test Scores (Control Group)
No
Full
name
1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
20
21
22
23
24
25
26
27
28
29
30
31
32
33
34
35
36
37
38
39
40
41
Group
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
Purpose
and
Focus
1
1.5
2
1
1
1
2
1
3
2
2
0
2
1
1
1
1
1.5
1
2
2
1
1.5
2
1
1
2
1
2
2
2
2
2
2
1
2
0
1
3
2
2
Rubric for Evaluating Written Argumentation (REWA)
Depth
Grammar
of
Thesis Reason- Organ. Voice
and
thought
ing
Vocabulary
2
0
1
0
0
2
2
2
2
0
1.5
2
2
2
2
1
0
1
2
1
1
2
1.5
2
2
2
1
0
1
0
2
2
2
1
2
1
1
2
2
1
1
2
1.5
1
2
2
2
3
3
2
1.5
2
3
2
2
3
2
0
0
1
2
1
2
0
1
2
1
3
2
2
1
2
1
2
1.5
2
1
2
2
2
1
0
1
1.5
0
1
1.5
1
1
1
2
0
1
1
1
1
1
2
1
1
1.5
0
1
1
1.5
1
1
1
1
1
0
1
2
1
1.5
0
0
1
0
0
1
1
2
1.5
1.5
2
2
0
2
2
2
1
1.5
2
0
1.5
3
2
1
2
2
1
1.5
2
1.5
1.5
1
1.5
2
1.5
2
3
2
1
1
2
0
1.5
2
1.5
2
1.5
2
0
0
1.5
1.5
1
0
1
2
1
1
1.5
1
2
0
1.5
1
2
1
1.5
0
1
1
2
1
2
1
1
1
0
2
1
1
2
1.5
1.5
1
0
1
0
1
2
1
2
1.5
2
2
1.5
2
2
2
2
1
2
2
1
2
1
1
1
1
2.5
1
1.5
2
2
2
1.5
2
2
3
2
0
1
1
1
2
2
1
2
0
1.5
1
1.5
Mechanics
and
Presentation
Total
Score
24 points
100%
1
1
0
1
2
1
1
2
1.5
1.5
1
1
1.5
1
1
1
1.5
1
1
1
1
2
1
2
1.5
1
2
2
1
1
1
1
1.5
1
2
1
1
1
2
1
2
10.5
10
9.5
10
8
12
15
14
14.5
9
12.5
11
11
13.5
12
11
7.5
9
10
9.5
10.5
13.5
10
13
11.5
11
13.5
11
13
8
12
11
13.5
9
11.5
9
8.5
8
14
11
9.5
43.75
41.67
39.58
41.67
33.33
50.00
62.50
58.33
60.42
37.50
52.08
45.83
45.83
56.25
50.00
45.83
31.25
37.50
41.67
39.58
43.75
56.25
41.47
54.17
47.91
45.83
56.25
45.33
54.17
33.33
50.00
45.33
56.25
37.50
47.91
37.5
35.42
33.33
58.33
45.33
39.58
Appendix J
2. Instructor-developed Post-instruction Essay Writing Test Scores (Experimental Group)
No
1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
20
21
22
23
24
25
26
27
28
29
30
31
32
33
34
35
36
37
38
39
40
41
42
43
Full
name
Group
Purpose
and
Focus
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
3
3
3
3
3
3
3
3
3
3
3
3
3
2
2
1.5
3
3
3
3
2
3
2
3
3
2
2
1.5
3
1.5
1
1.5
1
3
1
2
3
1
2
2
2
3
3
Rubric for Evaluating Written Argumentation(REWA)
Depth
Grammar
of
Thesis
Reason- Organ.
Voice
and
thought
ing
Vocabulary
2
1
1.5
2.5
1.5
2
2
1.5
2
1.5
2
2.5
2
2.5
3
2.5
2
2.5
2
2.5
3
1.5
2.5
2
2
2
3
3
2
1.5
2
2
2.5
2
1.5
3
1
3
3
2
2.5
2.5
1.5
2
3
2
3
3
3
2
1
3
3
3
1.5
2
1.5
3
3
3
2
1.5
3
2
3
1.5
3
2
2
3
3
1
2
1
1.5
1
2
3
3
2
2
3
1
2
1.5
1.5
1
3
1
1
2
3
1.5
2
2
1
3
2
3
2
2
1
3
3
2.5
2.5
1
2
2
1
1.5
2
2
1.5
3
1
3
1.5
1.5
1.5
2
2
3
3
2
0
3
2.5
1.5
3
2
3
1
3
1.5
3
3
3
1.5
1.5
3
3
2
1
2
1.5
1
1.5
1.5
2
2
1.5
2
3
1.5
2.5
3
1.5
1.5
3
1.5
3
2
3
2
1.5
2
1
1
1.5
1.5
1.5
2
3
1.5
2
2
2
2
2.5
1.5
2.5
1.5
1
2
2.5
2
2
2.5
3
1
2
2
1
2
2
1
1
2
1
2
1
2
1
2
2
2.5
2.5
1.5
2.5
1
1
2
2
2.5
2
1
1.5
2
2.5
2
2
2
1
2
2
2
3
2
1
3
2
2
3
2
3
1
3
3
2
3
1.5
3
2
2.5
2
2
3
3
2
2
3
2
3
2
2
3
2.5
Mechanics
and
Presentation
2
2
3
2.5
2
3
1
2
1
2.5
2
2
1.5
1.5
1
2
1
2
2
3
2
1
2
2.5
2.5
3
2
2
2
2
2.5
3
2.5
1
2
2
2
3
2
2
1
2
2
Total
Score
24
points
17
18
16.5
17
19
19.5
16.5
17
16.5
17
18
17
19.5
16
15
17
18
18.5
16.5
19.5
17
14.5
16.5
18.5
17
16.5
18
18
16.5
13
16.5
14
16.5
16.5
17
18.5
17
18.5
17
11
16
18
16
255
100%
70.83
75.00
68.75
70.83
79.17
81.25
68.75
70.83
68.75
70.83
75.00
70.83
81.25
66.67
62.50
70.83
75.00
77.08
68.75
81.25
70.83
60.42
68.75
77.08
70.83
68.75
75.00
75.00
68.75
54.17
68.75
58.33
68.75
68.75
70.83
77.08
70.83
77.08
70.83
45.83
66.67
75.00
66.67
Appendix J (cont’d)
3.
Instructor-developed Post-instruction Essay writing Test Scores (Control Group)
Rubric for Evaluating Written Argumentation (REWA)
Depth
of
thought
Thesis
Reasoning
Organ.
Voice
Grammar
and
Vocabulary
Mechanics
and
Presentation
Total
Score
24 points
100%
Group
Purpose
and
Focus
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
3
1.5
3
3
2
3
3
3
3
1.5
1.5
3
2
3
3
3
3
3
3
2
2
3
2
1.5
3
3
3
3
3
3
3
3
2
2
3
3
1
2
3
3
3
2
1.5
2
1
1
2
2
1.5
2
0
1.5
2
1
1.5
2
1.5
2
1
1.5
1
2
1
2
1.5
2
2
1
2
1
1
2
1.5
2
1
2
1
1.5
2
1.5
1
1
3
3
2
1
2
3
1.5
2
1.5
1.5
1.5
2
1
2
2
1.5
3
1
1.5
2
2
2
2
0
2
1.5
2
2
1.5
2
3
2
2
0
2
1
2.5
2
1.5
2
2
1
1.5
1
2
1
1
1.5
1.5
1
1.5
1.5
1
1.5
1.5
2
2
1
0
1.5
0
2
1
1
1.5
1.5
1.5
1.5
1
1.5
0
2
1
1.5
0
1.5
1.5
0
1
1.5
1.5
1.5
2
3
1.5
1.5
1
1.5
1.5
1.5
2
3
1.5
3
1.5
1.5
3
2
3
2
1.5
1.5
2
3
1.5
1
1.5
3
2
2
1.5
1.5
2.5
3
2
1
2
2
1.5
1.5
1.5
2.5
0
1
0
0
1.5
2
1
1
1
2
1
1
2
1
2
1
2
2
3
2
1
2
1
2
1.5
1
2
2
2
2
1.5
2.5
1.5
2
2
1
2
1
1
2
1.5
2
2
1.5
2
1
2
2
2
2
2
1
1
1
1
2
2
1
2
2
1
1
2
2
1.5
1
2
2
2
2
2
1.5
2
3
2
1
2
2
1
2
1
2
3
1
1
1
1.5
2
1
1
0
2
1.5
1
1
1
1.5
2
2
1
1
2
1.5
2
2
1.5
1
3
2
1
2
2
2
2
2
2
1
2.5
1.5
1.5
2
2
2.5
1
15
13
12.5
12.5
13
14.5
13.5
12.5
15.5
11
10.5
15
10
15
17
15
17
13
14
10
16
15
13.5
9
16
17
14.5
16
14.5
12.5
19
17
15.5
8
16
14
10
13.5
14
16
13.5
62.50
54.17
52.08
52.08
54.17
60.42
56.25
52.08
64.58
45.83
43.75
62.50
41.67
62.50
70.83
62.50
70.83
54.17
58.33
41.67
66.67
62.50
56.25
37.50
66.67
70.83
60.42
66.67
60.42
52.08
79.17
70.83
64.58
33.33
66.67
58.33
41.67
56.25
58.33
66.67
56.25
No
Full
name
1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
20
21
22
23
24
25
26
27
28
29
30
31
32
33
34
35
36
37
38
39
40
41
Appendix K
1.
Ennis-Weir Pre-instruction Critical Thinking Essay Test Scores (Experimental Group – Grp 1)
Criteria for Evaluating Ennis-Weir Critical Thinking Essay Test
No
1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
20
21
22
23
24
25
26
27
28
29
30
31
32
33
34
35
36
37
38
39
40
41
42
43
Full
name
Grou
p
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
Para. 1
Criteria
#1
(-1 to 3)
Para. 2
Criteria
#2
(-1 to 3)
Para. 3
Criteria
#3
(-1 to 3)
Para. 4
Criteria
#4
(-1 to 3)
Para. 5
Criteria
#5
(-1 to 3)
Para. 6
Criteria
#6
(-1 to 3)
Para. 7
Criteria
#7
(-1 to 3)
Para. 8
Criteria
#8
(-1 to 3)
Para. 9
Criteria
#9
(-1 to 5)
Total
-9 to 29
Points
100%
1
1
2
1
2
2
1
2
1
1
1
1
2
1
1
1
2
2
1
1
2
2
0
1
0
1
1
2
2
2
2
2
1
1
1
3
0
1
1
2
1
3
1
57
1
1
2
1
2
1
1
1
1
1
1
2
1
1
2
2
1
2
2
1
1
2
2
2
2
1
0
1
1
2
2
2
1
1
1
1
2
2
1
1
1
2
1
60
2
1
1
2
1
2
2
1
1
0
1
0
1
1
0
1
1
2
0
2
1
2
2
2
1
1
1
2
1
1
3
1
2
1
0
2
0
1
1
1
2
1
2
53
1
1
2
1
1
1
0
0
0
1
2
1
1
0
2
2
1
2
2
0
1
1
1
1
0
1
1
1
1
1
1
1
2
1
1
2
1
1
1
1
2
0
2
46
1
2
1
1
1
2
1
2
2
2
1
1
1
1
2
1
2
1
2
2
2
1
1
2
2
2
2
1
2
0
1
2
2
1
2
1
1
1
1
1
1
2
1
61
2
1
1
2
1
1
1
1
2
0
0
1
1
1
1
1
0
2
0
1
2
1
2
1
1
2
1
0
1
1
1
1
3
2
1
2
1
0
1
2
1
1
2
50
1
1
2
1
1
2
1
0
2
2
1
1
1
1
1
2
1
1
2
1
1
1
1
1
1
1
1
2
2
2
2
2
2
2
2
1
1
1
1
1
2
1
2
58
1
1
1
1
2
1
1
1
0
1
2
2
1
1
2
3
1
1
0
1
1
1
0
1
3
1
1
1
2
1
1
1
1
3
2
1
2
2
1
2
1
2
1
56
2
2
1
2
1
1
1
1
2
2
2
2
1
2
2
2
2
1
2
2
2
1
1
2
1
2
3
2
1
3
2
1
2
2
3
3
2
2
2
1
1
2
2
77
12
10
13
12
12
13
9
9
11
10
11
11
10
9
13
15
11
14
11
11
13
12
10
13
11
12
13
12
13
13
15
13
16
14
13
16
10
11
10
12
12
14
14
41.38
34.48
44.82
41.38
41.38
44.82
31.03
31.03
37.93
34.48
37.93
37.93
34.48
31.03
44.82
51.72
37.93
48.28
37.93
37.93
44.82
41.38
34.48
44.82
37.93
41.38
44.82
41.38
44.82
44.82
51.72
44.82
55.17
48.28
44.82
55.17
34.48
37.93
34.48
41.38
41.38
48.28
48.28
Appendix K (cont’d)
2.
Ennis-Weir Post-Instruction Critical Thinking Essay Test Scores (Experimental Group - Grp 1 )
Criteria for Evaluating Ennis-Weir Critical Thinking Essay Test
No
1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
20
21
22
23
24
25
26
27
28
29
30
31
32
33
34
35
36
37
38
39
40
41
42
43
Full
name
Group
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
Para. 1
Criteria
#1
(-1 to 3)
Para. 2
Criteria
#2
(-1 to 3)
Para. 3
Criteria
#3
(-1 to 3)
Para. 4
Criteria
#4
(-1 to 3)
Para. 5
Criteria
#5
(-1to 3)
Para. 6
Criteria
#6
(-1 to 3)
Para. 7
Criteria
#7
(-1 to 3)
Para. 8
Criteria
#8
(-1 to 3)
3
1
3
3
3
2
3
2
1
1
1
2
2
3
3
1
3
0
3
3
3
3
3
2
3
3
2
3
3
1
3
1
3
3
3
3
3
1
3
2
3
3
1
101
1
3
1
2
2
2
1
2
3
3
3
3
3
2
1
2
2
2
2
2
1
1
3
2
2
1
1
3
2
1
3
2
2
2
2
3
1
3
1
3
1
2
2
86
2
2
2
3
2
3
3
3
2
3
3
2
1
1
3
2
2
2
1
3
3
3
3
3
2
3
2
1
1
2
2
3
3
2
3
2
2
2
1
1
1
3
1
94
2
3
1
2
1
0
1
1
3
2
2
3
3
2
2
3
3
2
2
3
2
2
2
3
3
2
1
1
3
1
1
3
3
1
1
2
3
3
1
2
3
2
2
88
3
2
3
3
3
1
1
3
3
1
2
2
3
2
3
3
3
1
1
3
3
0
3
2
2
1
2
3
3
3
3
3
3
2
1
3
2
2
3
1
1
1
0
93
2
3
2
2
1
2
2
2
1
3
2
1
3
3
2
3
2
1
2
2
2
2
2
2
1
1
2
1
2
1
2
1
2
2
1
2
2
2
1
1
2
2
2
79
3
2
2
2
2
1
2
2
3
2
2
3
2
2
2
3
1
3
2
3
1
1
2
3
2
0
2
2
1
2
2
2
2
3
2
2
3
2
1
2
2
3
3
89
1
2
3
2
2
1
3
2
1
2
2
2
2
1
2
2
3
2
3
1
2
1
2
2
2
1
2
2
1
3
2
1
2
1
2
1
2
3
2
2
1
3
0
79
Para. 9
Criteria
#9
(-1 to
5)
2
3
3
2
3
1
4
2
3
2
3
4
3
3
3
3
3
1
3
2
2
1
2
2
2
1
2
3
3
2
3
3
4
3
4
2
3
2
1
2
1
2
2
105
Total
(-9 to
29)
Points
19
21
20
21
19
13
20
19
20
19
23
22
22
19
21
22
22
14
19
23
19
14
22
21
19
13
15
19
19
16
21
19
24
19
19
20
21
20
14
16
15
21
13
258
100%
65.52
72.41
68.97
72.41
65.52
44.82
68.97
65.52
68.97
65.52
79.31
75.86
75.86
65.52
72.41
75.86
75.86
48.28
65.52
79.31
65.52
48.28
75.86
72.41
65.52
44.82
51.72
65.52
65.52
55.17
72.41
65.52
82.76
65.52
65.52
68.97
72.41
68.97
48.28
55.17
51.72
72.41
44.82
Appendix L
1. Ennis-Weir Pre-instruction Critical Thinking Essay Test Scores 2 (Control Group – Grp. 2)
Criteria for Evaluating Ennis-Weir Critical Thinking Essay Test
No
Group
Full
name
1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
20
21
22
23
24
25
26
27
28
29
30
31
32
33
34
35
36
37
38
39
40
41
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
Para. 1
Criteria
#1
(-1 to 3)
Para. 2
Criteria
#2
(-1 to 3)
Para. 3
Criteria
#3
(-1 to 3)
Para. 4
Criteria
#4
(-1 to 3)
Para. 5
Criteria
#5
(-1 to 3)
Para. 6
Criteria
#6
(-1 to 3)
Para. 7
Criteria
#7
(-1 to 3)
Para. 8
Criteria
#8
(-1 to 3)
Para. 9
Criteria
#9
(-1 to 5)
2
1
1
1
1
1
1
0
3
1
2
2
3
1
1
1
1
1
2
0
2
2
1
3
3
1
1
2
2
1
2
2
1
1
2
1
0
2
1
2
1
59
1
0
2
2
1
2
2
1
2
1
1
2
1
1
2
1
2
2
2
1
2
2
1
1
2
2
0
2
1
1
2
1
1
1
1
2
1
1
2
2
1
57
2
3
2
2
2
2
1
1
1
1
2
0
2
1
3
2
2
0
2
1
1
2
1
2
1
2
0
0
1
1
1
1
2
1
1
1
2
0
1
2
2
57
0
1
0
0
1
2
0
1
1
2
1
0
0
2
0
2
2
0
1
0
2
1
0
2
1
1
0
1
1
1
0
1
1
0
0
1
0
1
1
0
0
31
2
1
1
3
2
2
2
2
2
1
1
2
1
3
1
2
1
0
3
1
2
1
1
3
1
1
1
1
2
2
2
2
1
2
1
2
2
2
2
2
1
67
2
2
1
2
2
1
2
2
1
1
2
2
2
1
1
2
1
1
0
0
2
1
2
2
0
1
2
1
1
1
2
2
2
1
1
2
1
1
1
0
1
55
1
1
2
1
1
2
1
2
2
1
1
0
1
2
2
1
2
1
2
2
1
1
3
1
1
2
0
1
1
2
1
2
1
2
1
1
2
1
1
1
1
55
2
1
1
2
2
1
2
1
1
1
1
1
1
2
2
2
1
2
1
1
1
3
2
3
2
1
1
1
1
1
1
1
1
1
2
1
1
2
1
1
2
58
1
2
1
2
1
1
2
2
3
2
3
2
2
3
3
1
1
1
2
3
1
2
2
3
3
2
1
1
2
1
2
2
1
2
2
2
3
1
2
2
2
77
Total
-9 to 29
points
13
12
11
15
13
14
13
12
16
11
14
11
13
14
15
14
13
8
15
9
14
15
13
14
14
13
6
10
12
1
13
14
11
11
11
13
12
11
12
12
11
259
100%
44.82
41.38
37.93
51.72
44.82
48.28
44.82
41.38
55.17
37.93
48.28
37.93
44.82
48.28
51.72
48.28
44.82
27.59
51.72
31.03
48.28
51.72
44.82
48.28
48.28
44.82
20.69
34.48
41.38
37.93
44.82
48.28
37.93
37.93
37.93
44.82
41.38
37.93
41.38
41.38
37.93
Appendix L (Cont’d)
Ennis - Weir Post-instruction Critical Thinking Essay Test Scores (Control Group – Grp. 2)
Criteria for Evaluating Ennis-Weir Critical Thinking Essay Test
No
1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
20
21
22
23
24
25
26
27
28
29
30
31
32
33
34
35
36
37
38
39
40
41
Full
nam
e
Gro
up
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
Para. 1
Criteria
#1
(-1 to 3)
Para. 2
Criteria
#2
(-1 to
3)
Para. 3
Criteria
#3
(-1 to 3)
Para. 4
Criteria
#4
(-1 to 3)
Para. 5
Criteria
#5
(-1 to 3)
Para. 6
Criteria
#6
(-1 to 3)
Para. 7
Criteria
#7
(-1 to 3)
Para. 8
Criteria
#8
(-1 to 3)
Para. 9
Criteria
#9
( -1 to 5 )
2
1
2
2
3
2
2
2
2
1
1
2
2
3
2
2
2
3
2
3
0
3
2
1
1
0
3
2
3
3
3
3
3
3
3
2
3
2
3
3
2
89
1
2
2
1
1
2
1
1
1
2
1
2
2
2
0
2
0
3
1
1
1
2
2
2
1
2
1
3
1
2
2
1
2
1
1
1
1
2
1
1
1
58
1
2
1
2
2
2
1
0
2
2
2
1
3
2
2
1
3
1
1
2
3
2
2
1
1
2
2
1
2
1
2
2
2
1
3
2
3
2
2
2
1
72
2
2
1
2
1
1
2
1
1
1
2
1
1
1
1
1
1
0
2
0
1
1
1
1
2
1
1
1
2
1
2
1
1
2
1
1
0
1
2
1
1
49
1
1
2
2
2
2
1
1
0
2
1
1
1
2
2
3
2
2
2
1
2
2
1
2
2
2
1
2
1
2
2
3
2
1
1
1
1
1
2
2
2
66
1
2
1
1
2
2
2
2
2
1
2
2
1
2
2
0
1
1
2
1
2
1
2
2
1
2
1
2
2
2
2
1
2
2
1
2
2
1
2
2
2
66
2
1
2
2
1
1
1
2
2
2
2
2
2
1
2
3
2
2
3
2
2
1
1
2
1
2
2
1
3
2
2
3
1
1
2
3
2
2
2
1
2
75
1
1
1
2
1
1
1
2
1
2
2
1
2
1
2
0
1
1
2
1
2
2
1
2
3
2
1
2
2
2
2
2
2
1
1
3
1
2
2
2
2
65
2
3
2
1
2
3
2
1
2
2
3
2
2
1
1
2
1
1
3
2
1
2
3
2
1
1
2
2
3
3
3
3
1
1
3
2
3
2
3
2
3
84
Total
100%
-9 to 29
points
13
15
14
13
16
16
13
12
13
15
16
14
16
15
14
14
13
14
18
13
14
16
15
15
13
14
14
16
19
18
20
19
16
13
16
17
16
15
19
16
16
260
44.82
51.72
48.28
44.82
55.17
55.17
44.82
41.38
44.82
51.72
55.17
48.28
55.17
51.72
48.28
48.28
44.82
48.28
62.07
44.82
48.28
55.17
51.72
51.72
44.82
48.28
48.28
55.17
65.52
62.07
68.97
65.52
55.17
44.82
55.17
58.62
55.17
51.72
65.52
55.17
55.17
Appendix M
Pre- instruction Test Results of Component CT Performance Assessment (EG)
No
Group
The CTST Scale/Overall Scores
Interpretation.
Analysis.
Evaluation.
Inference.
Explanation.
Total
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
36
42
40
38
40
30
52
32
56
46
44
44
52
40
28
36
44
40
40
48
50
44
38
56
46
40
46
46
46
32
56
40
24
36
52
46
32
46
38
44
24
32
40
44
52
40
36
29
44
52
42
46
48
44
52
44
48
44
44
36
40
52
48
46
44
48
40
46
46
48
36
44
46
45
52
42
38
32
46
38
42
50
38
46
40
44
48
42
32
44
35
38
41
44
50
38
32
46
28
42
52
40
44
40
38
50
38
52
38
46
44
32
36
40
44
46
52
46
48
36
50
38
44
40
48
52
44
40
38
194
212
198
202
206
208
243
186
242
228
196
248
178
220
212
200
228
196
234
260
222
232
212
222
220
210
224
210
216
220
235
215
194
198
232
210
204
214
216
212
210
210
204
Full name
1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
20
21
22
23
24
25
26
27
28
29
30
31
32
33
34
35
36
37
38
39
40
41
42
43
22
44
44
42
60
64
46
36
46
48
32
52
32
46
38
40
52
30
52
60
56
40
42
42
40
44
46
34
42
52
40
34
32
48
46
32
44
46
44
32
50
46
36
44
32
42
42
42
32
52
32
44
48
44
54
32
44
50
40
52
46
52
54
32
52
46
38
44
48
48
52
40
44
42
43
48
40
52
48
46
40
46
46
46
52
46
Appendix M (Cont’d)
Post- instruction Test Results of Component CT Performance Assessment (EG)
No
Full
name
1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
20
21
22
23
24
25
26
27
28
29
30
31
32
33
34
35
36
37
38
39
40
41
42
43
Grou
p
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
The critical thinking skills test scale/ overall scores
Interpretation
Analysis
Evaluation
Inference
Explanation
Total
40
76
38
40
46
78
40
42
76
40
46
44
42
38
48
46
44
46
48
46
72
58
48
47
52
48
48
40
35
46
46
38
42
46
52
44
52
64
62
44
64
46
56
48
40
52
44
52
72
48
34
46
46
44
66
48
58
42
40
55
60
56
60
48
52
62
46
48
32
46
50
69
46
58
56
48
50
62
50
58
64
48
62
52
68
48
242
313
246
238
256
342
208
185
316
202
228
266
250
242
240
242
241
286
278
278
310
266
258
264
262
224
242
272
221
220
254
251
246
242
283
238
280
254
280
250
294
286
264
68
74
64
48
58
76
36
36
76
38
44
52
62
48
48
46
42
64
58
60
76
56
52
70
48
44
52
68
42
42
50
44
54
48
54
50
54
38
54
32
60
50
60
42
81
52
60
52
72
42
32
74
40
50
48
46
48
56
58
52
56
68
64
78
52
50
69
58
48
48
54
35
40
58
61
48
50
55
48
56
46
56
56
64
62
50
44
42
40
46
48
44
42
41
44
38
44
56
52
50
48
52
48
60
48
48
36
48
46
32
56
52
48
60
40
46
42
52
54
48
60
46
60
42
60
56
54
60
50
Appendix N
Pre-instruction Test Results of Component CT Assessment (Control Group)
No
The CTST Scale/Overall Scores
Full
name
1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
20
21
22
23
24
25
26
27
28
29
30
31
32
33
34
35
36
37
38
39
40
41
Group
Interpretation
Analysis
Evaluation
Inference
Explanation
Total
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
44
46
46
48
44
42
46
48
44
42
42
56
36
44
46
28
36
46
42
34
48
52
32
36
34
44
38
40
46
34
48
34
38
46
52
40
48
38
48
52
44
46
38
36
32
40
48
44
46
50
46
34
46
44
44
40
52
44
46
36
52
48
56
34
32
46
54
48
36
36
42
36
56
46
44
30
40
32
46
52
44
36
38
46
46
41
42
48
48
28
48
40
44
34
66
46
48
40
46
48
46
44
34
46
44
46
48
38
40
42
46
40
38
46
36
32
48
46
36
39
48
54
55
48
50
46
52
40
38
46
36
40
44
48
40
44
38
49
51
49
46
34
46
44
52
46
44
40
37
44
48
40
36
48
40
50
46
52
38
42
46
40
46
39
48
46
46
48
48
50
46
44
36
46
40
46
40
42
54
40
54
44
52
38
36
38
40
44
44
32
46
54
38
38
38
52
38
38
36
36
52
36
48
45
46
224
226
220
221
214
226
230
202
218
218
208
222
230
214
237
211
229
230
210
214
210
244
196
202
212
205
216
220
206
190
208
228
208
206
218
200
210
205
236
241
220
Appendix N (Cont’d)
Post-instruction Test Results of Component CT Performance Assessment (Control Group)
No
1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
20
21
22
23
24
25
26
27
28
29
30
31
32
33
34
35
36
37
38
39
40
41
Full
name
Group
The CTST Scale/ Overall Scores
Interpretation Analysis Evaluation
Inference
Explanation
Total
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
40
47
52
49
47
42
46
51
53
44
51
47
52
42
46
52
50
39
35
42
53
39
52
32
46
56
53
42
48
38
37
38
42
44
40
32
42
48
27
42
53
47
32
54
43
52
49
47
38
31
48
41
33
46
38
49
50
52
51
42
36
42
54
32
44
41
36
45
41
48
32
54
57
45
40
45
49
43
45
42
43
44
230
207
239
216
233
219
247
243
229
228
239
214
197
203
224
239
238
213
229
206
250
245
228
209
209
229
240
217
236
184
212
248
209
225
227
224
213
257
206
224
234
52
42
36
52
40
48
52
56
60
48
54
56
30
38
30
44
44
40
54
32
48
45
46
40
32
42
54
40
40
36
32
56
36
40
54
58
40
62
50
52
46
54
34
41
40
46
33
51
49
39
46
42
36
33
47
53
36
45
43
52
53
49
56
40
50
44
48
46
55
42
44
46
55
49
52
47
45
49
40
51
49
43
37
52
56
32
48
47
51
49
46
42
51
42
36
38
46
57
47
40
46
43
58
51
58
43
46
47
42
39
58
34
43
42
37
49
41
40
39
62
36
38
48
Appendix O
Pre- instruction Test Results of CTD Assessment (Experimental Group)
No
Full
name
1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
20
21
22
23
24
25
26
27
28
29
30
31
32
33
34
35
36
37
38
39
40
41
42
43
Grou
p
T-S
(12 items)
The seven CCTDI Dispositional Scales/Aspects
O-M
Analy.
Syste.
CTSC.
(10 items)
(11 items)
(12 items) (9 items)
CT Inq.
(11 items)
C-M
(10 items)
T0TAL
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
38
42
48
48
51
43
46
48
53
22
52
53
42
46
34
44
23
26
46
41
47
52
56
46
20
42
46
49
47
28
57
46
46
43
52
27
38
47
36
53
56
41
37
38
34
44
42
48
44
48
36
36
48
14
42
42
48
28
46
42
44
45
40
46
36
32
30
38
44
42
48
42
42
50
44
44
48
44
28
24
38
24
32
40
40
36
47
37
44
43
46
46
42
34
45
31
19
29
39
42
35
43
44
44
48
19
29
35
51
34
39
43
39
44
53
36
57
44
48
35
48
28
53
42
24
20
47
44
38
36
38
39
36
44
42
44
44
48
28
18
36
40
36
44
44
30
46
40
48
40
46
38
42
44
40
48
40
36
42
36
40
38
44
36
34
48
48
36
48
38
44
46
275.00
250.00
294.00
294.00
303.00
315.00
313.00
294.00
324.00
251.00
200.00
250.00
299.00
285.00
236.00
322.00
236.00
289.00
313.00
240.00
270.00
295.00
331.00
284.00
287.00
303.00
319.00
315.00
291.00
249.00
314.00
301.00
309.00
290.00
292.00
214.00
280.00
305.00
231.00
267.00
295.00
285.00
284.00
42
33
37
43
49
49
44
49
48
53
26
33
42
45
36
45
50
42
45
30
35
47
46
38
45
43
48
46
32
26
25
45
43
36
38
35
32
36
36
42
42
29
50
43
32
42
43
42
49
47
44
49
28
53
37
49
36
19
46
19
44
51
38
41
42
55
46
60
42
48
42
42
47
42
42
47
34
37
40
42
46
32
30
37
42
36
31
34
40
39
23
42
42
39
45
41
18
20
45
32
40
44
28
43
38
24
32
37
53
48
41
49
48
46
39
28
47
40
43
50
37
22
43
48
43
42
35
45
41
Appendix O (cont’d)
Post - instruction Test Results of CTD Assessment (Experimental Group)
No
T-S
(12 items)
O-M
(10 items)
Analy.
(11items)
Syste.
(12items)
CTSC.
(9 items )
CT Inq.
(11 items)
C-M
(10items)
T0TAL
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
43
32
26
48
43
43
46
48
53
42
42
53
41
46
38
47
43
46
43
47
53
53
47
38
41
39
55
33
55
34
55
47
46
53
52
37
53
37
46
43
32
48
41
44
44
34
32
42
37
38
46
34
46
38
44
43
49
43
49
42
44
48
48
46
46
42
40
45
46
45
39
39
42
40
44
45
42
49
48
44
43
28
41
35
43
39
36
33
40
39
44
41
42
49
45
50
46
41
38
45
39
43
50
42
53
37
39
45
46
30
33
43
37
34
42
39
45
49
53
49
48
45
39
41
43
47
42
34
45
48
42
53
43
52
43
53
44
56
48
53
47
49
56
42
53
49
53
51
47
53
49
49
43
53
42
44
52
49
49
53
47
51
54
51
43
40
40
49
44
50
46
43
42
40
42
43
43
42
36
39
41
41
38
40
40
42
40
44
44
39
40
38
43
43
45
44
44
42
43
41
44
40
40
41
40
40
43
42
43
37
42
43
39
39
37
39
40
37
43
51
47
52
44
53
50
34
37
40
37
45
49
50
41
45
44
43
41
53
39
39
39
50
49
50
49
53
43
48
47
48
47
50
46
44
52
44
47
43
40
34
42
36
48
30
42
36
48
43
38
43
44
36
45
43
47
42
43
37
40
41
43
46
48
43
36
33
39
43
48
41
43
44
46
41
39
33
49
35
43
37
39
292.00
265.00
274.00
284.00
323.00
283.00
309.00
306.00
330.00
320.00
289.00
305.00
295.00
311.00
292.00
328.00
325.00
307.00
323.00
298.00
317.00
318.00
325.00
280.00
303.00
294.00
310.00
281.00
318.00
296.00
334.00
312.00
326.00
329.00
337.00
303.00
308.00
277.00
301.00
205.00
285.00
294.00
287.00
Full
name
1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
20
21
22
23
24
25
26
27
28
29
30
31
32
33
34
35
36
37
38
39
40
41
42
43
The seven critical thinking dispositions
Group
Appendix P
Pre-instruction Test Results of CTD Assessment (Control Group)
[Raw-score table: one row per student (No. 1-41, all in Group 2), with scores on the same seven CCTDI dispositional scales (T-S, O-M, Analy., Syste., CTSC., CT Inq., C-M) and the overall CCTDI total. Overall pre-instruction totals, in the order listed: 327, 302, 318, 315, 256, 282, 300, 223, 292, 301, 270, 230, 306, 303, 269, 308, 299, 285, 237, 304, 280, 221, 275, 265, 289, 289, 281, 303, 276, 301, 268, 265, 266, 286, 305, 312, 286, 284, 271, 260, 312.]
Appendix P (cont'd)
Post-instruction Test Results of CTD Assessment (Control Group)
[Raw-score table: one row per student (No. 1-41, Group 2), with scores on the same seven CCTDI dispositional scales and the overall CCTDI total. Overall post-instruction totals, in the order listed: 274, 276, 279, 305, 276, 281, 280, 293, 290, 287, 270, 280, 276, 281, 290, 258, 299, 268, 288, 278, 309, 294, 302, 275, 309, 289, 321, 283, 276, 271, 278, 275, 276, 299, 265, 292, 266, 309, 295, 310, 312.]
Appendix Q
A Summary of Critical Thinking Classroom Activities, Assignments, Assigned Source Readings, and Instructional Techniques used throughout the Course Material for the Experimental Group
The only capacity we can use to learn is 'human thinking'. If we think well while learning, we learn well; if we think poorly while learning, we learn poorly. To learn a body of content, say an academic discipline, is equivalent to learning to think within that discipline (academic writing, in this case) at every level, including real-life situations. Through critical thinking, then, we are able to acquire knowledge, understanding, insights, and skills in any given body of content. To learn content, we must think analytically and evaluatively within that content. Critical thinking thus provides tools both for internalizing content (taking ownership of it) and for assessing the quality of that internalization. It enables us to construct in our minds the system that underlies the content, to internalize it, and to use it in reasoning through actual problems and issues. The main purpose of these critical thinking activities, assignments, and instructional techniques is therefore to help you:
1. Learn explicitly the skills of critical thinking;
2. Develop the skills/abilities needed to think critically within the content of academic writing: analyzing, evaluating, interpreting, and synthesizing and/or integrating information from different sources; constructing and arguing a case to explain the evidence; considering various points of view fairly; and identifying assumptions accurately, both in analyzing academic documents (readings) such as term papers and essays and in writing academic essays/papers;
3. Use and/or apply these same thinking abilities in everyday reasoning tasks, to increase the likelihood of skill transfer to complex problems in other disciplines at school and to everyday situations in the local community (for example, analyzing newspaper editorials and campaign speeches, critiquing the logic of political debates, and analyzing the credibility of information related to political issues);
4. Develop dispositions (positive attitudes) toward thinking critically and become successful users of your critical thinking abilities/skills in writing academic essays/papers for studies in your disciplines; and
5. Develop the EFL written proficiency expected in general education courses and studies in the disciplines, and express your ideas clearly and logically in writing.
A Summary of Critical Thinking Activities and Assignments used throughout
the Course Material for the Experimental Group
The critical thinking activities and assignments summarized below (Tables 1 and 2) were adapted from the literature reviewed and designed to serve as the basis of the course material for the experimental treatment in this study.
Table 1. Critical Thinking Activities used throughout the Course Material

Skill worksheets (target CT skills practiced: all six CT skills)
At the start of each CT section, skill worksheets such as the CT packet, self- and peer-editing sheets, the general essay checklist, the CT toolbox, models for critical thinking, and key questions were handed out to help students engage in the skill during the CT and writing activities/assignments.

Photographs or images (target CT skills practiced: Interpretation, Analysis, and Evaluation)
To help the transfer of critical thinking skills to other media, some class time was devoted to analyzing and critiquing several photographs or images contained in the course material for the experimental treatment. The instructor also modeled critiques of pictures/photographs/video segments from books, news, and popular documentaries.

Reviewing and analyzing the reviewer (target CT skills practiced: Interpretation, Analysis, and Evaluation)
Students were assigned to read an article, op-ed piece, or book/movie review. The purpose was to help students assess and identify the article writer's position as conveyed in citations of the opinions and arguments of others, and the writer's agreement/disagreement with these opinions/arguments.

Modeling article critiques
Using an article assigned for class, the instructor verbally reviewed the process of critiquing the article, citing specific examples of bias, assumptions, inaccurate information, etc. A copy of the article with the teacher's critiques was given to students to review.

Student essay or article critiques (target CT skills practiced: Interpretation and Analysis)
Usually in small groups, students practiced critiquing short essays/articles given in class related to the topic and presented their findings to the class. To incorporate inference, students described their team's 'solution' to the issue.

Analyzing and explaining data (target CT skills practiced: Evaluation, Explanation, and Inference)
Students were presented with data in graph, chart, table, or text form. Students were asked to discuss, describe, and explain the data in pairs before they began writing.

Modeling self-regulation (target CT skills practiced: Self-Regulation and Explanation)
At the start of the semester, the teacher introduced himself to students by describing his own experiences as sources of bias and perspective. At the end of his section, the teacher also summarized his own thought processes on the issue.

Student self-regulation (target CT skills practiced: Self-Regulation and Explanation)
Adapted to each essay mode or article, students adjusted themselves to different contexts, conditions, and/or environments to represent (either graphically or physically) their opinions about the section issue by lining up between two 'opposite ends' of the spectrum. In groups, students then explained their position and the information they drew from the article to frame that opinion. Students were also asked to model the teacher's introductions at the start of each CT section by describing their own values and biases relating to the topic.

Engaging in discussion (target CT skills practiced: Interpretation, Analysis, Evaluation, Inference, and Explanation)
The teacher set aside 'Question and Answer' periods for students. If a prompt was needed, each student identified and categorized the strengths and weaknesses of the assigned readings, with each student offering 'one strength / one weakness' of the assigned readings.
Critical Thinking Assignments
According to Hofreiter (2005), writing assignments are the primary vehicles for students to practice thinking critically. These assignments used real-world scenarios and asked students to synthesize the information they had learned in the section and use it to demonstrate the critical thinking skills they had learned; while only one skill was highlighted during course activities, multiple skills were required to complete most assignments. The examples of assignments and the target skills practiced, given below (Table 2), were adapted from Hofreiter (2005), Sims (2009), and Beaumont (2010) and designed for use in this study.
Table 2. Critical Thinking Assignments used throughout the Course Material

Inquiring further into the issue taught (target CT skills practiced: Interpretation, Analysis, Evaluation, Inference, Explanation, and Self-Regulation)
Students applied their self-regulation skill to research, read, and/or write their own essays, perhaps similar to the material discussed in the classroom, or to interview a person who might have an experience similar to that of the person in the story. They then shared that experience by telling or describing to the class how it had affected their own lives and others'.

Making decisions to solve a community problem (target CT skills practiced: Interpretation, Analysis, Evaluation, Inference, and Explanation)
Students identified a problem in a particular area of campus (college/department) based on their personal perspectives, synthesized various readings (and opinions) on the issue, and communicated both in writing and orally, trying to persuade the college/department to recognize the problem and remedy it. Students also made recommendations, in the role of a college researcher, for the college's next steps in managing these issues.

Analyzing, evaluating, and discovering the tools employed in a model paper in order to develop a similar academic paper (target CT skills practiced: Interpretation, Analysis, Evaluation, Synthesis, Explanation, and Self-Regulation)
This assignment asked students to read, analyze, evaluate, and discover the writing tools the author employed in the term paper "Hate Speech on College Campuses" included in the material for the experimental treatment. Students were assigned to develop their own term paper on a similar issue in the context of Ethiopian university campuses, or of the society of which they are now part, and to submit it. This helps students present their own interpretation, evaluation, or argument on a given academic topic and increase their expertise in that particular area of the course as good academic writers. Students were also required to carry out some kind of self- and peer critique of each other's work. The teacher then selected four different papers and read them to the class, asking the class to evaluate them.

Students' letter to the author of an article (Tourism Development in Ethiopia) (target CT skills practiced: Interpretation, Analysis, Evaluation, Inference, and Explanation)
Students were assigned to read the article (Tourism Development in Ethiopia, by Tony Hickey) and to analyze and evaluate the thinking (argument) it presents. Students wrote and mailed a letter to the author of the article in response to each of the paragraphs and to the article as a whole. Students identified whether the argument in the article was credible and valid based on the logic and evidence given. The assignment asked students to summarize the main arguments presented, the assumptions, the biases, and the information supporting the main point of view, and also to address the country's role in tourism development compared with many other countries.

Critiquing two articles on a similar topic (target CT skills practiced: Interpretation, Analysis, Evaluation, Explanation, and Self-Regulation)
Students were asked to read and critique two articles on a similar topic, particularly on some "hot" issue discussed in print media or on Internet news sites. The purpose was to help students read the articles; identify the authors' theses, main points, and supports; underline them to build a paraphrase of the authors' main points; and then decide which of the authors' points were valid and which were not. Students summarized each article and rated each for credibility.
A typical class period included a lecture (no more than 10-15 minutes of a 50-minute class period) and some kind of student activity, typically a class discussion of assigned source readings or one of the activities described below. In addition to discussion focused on reading documents from different sources (textbooks and additional academic writing documents), the experimental group was given other assignments and regular classroom activities that required the use of higher-order thinking skills and academic writing skills. The main examples of these activities are summarized as follows:
1. Students were challenged to see reading as a process and were encouraged to read the text more than once and, as they did so, to question the text to determine the author's argument and the text's stylistic choices and structure. Students were also assigned to learn annotating (a skill crucial to making meaning from a text) by taking critical or explanatory notes, commenting on the text in notes, summarizing, and descriptive outlining.
2. The instructor scaffolded instruction in specific thinking strategies, beginning with basic questioning strategies and then building toward the abilities to make inferences and to analyze, synthesize, and evaluate. Students were then assigned to read and to answer instructor-prepared questions (see Numrich's Sequence, App. B) on the given topic. The instructor facilitated a Socratic discussion of the purposes listed in the reading material in the experimental section (listing, defining, and briefly interpreting each of the purposes of the topic and of the author as listed in the reading documents), while discussion in the control group/section focused on questions requiring definitions or factual and general information.
3. Students were assigned to read and to verbally analyze a written text (or part of a text, for example a passage or a paragraph) in small groups, to compare and understand the constructed nature of lived and text worlds, and to critique the message it puts forward. While reading a text, the instructor asked his students to evaluate and identify all the images and concepts present in the assigned reading; after this was mastered, students were challenged to move from verbal analysis to written analysis, communicating the tangibles and intangibles present in the work.
4. Students in each section also participated in a structured controversy in which each student read a set of documents on academic papers/essays, took a position favoring or opposing the topic based on questions provided by the instructor, prepared to support his or her position using general information found in the assigned readings, argued his or her position within groups of four or five students in class, and finally switched positions in order to understand the entire controversy better. Each group of four to five students then attempted to reach consensus based on evidence and the strength of the arguments and wrote a group position paper. This activity was handled in the same way for each group, except that the experimental section was reminded to use the critical thinking skills in the instructional model (Paul's model) in analyzing and interpreting the documents, preparing their arguments, and writing the position paper. The instructor gave each student a copy of an argumentative essay critique form to assess each portion of the assignment (a minimal sketch of such a form appears after this list). In the experimental section, the instructor explicitly related the criteria to Paul's Universal Intellectual Standards (clarity, accuracy, precision, relevance, depth, breadth, and logic), which must be applied to thinking whenever one is interested in checking the quality of reasoning about a problem, issue, or situation.
5. Students in each section were assigned to write an essay. Students received packets containing an essay question on the source readings and general information for preparing their essays. Before turning in their essays, students in all sections had the opportunity to read and evaluate another student's essay using the instructor's grading criteria and to have their own essay evaluated by a peer. Experimental group students were reminded to use the critical thinking toolbox (model) in analyzing and interpreting documents, and grading standards were explicitly related to the intellectual standards found in Paul's model (clarity, accuracy, relevance, depth, etc.).
6. A sixth type of activity that encouraged critical thinking required students to analyze and to critique newspaper editorials and campaign speeches. For example, students were assigned to read about and to critique the political, economic, social, and cultural characteristics of events related to the topic (for instance, critiquing the logic of political debates and analyzing the credibility of information related to these political issues).
The six activities described above all relate to thinking within the content of assigned source readings. Analyzing these source readings, exploring and interpreting multiple causation, characterizing an event by examining its elements of reasoning, and developing and evaluating an argument supported by evidence from the source readings were all typical activities used in the course in order to develop in learners the ability to think critically and to write academic essays thoughtfully.
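As noted in activity 4 above, the argumentative essay critique form related its criteria to Paul's Universal Intellectual Standards. The following minimal Python sketch is only an illustration of how such a critique form could be represented and tallied; the 1-5 rating scale, the summary fields, and the example ratings are assumptions made for the sketch, not the actual form used in the study.

    # Hypothetical sketch: representing a peer critique keyed to Paul's
    # Universal Intellectual Standards and summarizing the ratings.
    STANDARDS = ("Clarity", "Accuracy", "Precision", "Relevance",
                 "Depth", "Breadth", "Logic")

    def summarize_critique(ratings):
        """Summarize peer ratings (1 = weak ... 5 = strong) across the standards."""
        missing = [s for s in STANDARDS if s not in ratings]
        if missing:
            raise ValueError("Missing ratings for: " + ", ".join(missing))
        total = sum(ratings[s] for s in STANDARDS)
        return {"total": total,
                "average": round(total / len(STANDARDS), 2),
                "weakest standard": min(STANDARDS, key=lambda s: ratings[s])}

    # Example peer review of one group position paper (invented ratings):
    print(summarize_critique({"Clarity": 4, "Accuracy": 3, "Precision": 3,
                              "Relevance": 5, "Depth": 2, "Breadth": 3,
                              "Logic": 4}))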
The following is an illustrative lesson plan for a discussion of an assigned reading from a unit on sample student argumentative essay writing. Students were assigned to read document 1 (see Materials and Methods above). Students in the experimental section were told to complete the 'reasoning' form for sample student argument essay 1 (Danger Around Every Corner).
Activity Script:
1. Students in small groups ... 10 minutes
Each group read and discussed the topic (Danger Around Every Corner) as analyzed by the various group participants, along with the point of view and the credibility of the author's information in this document (a sample student essay).
2. Class discussion of source reading ... 20 minutes
Students were called on to identify the various points of view (elements) shown in the letter (student essay). Students were also asked to point out examples of emotive language (reasoning fallacies) when they found them. The varied purposes of the writer, his or her assumptions about other people's experiences, the inferences he or she made about the causes of car accidents, and the limits of personal experience as a source of data were discussed. The impact of this essay on people's thinking was highlighted. Some class time was also devoted to analyzing several photographs of different car accidents contained in the textbook (module). Individually or in groups, students were also asked to critique the analyses, syntheses, or evaluations of others.
3. Summation ... 5 minutes
The instructor guided students to recognize how this student essay about 'Danger Around Every Corner' informs our understanding of car accidents and their effects on our political, economic, social, and cultural lives in the country we live in.
Control group
The following is an illustrative lesson plan for the control section for a discussion of an assigned reading from a unit on sample student argument essay writing, section 1 (Danger Around Every Corner). Students were assigned to read the sample essay written by a student. Students in the control section were also assigned to answer the questions at the end of the essay.
Activity Script:
1. Students in small groups ... 10 minutes
In small groups, students went over the answers to the questions assigned from the end of the essay.
2. Class discussion of source reading ... 20 minutes
Students were called on to answer the assigned questions and other questions that arose from student comments, supported by relevant references from the reading text (Danger Around Every Corner); almost all class time was devoted to reading the essay and answering factual questions and definitions.
3. Summation ... 5 minutes
The instructor guided students to recognize how this student essay about 'Danger Around Every Corner' informs our understanding of car accidents and their impacts on our political, economic, social, and cultural lives in the country.
Table of Contents
Page
ACKNOWLEDGEMENTS ............................................................................................... i
TABLE OF CONTENTS ....................................................................................................ii
LIST OF TABLES ...............................................................................................................viii
LIST OF FIGURES .............................................................................................................x
ABSTRACT ..........................................................................................................................xi
CHAPTER I: Introduction
1.1. Background of the Study ................................................................................................1
1.2. Statement of the Problem ................................................................................................6
1.3. Objectives of the Study ...................................................................................................11
1.4. Significance of the Study ................................................................................................13
1.5. Operational Definitions of Terms ...................................................................................14
1.6. Scope of the Study………………………………………………………………………15
1.7. Limitations of the Study..................................................................................................16
CHAPTER II: Review of Related Literature
2.0. Introduction .............................................................................................................................17
2.1. Critical Thinking .....................................................................................................................17
2.1.1. Defining Critical Thinking ...........................................................................................17
2.1.1.1. The Philosophical Approach ............................................................................18
2.1.1.2. The Cognitive Psychological Approach ..........................................................21
2.1.1.3. The Educational Approach ..............................................................................23
2.1.1.4. Attempts at Consensus: the APA Delphi Study Definition .............................23
2.1.1.5. Critical Thinking Definitions Adopted for This Study ....................................25
2.2. Can Critical Thinking be Taught? ..........................................................................................27
2.3. The Importance of Learning to Think Critically.................................................................... 28
2.4. Theoretical Perspective Underpinning the Teaching of Critical Thinking .............................32
2.5. The Need for Explicit Instruction in Critical Thinking ..........................................................34
2.6. Approaches to Teaching Critical Thinking .............................................................................36
2.6.1. The General Approach ............................................................................................... 37
2.6.2. The Infusion Approach ................................................................................................37
2.6.3. The Immersion Approach ............................................................................................38
2.6.4. The Mixed Approach ...................................................................................................38
2.7. Critical Thinking Teaching Techniques/Instructional Strategies ...........................................41
2.8. Attributes a Critical Thinker Needs to have ...........................................................................46
2.8.1. Critical Thinking Skills ................................................................................................46
2.8.2. Critical Thinking Dispositions .....................................................................................49
2.9. Teaching Critical Thinking for Transfer across Domains .....................................................50
2.9.1. Dispositions for Effortful Thinking and Learning .......................................................51
2.9.2. A Skills Approach to Critical Thinking .......................................................................53
2.9.3. Structure Training to Promote Transfer .......................................................................56
2.9.4. Metacognitive Monitoring ...........................................................................................58
2.10. The Relationship of Critical Thinking to Other concepts .....................................................58
2.10.1. Metacognition ...........................................................................................................59
2.10.2. Motivation .................................................................................................................60
2.10.3. Creativity...................................................................................................................61
2.11. Critical Thinking Assessment ...............................................................................................61
2.11.1. Purposes of Critical Thinking Assessment ...............................................................61
2.11.2. Approaches to Critical Thinking Assessment ...........................................................62
2.11.2.1. Commercially Available Critical Thinking Tests ......................................63
2.11.2.2. Alternatives to Commercial Instruments ...................................................65
2.11.3. Measuring Critical Thinking: Some Considerations ................................................66
2.11.4. Measuring Critical Thinking Dispositions ................................................................68
2.11.4.1. Techniques of Evaluating Critical Thinking Dispositions .........................70
2.11.4.1.1 Direct Observation ....................................................................71
2.11.4.1.2. Rating Scales ............................................................................72
2.11.4.1.3. Learner Self-Assessment .........................................................72
2.11.4.1.3.1. Surveys/ Questionnaires......................................72
2.11.4.1.3.2. Reflective Learning Logs ........................................73
2.11.4.1.4. Essay Tests .................................................................................74
2.12. Academic EFL Writing Skills ...............................................................................................75
2.12.1. Teaching Academic ESL/EFL Writing Skills............................................................77
2.12.2. The Importance of Teaching Academic Writing in the University ...........................79
2.12.3. Most Important Characteristics of Academic Writing ...............................................80
2.12.4. Most Common Student Written Academic Assignments and Tasks .........................82
2.12.4.1. Major Writing Assignments ........................................................................82
2.12.4.2. Medium-Length Essays and Short Written Tasks ......................................83
2.12.4.3. English Composition Writing Tasks ...........................................................83
2.12.5. Essential Features of Academic Text and Importance of Teaching Them ................84
2.12.5.1. Features of Academic Genre and Text.........................................................84
2.12.5.2. Teaching Academic Text Features..............................................85
2.12.6. Types of Academic Writing Tasks in Commonly Required Academic Courses ...... 88
2.12.6.1. Most Common Types of Academic Writing Tasks .....................................88
2.12.6.2. Less Common Writing Tasks ......................................................................89
2.12.6.3. Less Common Rhetorical and Writing Tasks ..............................................90
2.13. Critical Thinking and L2/Foreign Language Learning .........................................................91
2.13.1. Critical Thinking and Academic EFL/ESL Writing .................................................92
2.13.2. The Relationship of Critical Thinking to Creative Thinking and Thoughtful
Writing ....................................................................................................................94
2.13.3. Teaching Critical Thinking in Academic EFL Writing Classes ...............................96
2.13.3.1. Techniques in Using Critical Thinking Questioning ................................98
2.13.3.2. Techniques in Using Critical Reading ........................................................ 105
2.13.3.3. Techniques in Using Critical Thinking Skills Tasks ................................106
Summary of Literature Review .......................................................................... 109
CHAPTER III: The Research Design and Methodology
3.0. Introduction ...........................................................................................................................110
3.1. The Research Design ............................................................................................................110
3.1.1. Quasi-experimentation ..............................................................................................110
3.1.2. The Study Variables ...................................................................................................115
3.2. The Research Paradigm ........................................................................................................115
3.3. Institutional Setting ...............................................................................................................118
3.4. The Research Participants .....................................................................................................118
3.5. The outcome Measures .........................................................................................................119
3.5.1. The Instructor/Researcher-developed Essay Test ......................................................119
3.5.2. The Ennis-Weir Critical Thinking Essay Test ............................................................122
3.5.2.1. General Critical Thinking Performance Assessment .....................................122
3.5.2.2. Component Critical Thinking Performance Assessment ...............................127
3.5.3. The California Critical Thinking Dispositions Inventory (CCTDI) ...........................130
3.5.3.1. The Seven CCTDI Dispositional Scales ........................................................131
3.5.3.2. The Validity and Reliability of the CCTDI ...................................................132
3.6. The Rating Scale for Test Instruments..................................................................................135
3.6.1. The Instructor-developed Test ....................................................................................135
3.6.2. The Ennis-Weir Critical Thinking Essay Test ............................................................135
3.6.3. The California Critical Thinking Dispositions Inventory (CCTDI) ...........................136
3.7. The Research Hypothesis ......................................................................................................136
3.8. The Experimental Procedure.................................................................................................137
3.9. Instructional Method and Materials ......................................................................................139
3.9.1. Experimental Group ....................................................................................................139
3.9.2. The Control Group ......................................................................................................145
3.10. Method of Data Analysis ....................................................................................................147
3.11. A Short Summary of the Pilot Study ..................................................................................149
Summary of the Research Design and Methodology ....................................... 153
CHAPTER IV: Results of the Study
4.0. Introduction ...........................................................................................................................154
4.1. Description of Sample...........................................................................................................154
4.2. Analysis of Data and Interpretation ......................................................................................155
4.2.1. Effect of Explicit Instruction in Critical Thinking on Student Achievement in Writing
Academic Essays ........................................................................................................155
Research Question and Related Hypothesis ...............................................................155
4.2.1.1. Descriptive Statistics......................................................................................156
4.2.1.2. Independent-Samples T-Test .........................................................................158
4.2.2. Effect of Explicit Instruction in Critical Thinking on Student Achievement in General
Critical Thinking Ability............................................................................................159
Research question and related Hypothesis ................................................................ 159
4.2.2.1. Descriptive Statistics.....................................................................................159
4.2.2.2. Independent Samples T-Test........................................................................ 162
4.2.3. Effect of Explicit Instruction in Critical Thinking on Component Critical Thinking
Performance and Strengths in the Ennis-Weir ..........................................................162
Research Question and Related Hypothesis .............................................................162
4.2.3.1. Descriptive Statistics.................................................................................. 163
4.2.3.2. Independent Samples T-Test........................................................................165
4.2.4. Effect of Explicit Instruction in Critical Thinking on Student Achievement in
Dispositions toward Critical Thinking .......................................................................169
Research Question and Related Hypothesis ..............................................................169
4.2.4.1. Descriptive Statistics.....................................................................................170
4.2.4.2. Independent-Samples T-Test ........................................................................172
4.2.5. Correlations .................................................................................................................174
Summary of Results ............................................................................................. 175
CHAPTER V: Discussion of Findings, Conclusions, and Recommendations
5.0 Introduction ...........................................................................................................................176
5.1 Discussion of the Results ......................................................................................................176
5.1.1. Research Question One ...............................................................................................176
5.1.2. Research Question Two ..............................................................................................178
5.1.3. Research Question Three ............................................................................................182
5.1.4. Research Question Four ..............................................................................................187
5.1.5. Relationships among Achievement on the three instruments .....................................192
5.2. Summary of Findings ............................................................................................................193
5.3. Implications for Practice .......................................................................................................195
5.4. Limitations ...........................................................................................................................197
5.5. Recommendations ................................................................................................................198
References ............................................................................................................. 199
Appendices ............................................................................................................ 219
LIST OF TABLES
Table 2.1: Undergraduate Faculty Assessments of Some Writing Tasks
Table 2.2: Graduate Faculty Assessments of Some Writing Tasks
Table 2.3: Questions to Fire up Students’ Critical Thinking Skills
Table 2.4: Numrich’s Sequence of Critical Thinking Skills Tasks
Table 3.1: Recommended Performance Assessments for the CCTST Scale Scores
(100-point versions).
Table 3.2: Descriptions of Recommended Performance Assessment of the CCTST
Overall Scores
Table 3.3: Internal Consistency Reliability for CCTDI
Table 3.4: Comparison of Instructional Methods and Materials for Experimental and
Control Groups
Table 4.1: Distribution of Mean Pretest and Posttest Scores by Group and Research
Instruments
Table 4.2: Distribution of Mean Pretest and Posttest scores on Academic Essay Writing
Skills Test
Table 4.3: Analysis of the Independent Samples Test for achievement in academic essay writing
skills test at the posttest of the experiment between experimental and control groups
Table 4.4: Distribution of Mean Pretest and Posttest Scores on students’ abilities to Think
Critically on Ennis-Weir Critical Thinking Essay Test.
Table 4.5: Analysis of the Independent Samples Test for achievement in critical thinking
ability test between Experimental and Control groups at the posttest of the
experiment
Table 4.6: Distribution of the mean pretest and posttest scores (both scale and overall)
in five components of critical thinking performance in the Ennis-Weir Critical
Thinking Essay Test.
Table 4.7: Analysis of Independent Samples t-test for achievement in component critical
thinking skills (both scale and overall) between Experimental and Control Groups
at the Posttest of the experiment
Table 4.8: Analysis of the Frequency distribution of student strengths/weaknesses in CTST
scale scores between the experimental (N=43) and control (N=41) groups at
the posttest of the experiment.
Table 4.9: Distribution of Mean Pretest and Posttest Scores on Dispositions toward using
Critical thinking Skills.
Table 4.10: Analysis of Independent-samples t-test for achievement in dispositions toward
using critical thinking between Experimental and Control Groups at the Posttest
of the experiment.
Table 4.11: Correlation Matrix for outcome Variables
LIST OF FIGURES
Figure 3.1: Untreated Control Group Design with Pretest and Posttest
Figure 4.1: Distribution of mean pretest academic writing skills test total scores between
experimental and control groups.
Figure 4.2: Distribution of mean posttest academic writing skills test total scores between
experimental and control groups.
Figure 4.3: Distribution of mean pretest Critical Thinking Ability Test total score in the Ennis-Weir CTET between experimental and control groups.
Figure 4.4: Distribution of mean posttest critical thinking ability test total scores in the Ennis-Weir Critical Thinking Essay Test between experimental and control groups.
Figure 4.5: Distribution of mean pretest overall scores on component critical thinking
performance on the Ennis-Weir Critical Thinking Essay Test.
Figure 4.6: Distribution of mean posttest total scores on component critical thinking
performance in Ennis-Weir Critical Thinking Essay Test.
Figure 4.7: Distribution of mean pretest CTD total scores between experimental and control
groups
Figure 4.8: Distribution of mean posttest CTD overall scores between experimental and
control groups
Abstract
The purpose of this study was to examine empirically the effectiveness of explicit instruction in
critical thinking on university undergraduate students' (1) ability to think critically in writing their academic essays, (2) ability to think critically about everyday issues, and (3) dispositions toward critical thinking. Two sections of AAU students taking an academic writing course participated in this
study. Two intact sections were randomly assigned to serve as the experimental and control groups.
The experimental group (n=43) received approximately 13 weeks of explicit instruction in critical thinking, while the control group (n=41) was taught in a more traditional (implicit) manner as opposed to explicit instruction. Students in both groups took three pre- and post-instruction tests to measure
the effectiveness of the instructional techniques. Thus, (1) academic argumentation and argument
development essay writing test, (2) the Ennis-Weir Critical Thinking Essay Test, and (3) the
California Critical Thinking Dispositions Inventory (CCTDI) were used as outcome measures. A two-group pretest/posttest quasi-experimental design was employed.
Descriptive and Independent-samples t-test statistics were used to analyze the data from each
instrument. Significant differences were obtained between experimental and control groups in each
of the three instruments (total scores). Though apparent differences were found between
experimental and control groups with some of the critical thinking skills (scale scores) and
dispositional aspects (scale scores), such differences were not statistically significant. The
experimental group scored significantly higher on the Academic Writing Skills Test (p<0.05), the
Ennis-Weir Critical Thinking Essay Test (p<0.05), Component Critical Thinking Skills Overall
(p<0.05), and the California Critical Thinking Dispositions Inventory (CCTDI) Overall (p<0.05)
indicating large effect sizes (Cohen’s d=1.46, 2.16, 0.95, and 0.81) for each instrument. Statistical
tests, however, did not show significant differences on some component skills of Critical Thinking
and some dispositional aspects of California Critical Thinking Disposition Inventory scale scores.
Three major findings emerged from this study: Students’ abilities to think critically, performance in
writing academic papers, and dispositions toward critical thinking were improved by the
instructional techniques. These significant changes in students’ achievement at the end of a
semester-long instructional treatment would suggest that the technique may provide an effective
strategy for building critical thinking skills and learning in the classroom. In the light of these
findings, it is important that continuing effort be made to monitor the effects of integrating explicit
instruction in Critical Thinking into learning academic subjects (contents).
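As an illustration only of the group comparison described above (not the analysis scripts actually used in the study), the following Python sketch computes an independent-samples t-test and Cohen's d with a pooled standard deviation. The scores are the first ten posttest CCTDI totals of each group, excerpted from Appendices O and P merely to make the example runnable, so the resulting values differ from the full-sample (n=43 and n=41) statistics reported above.

    # Illustrative sketch: independent-samples t-test and Cohen's d for two groups.
    import numpy as np
    from scipy import stats

    experimental = np.array([292, 265, 274, 284, 323, 283, 309, 306, 330, 320], dtype=float)
    control = np.array([274, 276, 279, 305, 276, 281, 280, 293, 290, 287], dtype=float)

    t_stat, p_value = stats.ttest_ind(experimental, control, equal_var=True)

    # Cohen's d: mean difference divided by the pooled standard deviation.
    n1, n2 = len(experimental), len(control)
    pooled_sd = np.sqrt(((n1 - 1) * experimental.var(ddof=1) +
                         (n2 - 1) * control.var(ddof=1)) / (n1 + n2 - 2))
    cohens_d = (experimental.mean() - control.mean()) / pooled_sd

    print(f"t = {t_stat:.2f}, p = {p_value:.3f}, Cohen's d = {cohens_d:.2f}")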
ACKNOWLEDGEMENTS
First of all, I gratefully acknowledge my deep indebtedness to my research supervisor and advisor, Dr Geremew Lemu, who over the years has kindly provided me with valuable comments, corrections, and suggestions, and whose ideas gave rise to this study in the first place.
I am very grateful to Jimma University for the sponsorship I have been given to undertake this
project.
I also wish to thank Addis Ababa University administration staff, particularly the department of
Foreign Languages and Literature, for the support and understanding, which has been vital for the
completion of this thesis.
I am indebted to the foreign language educators at AAU, Dr. Getaneh Mehari, Bereket Abebe (M.A.), and Kibur Engida (M.A.), and to the undergraduate students in the departments of Civil Engineering, Mechanical Engineering, and Economics who participated in this study. Without their commitment and
enthusiasm, it would have been impossible to carry out this study.
I am indebted to my long-suffering friend of many years in this great adventure, Dr. Andinet Shimelis; I appreciate his encouragement, support, and assistance. Many thanks also to Ato Yimam Workineh (MA) and Wondimagegn (MA), both from Jimma University, for their contributions as raters for the instructor-made test, the Ennis-Weir, and the CCTDI respectively.
Lastly, I wish to thank my whole family, particularly Sintayehu Fekadu, W/ro Belayinesh Belachew, Tsega Amare, Thewodros Belachew (Pharma D), Solomon Amare, Daniel Tasew, Sophonias, Misgana, and Kale-ab Adege, as well as Teshome Moges, Tadesse Waktola, and other friends and loved ones who so graciously shared ideas, finances, and moral support with me during my stay at Addis Ababa University.
My sincere thanks also go to W/ro Hilina Dejene, who typed the whole manuscript, which was by no
means an easy task. My thanks are also due to ILS-Library staff for very kindly helping me with
required reference materials.
Table of Contents
Page
ACKNOWLEDGEMENTS ............................................................................................... i
TABLE OF CONTENTS ....................................................................................................ii
LIST OF TABLES ...............................................................................................................viii
LIST OF FIGURES .............................................................................................................x
ABSTRACT ..........................................................................................................................xi
CHAPTER I: Introduction
1.1. Background of the Study ................................................................................................1
1.2. Statement of the Problem ................................................................................................6
1.3. Objectives of the Study ...................................................................................................11
1.4. Significance of the Study ................................................................................................13
1.5. Operational Definitions of Terms ...................................................................................14
1.6. Scope of the Study………………………………………………………………………15
1.7. Limitations of the Study..................................................................................................16
CHAPTER II: Review of Related Literature
2.0. Introduction .............................................................................................................................17
2.1. Critical Thinking .....................................................................................................................17
2.1.1. Defining Critical Thinking ...........................................................................................17
2.1.1.1. The Philosophical Approach ............................................................................18
2.1.1.2. The Cognitive Psychological Approach ..........................................................21
2.1.1.3. The Educational Approach ..............................................................................23
2.1.1.4. Attempts at Consensus: the APA Delphi Study Definition .............................23
2.1.1.5. Critical Thinking Definitions Adopted for This Study ....................................25
2.2. Can Critical Thinking be Taught? ..........................................................................................27
2.3. The Importance of Learning to Think Critically.................................................................... 28
2.4. Theoretical Perspective Underpinning the Teaching of Critical Thinking .............................32
2.5. The Need for Explicit Instruction in Critical Thinking ..........................................................34
2.6. Approaches to Teaching Critical Thinking .............................................................................36
ii
2.6.1. The General Approach ............................................................................................... 37
2.6.2. The Infusion Approach ................................................................................................37
2.6.3. The Immersion Approach ............................................................................................38
2.6.4. The Mixed Approach ...................................................................................................38
2.7. Critical Thinking Teaching Techniques/Instructional Strategies ...........................................41
2.8. Attributes a Critical Thinker Needs to have ...........................................................................46
2.8.1. Critical Thinking Skills ................................................................................................46
2.8.2. Critical Thinking Dispositions .....................................................................................49
2.9. Teaching Critical Thinking for Transfer across Domains .....................................................50
2.9.1. Dispositions for Effortful Thinking and Learning .......................................................51
2.9.2. A Skills Approach to Critical Thinking .......................................................................53
2.9.3. Structure Training to Promote Transfer .......................................................................56
2.9.4. Metacognitive Monitoring ...........................................................................................58
2.10. The Relationship of Critical Thinking to Other concepts .....................................................58
2.10.1. Metacognition ...........................................................................................................59
2.10.2. Motivation .................................................................................................................60
2.10.3. Creativity...................................................................................................................61
2.11. Critical Thinking Assessment ...............................................................................................61
2.11.1. Purposes of Critical Thinking Assessment ...............................................................61
2.11.2. Approaches to Critical Thinking Assessment ...........................................................62
2.11.2.1. Commercially Available Critical Thinking Tests ......................................63
2.11.2.2. Alternatives to Commercial Instruments ...................................................65
2.11.3. Measuring Critical Thinking: Some Considerations ................................................66
2.11.4. Measuring Critical Thinking Dispositions ................................................................68
2.11.4.1. Techniques of Evaluating Critical Thinking Dispositions .........................70
2.11.4.1.1 Direct Observation ....................................................................71
2.11.4.1.2. Rating Scales ............................................................................72
2.11.4.1.3. Learner Self-Assessment .........................................................72
2.11.4.1.3.1. Surveys/ Questionnaires......................................72
iii
2.11.4.1.3.2. Reflective Learning Logs ........................................73
2.11.4.1.4. Essay Tests .................................................................................74
2.12. Academic EFL Writing Skills ...............................................................................................75
2.12.1. Teaching Academic ESL/EFL Writing Skills............................................................77
2.12.2. The Importance of Teaching Academic Writing in the University ...........................79
2.12.3. Most Important Characteristics of Academic Writing ...............................................80
2.12.4. Most Common Student Written Academic Assignments and Tasks .........................82
2.12.4.1. Major Writing Assignments ........................................................................82
2.12.4.2. Medium-Length Essays and Short Written Tasks ......................................83
2.12.4.3. English Composition Writing Tasks ...........................................................83
2.12.5. Essential Features of Academic Text and Importance of Teaching Them ................84
2.12.5.1. Features of Academic Genre and Text.........................................................84
2 .12.5.2. Teaching Academic Text Features..............................................................85
2.12.6. Types of Academic Writing Tasks in Commonly Required Academic Courses ...... 88
2.12.6.1. Most Common Types of Academic Writing Tasks .....................................88
2.12.6.2. Less Common Writing Tasks ......................................................................89
2.12.6.3. Less Common Rhetorical and Writing Tasks ..............................................90
2.13. Critical Thinking and L2/Foreign Language Learning .........................................................91
2.13.1. Critical Thinking and Academic EFL/ESL Writing .................................................92
2.13.2. The Relationship of Critical Thinking to Creative Thinking and Thoughtful
Writing ....................................................................................................................94
2.13.3. Teaching Critical Thinking in Academic EFL Writing Classes ...............................96
2.13.3.1. Techniques in Using Critical Thinking Questioning ................................98
2.13.3.2. Techniques in Using Critical Reading ........................................................ 105
2.13.3.3. Techniques in Using Critical Thinking Skills Tasks ................................106
Summary of Literature Review .......................................................................... 109
iv
CHAPTER III: The Research Design and Methodology
3.0. Introduction ...........................................................................................................................110
3.1. The Research Design ............................................................................................................110
3.1.1. Quasi-experimentation ..............................................................................................110
3.1.2. The Study Variables ...................................................................................................115
3.2. The Research Paradigm ........................................................................................................115
3.3. Institutional Setting ...............................................................................................................118
3.4. The Research Participants .....................................................................................................118
3.5. The Outcome Measures .........................................................................................119
3.5.1. The Instructor/Researcher-developed Essay Test ......................................................119
3.5.2. The Ennis-Weir Critical Thinking Essay Test ............................................................122
3.5.2.1. General Critical Thinking Performance Assessment .....................................122
3.5.2.2. Component Critical Thinking Performance Assessment ...............................127
3.5.3. The California Critical Thinking Dispositions Inventory (CCTDI) ...........................130
3.5.3.1. The Seven CCTDI Dispositional Scales ........................................................131
3.5.3.2. The Validity and Reliability of the CCTDI ...................................................132
3.6. The Rating Scale for Test Instruments..................................................................................135
3.6.1. The Instructor-developed Test ....................................................................................135
3.6.2. The Ennis-Weir Critical Thinking Essay Test ............................................................135
3.6.3. The California Critical Thinking Dispositions Inventory (CCTDI) ...........................136
3.7. The Research Hypothesis ......................................................................................................136
3.8. The Experimental Procedure.................................................................................................137
3.9. Instructional Method and Materials ......................................................................................139
3.9.1. Experimental Group ....................................................................................................139
3.9.2. The Control Group ......................................................................................................145
3.10. Method of Data Analysis ....................................................................................................147
3.11. A Short Summary of the Pilot Study ..................................................................................149
Summary of the Research Design and Methodology ....................................... 153
CHAPTER IV: Results of the Study
4.0. Introduction ...........................................................................................................................154
4.1. Description of Sample...........................................................................................................154
4.2. Analysis of Data and Interpretation ......................................................................................155
4.2.1. Effect of Explicit Instruction in Critical Thinking on Student Achievement in Writing
Academic Essays ........................................................................................................155
Research Question and Related Hypothesis ...............................................................155
4.2.1.1. Descriptive Statistics......................................................................................156
4.2.1.2. Independent-Samples T-Test .........................................................................158
4.2.2. Effect of Explicit Instruction in Critical Thinking on Student Achievement in General
Critical Thinking Ability............................................................................................159
Research Question and Related Hypothesis ................................................................ 159
4.2.2.1. Descriptive Statistics.....................................................................................159
4.2.2.2. Independent Samples T-Test........................................................................ 162
4.2.3. Effect of Explicit Instruction in Critical Thinking on Component Critical Thinking
Performance and Strengths in the Ennis-Weir ..........................................................162
Research Question and Related Hypothesis .............................................................162
4.2.3.1. Descriptive Statistics.................................................................................. 163
4.2.3.2. Independent Samples T-Test........................................................................165
4.2.4. Effect of Explicit Instruction in Critical Thinking on Student Achievement in
Dispositions toward Critical Thinking .......................................................................169
Research Question and Related Hypothesis ..............................................................169
4.2.4.1. Descriptive Statistics.....................................................................................170
4.2.4.2. Independent-Samples T-Test ........................................................................172
4.2.5. Correlations .................................................................................................................174
Summary of Results ............................................................................................. 175
CHAPTER V: Discussion of Findings, Conclusions, and Recommendations
5.0 Introduction ...........................................................................................................................176
5.1 Discussion of the Results ......................................................................................................176
5.1.1. Research Question One ...............................................................................................176
5.1.2. Research Question Two ..............................................................................................178
5.1.3. Research Question Three ............................................................................................182
5.1.4. Research Question Four ..............................................................................................187
5.1.5. Relationships among Achievement on the Three Instruments .....................................192
5.2. Summary of Findings ............................................................................................................193
5.3. Implications for Practice .......................................................................................................195
5.4. Limitations ...........................................................................................................................197
5.5. Recommendations ................................................................................................................198
References ............................................................................................................. 199
Appendices ............................................................................................................ 219
LIST OF TABLES
Table 2.1: Undergraduate Faculty Assessments of Some Writing Tasks
Table 2.2: Graduate Faculty Assessments of Some Writing Tasks
Table 2.3: Questions to Fire up Students’ Critical Thinking Skills
Table 2.4: Numrich’s Sequence of Critical Thinking Skills Tasks
Table 3.1: Recommended Performance Assessments for the CCTST Scale Scores
(100-point versions).
Table 3.2: Descriptions of Recommended Performance Assessment of the CCTST
Overall Scores
Table 3.3: Internal Consistency Reliability for CCTDI
Table 3.4: Comparison of Instructional Methods and Materials for Experimental and
Control Groups
Table 4.1: Distribution of Mean Pretest and Posttest Scores by Group and Research
Instruments
Table 4.2: Distribution of Mean Pretest and Posttest scores on Academic Essay Writing
Skills Test
Table 4.3: Analysis of the Independent Samples Test for achievement in academic essay writing
skills test at the posttest of the experiment between experimental and control groups
Table 4.4: Distribution of Mean Pretest and Posttest Scores on students’ abilities to Think
Critically on Ennis-Weir Critical Thinking Essay Test.
Table 4.5: Analysis of the Independent Samples Test for achievement in critical thinking
ability test between Experimental and Control groups at the posttest of the
experiment
Table 4.6: Distribution of the mean pretest and posttest scores (both scale and overall)
in five components of critical thinking performance in the Ennis-Weir Critical
Thinking Essay Test.
Table 4.7: Analysis of Independent Samples t-test for achievement in component critical
thinking skills (both scale and overall) between Experimental and Control Groups
at the Posttest of the experiment
Table 4.8: Analysis of the Frequency distribution of student strengths/weaknesses in CTST
scale scores between the experimental (N=43) and control (N=41) groups at
the posttest of the experiment.
Table 4.9: Distribution of Mean Pretest and Posttest Scores on Dispositions toward using
Critical thinking Skills.
Table 4.10: Analysis of Independent-samples t-test for achievement in dispositions toward
using critical thinking between Experimental and Control Groups at the Posttest
of the experiment.
Table 4.11: Correlation Matrix for outcome Variables
LIST OF FIGURES
Figure 3.1: Untreated Control Group Design with Pretest and Posttest
Figure 4.1: Distribution of mean pretest academic writing skills test total scores between
experimental and control groups.
Figure 4.2: Distribution of mean posttest academic writing skills test total scores between
experimental and control groups.
Figure 4.3: Distribution of mean pretest Critical Thinking Ability Test total scores in the Ennis-Weir CTET between experimental and control groups.
Figure 4.4: Distribution of mean posttest critical thinking ability test total scores in the Ennis-Weir Critical Thinking Essay Test between experimental and control groups.
Figure 4.5: Distribution of mean pretest overall scores on component critical thinking
performance on the Ennis-Weir Critical Thinking Essay Test.
Figure 4.6: Distribution of mean posttest total scores on component critical thinking
performance in Ennis-Weir Critical Thinking Essay Test.
Figure 4.7: Distribution of mean pretest CTD total scores between experimental and control
groups
Figure 4.8: Distribution of mean posttest CTD overall scores between experimental and
control groups
Abstract
The purpose of this study was to examine empirically the effectiveness of explicit instruction in
critical thinking on university undergraduate students’ (1) abilities to think critically in writing
their academic essays, (2) abilities to think critically about everyday issues, and (3) dispositions
toward critical thinking. Two intact sections of AAU students taking an academic writing course
participated in this study and were randomly assigned to serve as the experimental and control groups.
The experimental group (n=43) received approximately 13 weeks of explicit instruction in Critical
Thinking; the control group (n=41) was taught in a more traditional (implicit) manner as opposed
to explicit instruction. Students in both groups took three pre- and post-instruction tests to measure
the effectiveness of the instructional techniques. Three outcome measures were used: (1) an academic
argumentation and argument development essay writing test, (2) the Ennis-Weir Critical Thinking
Essay Test, and (3) the California Critical Thinking Dispositions Inventory (CCTDI). A two-group
pretest/posttest quasi-experimental design was employed.
Descriptive statistics and independent-samples t-tests were used to analyze the data from each
instrument. Significant differences were obtained between experimental and control groups in each
of the three instruments (total scores). Though apparent differences were found between
experimental and control groups on some of the critical thinking skills (scale scores) and
dispositional aspects (scale scores), such differences were not statistically significant. The
experimental group scored significantly higher on the Academic Writing Skills Test (p<0.05), the
Ennis-Weir Critical Thinking Essay Test (p<0.05), Component Critical Thinking Skills Overall
(p<0.05), and the California Critical Thinking Dispositions Inventory (CCTDI) Overall (p<0.05),
indicating large effect sizes (Cohen’s d = 1.46, 2.16, 0.95, and 0.81, respectively). Statistical
tests, however, did not show significant differences on some component skills of Critical Thinking
and on some dispositional scale scores of the California Critical Thinking Dispositions Inventory.
Three major findings emerged from this study: Students’ abilities to think critically, performance in
writing academic papers, and dispositions toward critical thinking were improved by the
instructional techniques. These significant changes in students’ achievement at the end of a
semester-long instructional treatment suggest that the technique may provide an effective strategy
for building critical thinking skills and learning in the classroom. In light of these findings,
continuing efforts should be made to monitor the effects of integrating explicit instruction in
Critical Thinking into the learning of academic subjects (contents).
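For clarity, the effect sizes reported above follow the conventional Cohen’s d for two independent groups: the difference between the group means divided by the pooled standard deviation. The formulas below are a minimal sketch under the usual equal-variance assumption, where \(\bar{X}\), \(s\), and \(n\) denote a group’s mean, standard deviation, and sample size, and the subscripts E and C mark the experimental and control groups; the group statistics themselves are reported in Chapter IV.

\[
d \;=\; \frac{\bar{X}_{E}-\bar{X}_{C}}{s_{p}},
\qquad
s_{p} \;=\; \sqrt{\frac{(n_{E}-1)\,s_{E}^{2}+(n_{C}-1)\,s_{C}^{2}}{n_{E}+n_{C}-2}},
\qquad
t \;=\; \frac{\bar{X}_{E}-\bar{X}_{C}}{s_{p}\sqrt{\frac{1}{n_{E}}+\frac{1}{n_{C}}}}
\]

By Cohen’s conventional benchmarks, d values of about 0.2, 0.5, and 0.8 correspond to small, medium, and large effects, which is why the obtained values of 0.81 to 2.16 are described as large.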
ACKNOWLEDGEMENTS
First of all, I gratefully acknowledge my deep indebtedness to my research supervisor and advisor,
Dr. Geremew Lemu, who over the years has kindly provided me with valuable comments,
corrections, and suggestions, and whose idea this study was in the first place.
I am very grateful to Jimma University for the sponsorship I have been given to undertake this
project.
I also wish to thank the Addis Ababa University administrative staff, particularly the Department of
Foreign Languages and Literature, for their support and understanding, which have been vital to the
completion of this thesis.
I am indebted to the foreign language educators at AAU, Dr. Getaneh Mehari, Bereket Abebe (MA), and
Kibur Engida (MA), and to the undergraduate students in the departments of Civil Engineering,
Mechanical Engineering, and Economics who participated in this study. Without their commitment and
enthusiasm, it would have been impossible to carry out this study.
I am indebted to my long-suffering friend of many years in this great adventure, Dr. Andinet
Shimelis; I appreciate his encouragement, support, and assistance. Many thanks also to Ato Yimam
Workineh (MA) and Wondimagegn (MA), both from Jimma University, for their contributions as
raters for the instructor-made test, the Ennis-Weir, and the CCTDI, respectively.
Lastly, I wish to thank my whole family, particularly Sintayehu Fekadu, W/ro Belayinesh
Belachew, Tsega Amare, Thewodros Belachew (Pharma D), Solomon Amare, Daniel Tasew,
Sophonias, Misgana, and Kale-ab Adege, Teshome Moges, Tadesse Waktola, and other friends and
loved ones who have so graciously shared with me their ideas, finances, and moral support during my
stay at Addis Ababa University.
My sincere thanks also go to W/ro Hilina Dejene, who typed the whole manuscript, which was by no
means an easy task. My thanks are also due to the ILS Library staff for very kindly helping me with
the required reference materials.