
A Mixed-Methods Examination of Influences on the Shape and Malleability of
Technological Pedagogical Content Knowledge (TPACK) in Graduate Teacher
Education Students
by
Kent Sabo
A Dissertation Presented in Partial Fulfillment
of the Requirements for the Degree
Doctor of Philosophy
Approved June 2013 by the
Graduate Supervisory Committee:
Robert Atkinson, Chair
Leanna Archambault
Wilhelmina Savenye
ARIZONA STATE UNIVERSITY
August 2013
ABSTRACT
Concerted efforts have been made within teacher preparation programs to integrate
teaching with technology into the curriculum. Unfortunately, these efforts continue to fall
short as teachers' application of educational technology is unsophisticated and not well
integrated. The most prevalent approaches to integrating technology tend to ignore
pedagogy and content and assume that the technology integration knowledge for all
contexts is the same. One theoretical framework that does acknowledge content,
pedagogy, and context in conjunction with technology is Technological Pedagogical
Content Knowledge (TPACK), which was the lens through which teacher development was
measured and interpreted in this study. The purpose of this study was to investigate
graduate teacher education students' knowledge and practice of teaching with technology
as well as how that knowledge and practice changes after participation in an educational
technology course. This study used a mixed-methods sequential explanatory research
design in which both quantitative and qualitative data were gathered from 82 participants.
TPACK pre- and postcourse surveys were administered to a treatment group enrolled in
an educational technology course and to a nonequivalent control group enrolled in a
learning theories course. Additionally, pre- and postcourse lesson plans were collected
from the treatment group. Select treatment group participants also participated in phone
interviews. Analyses compared pre- and postcourse survey response differences within
and between the treatment and control groups. Pre- and postlesson plan rubric score
differences were compared within the treatment group. Quantitative text analyses were
performed on the collected lesson plans. Open and axial coding procedures were
followed to analyze interview transcripts. The results of the study revealed five
significant findings: 1) graduate students entering an educational technology course
reported lower ability in constructs related to teaching with technology than in constructs
related to teaching in a traditional setting; 2) TPACK was malleable and TPACK
instruments were sensitive to that malleability; 3) significant gains in reported and
demonstrated TPACK constructs were found after participating in an educational
technology course; 4) TPACK construct ability levels varied significantly by participant
characteristics; and 5) influences on teaching knowledge and practice ranged from internet
resources to mentor teachers to standardized curriculum packages.
DEDICATION
I dedicate my dissertation to my family for the relentless avalanche of support
they heap on me through all my endeavors. I’m proud to follow in my father’s academic
footsteps and if I had my mother’s work ethic, I would have finished this project a long
time ago. I hope to pursue my goals with the commitment my brother has shown in pursuit
of his. I’m certain I haven’t earned the pride my grandmother displays for me, but I might
make up some ground with this degree.
ACKNOWLEDGMENTS
I would like to thank my dissertation committee for their expertise,
encouragement, and guidance throughout my doctoral studies. I am indebted to my chair,
Dr. Robert Atkinson, for giving me the opportunity to work on his lab’s funded research
projects and to help prepare grants when I was in a zone of proximal development (I
knew just enough to be dangerous). It is due to my experiences working with him that I
gained practical knowledge, skills, and abilities, which resulted in conference
presentations, a book chapter, and a journal article. Dr. Leanna Archambault shaped my
understanding of TPACK and introduced me to luminaries in the field. I am grateful for
the access to her expertise, graduate students, and her TPACK instrument that made this
dissertation possible. She is my model of a successful researcher in the field of
technology and teacher education. Dr. Wilhelmina Savenye’s expertise and instruction in
educational evaluation helped pave the way to my current professional position. She is
also the champion of every educational technology graduate student and I think many of
us persist largely due to her encouragement.
I would also like to thank my partner, Dr. Laura Naumann, for her emotional
support and her familiarity with academia. A partner that can help you keep the dream
alive and has intimate knowledge of what it takes to make the dream a reality is truly
invaluable. It is that unique combination that bolstered my belief that I could be
successful in this business.
Finally, I would like to thank my educational technology program colleagues who
have taken this journey with me. I developed lifelong friends and research collaborators
and I appreciate all the advice, feedback, and words of encouragement that we shared.
TABLE OF CONTENTS
Page
LIST OF TABLES ................................................................................................................. vii
LIST OF FIGURES .................................................................................................................ix
CHAPTER
1 INTRODUCTION ............................................................................................... 1
General Problem .............................................................................................. 1
Technological Pedagogical Content Knowledge (TPACK) ........................... 3
TPACK Measurement ..................................................................................... 8
TPACK Criticism........................................................................................... 17
Graduate Teacher Education.......................................................................... 18
Overview of Present Study ............................................................................ 19
2 METHOD .......................................................................................................... 24
Participants ..................................................................................................... 24
Power Analysis .............................................................................................. 26
Educational Technology Course.................................................................... 27
Learning Theory Course ................................................................................ 29
Instruments and Data Sets.............................................................................. 30
Procedure ........................................................................................................ 34
3 RESULTS .......................................................................................................... 37
TPACK Levels ............................................................................................... 38
TPACK Change ............................................................................................. 48
Survey and Rubric Construct Score Correlations ......................................... 55
Lesson Plan Content ...................................................................................... 58
TPACK Differences by Participant Characteristics ...................................... 67
Teaching and Learning Experiences ............................................................. 87
4 DISCUSSION .................................................................................................... 96
Discussion of the Main Purpose .................................................................... 96
Limitations ...................................................................................................111
Implications ..................................................................................................112
Recommendations .........................................................................114
Future Research Directions..........................................................................115
Conclusion....................................................................................................118
REFERENCES ................................................................................................................... 121
APPENDIX
A TPACK Survey ............................................................................ 127
B TPACK Rubric ............................................................................. 132
C Interview Script ........................................................................... 135
D IRB Approval .............................................................................. 137
LIST OF TABLES
Table                                                                                                                  Page
1. Sample size estimate results from a power analysis for dependent samples ......... 27
2. Research Questions and Analytic Approaches ....................................................... 37
3. Mean TPACK Presurvey Item Scores .................................................................... 39
4. Mean TPACK Presurvey Construct Scores ............................................................ 40
5. Mean TPACK Postsurvey Item Scores .................................................................. 41
6. Mean TPACK Postsurvey Construct Scores .......................................................... 42
7. Mean TPACK Prelesson Plan Rubric Item Scores ................................................ 45
8. Mean TPACK Prelesson Plan Rubric Construct Scores ........................................ 45
9. Mean TPACK Postlesson Plan Rubric Item Scores ............................................... 47
10. Mean TPACK Postlesson Plan Rubric Construct Scores ..................................... 47
11. Item and Construct Mean Difference from Treatment Group Pre- to Postsurvey ... 49
12. Item and Construct Mean Difference from Control Group Pre- to Postsurvey ....... 51
13. Treatment and Control Group Presurvey Construct Score Differences ................ 53
14. Item and Construct Mean Difference from Pre- to Postcourse Rubric .................. 54
15. Presurvey with Precourse Rubric Correlations ....................................................... 56
16. Postsurvey with Postcourse Rubric Correlations ................................................... 56
17. Presurvey Construct Correlations ........................................................................... 57
18. Postsurvey Construct Correlations .......................................................................... 57
19. Precourse Rubric Construct Correlations ............................................................... 58
20. Postcourse Rubric Construct Correlations .............................................................. 58
21. Pre- and Postlesson Plan Noun, Verb, and Technology Word Frequencies .......... 61
22. Collocation Table for Student ................................................................................. 62
23. Collocation Table for Use ........................................................................................ 62
24. Collocation Table for Video .................................................................................... 63
25. Collocation Table for Computer .............................................................................. 63
26. Collocation Table for Technology ........................................................................... 64
27. Gender Differences in Pre- and Postsurvey TPACK Constructs ........................... 70
28. Mean Score Differences Among Knowledge of Pedagogy Groups on TPACK Construct Scores ... 68
29. Mean Score Differences Among Knowledge of Educational Technology Groups on TPACK Construct Scores ... 72
30. Bivariate and Partial Correlations of the Predictors with PK Presurvey Score ...... 76
31. Bivariate and Partial Correlations of the Predictors with PCK Presurvey Score ... 77
32. Bivariate and Partial Correlations of the Predictors with TPACK Presurvey Score ... 77
33. Bivariate and Partial Correlations of the Predictors with PK Postsurvey Score ..... 75
34. Bivariate and Partial Correlations of the Predictors with CK Postsurvey Score .... 78
35. Bivariate and Partial Correlations of the Predictors with TPACK Postsurvey Score ... 79
36. Gender Differences in Pre- and Postlesson Plan Rubric Construct Scores ............ 85
37. Quantitative and Qualitative Phase: Joint Display ................................................. 97
LIST OF FIGURES
Figure                                                                                                                 Page
1. The TPACK framework ............................................................................................ 8
2. Co-occurrence Network of the Prelesson Plan Corpus ........................................... 66
3. Co-occurrence Network of the Postlesson Plan Corpus ......................................... 67
Chapter 1
INTRODUCTION
General Problem
Investment in educational technology infrastructure, training, and software
continues to grow even as some question the value of technology in education (Lemke,
Coughlin, & Reifsneider, 2009). The expectations for how technology can and will
transform education have long been high (Wellings & Levine, 2009). As far back as 1913
when motion picture projectors were first introduced in schools, Thomas Edison
predicted, “Books will soon be obsolete in the schools. . . . It is possible to teach every
branch of human knowledge with the motion picture. Our school system will be
completely changed in the next ten years” (as cited in Reiser, 2001, p. 55). Throughout
the history of educational technology, proclamations like Edison's have made it difficult
for the empirically measured effects of technology to stand up to outsized expectations
(Reiser, 2001). Technologists and educators have been too confident that the significant
institutional change required to reap the benefits of technology would be easily
accomplished, and over time, there has been a lack of documentation on implemented
technologies' impact on “student learning, teacher practice and system efficiencies”
(Lemke et al., 2009, p. 5). Regardless of the lofty expectations and implementation
issues, educational technology is contributing to student learning. In a meta-analysis of
educational technology, researchers found that across the 15 types of technologies
reviewed—from classroom response systems to interactive whiteboards to virtual
worlds—all have “primarily promising effects” on learning across content areas (Lemke
et al., 2009, p. 7).
A lack of vision, access to research, leadership, teacher proficiency in integrating
technology in learning, and professional development continue to be significant barriers
to realizing the potential of currently implemented technologies for teaching and learning
(Brown, 2006). Teachers need to know how to teach effectively with technology and are
expected to do so prior to completing their teacher preparation program, despite the
complex knowledge required for success (Hofer & Grandgenett, 2012). Furthermore,
teachers are now expected to teach with technology by governmental and educational
organizations through a variety of mandates and initiatives; however, studies have found
that teachers do not use technology effectively in their teaching (Brown, 2006).
Technologies are often used in ways that maintain existing teaching practices, and they are
most often used in computer education courses, vocational education, exploratory use in
elementary school, and word processing (Brown, 2006). The lack of expertise in teaching
with technology has been suggested as the limiting factor in the effectiveness of
technology in teaching and learning, and teachers who have above-average technical skills
and use computers for professional purposes teach with technology in broader and more
sophisticated ways (Brown, 2006).
in-service professional development programs are missing effective instruction and
training in teaching with technology.
Concerted efforts have been made within teacher preparation programs to
integrate teaching with technology into the curriculum. For example, the United States
Department of Education announced the Preparing Tomorrow's Teachers to Use
Technology grant program in 1999 and the program awarded more than 400 grants over
five years, totaling $337.5 million (United States Department of Education). These efforts
to integrate technology into teacher education programs continue to fall short as teachers'
application of educational technology is unsophisticated and not well integrated (Brown,
2006; Harris, Mishra, & Koehler, 2009). Teachers are using technologies as “efficiency
aids and extension devices” rather than “transformative devices” (Harris et al., 2009, p.
394). The majority of the most prevalent approaches to integrating technology into
education are techno-centric in that they focus first on the affordances and constraints of
the technologies and technological skill rather than on students’ learning needs. These
prevalent approaches include: (a) software-focused initiatives; (b) demonstrations of
sample resources, lessons, and projects; (c) technology-based educational reform efforts;
(d) structured/standardized professional development workshops or courses; and (e)
technology-focused teacher education courses (Harris et al., 2009). These five approaches
tend to ignore pedagogy and content and assume that the technology integration
knowledge for all contexts is the same. However, frameworks, practices, and
pedagogical strategies vary across content areas such as science, literacy, and the arts.
Approaches that do not account for these differences are limited in their effectiveness
across different contexts. One theoretical framework that does acknowledge content,
pedagogy, and context in conjunction with technology is Technological Pedagogical
Content Knowledge (TPACK), which is the lens through which teacher development is
measured and interpreted in this study (Harris et al., 2009).
Technological Pedagogical Content Knowledge (TPACK)
TPACK is a theoretical framework for understanding and describing the
knowledge teachers require to effectively teach with technology and is the framework for
this study (Mishra & Koehler, 2006). Mishra and Koehler (2006) defined TPACK as:
The basis of good teaching with technology and requires an understanding of the
representation of concepts using technologies; pedagogical techniques that use
technologies in constructive ways to teach content; knowledge of what makes
concepts difficult or easy to learn and how technology can help redress some of
the problems that students face; knowledge of students' prior knowledge and
theories of epistemology; and knowledge of how technologies can be used to
build on existing knowledge and to develop new epistemologies or strengthen old
ones. (p. 1029)
TPACK is an extension of Shulman's (1986) construct, Pedagogical Content Knowledge
(PCK), which includes technology knowledge with content and pedagogical knowledge
(Schmidt et al., 2009a). PCK describes the relationship between pedagogy (teaching
strategies) and content (subject-matter knowledge) (Archambault & Barnett, 2010). PCK
was developed by Shulman (1986) in response to the need for a theoretical framework
that coherently explained the complex nature of teacher understanding and knowledge
transmission. At the time he developed PCK, Shulman felt that educational research and
policy had become too focused on pedagogy alone, to the detriment of content
knowledge. He felt it was a mistake for education stakeholders to focus exclusively on
generic teacher pedagogical practices like classroom management, lesson planning, and
activity organization. Shulman was concerned that questions about the content teachers
delivered, the questions they asked, and the explanations they provided were not being
answered.
Initially, Shulman (1986) delineated three categories of content knowledge:
subject matter content knowledge, pedagogical content knowledge, and curricular
knowledge. Content knowledge referred to the volume and organization of knowledge a
teacher had in a particular content area. PCK was a type of content knowledge that “goes
beyond knowledge of subject matter per se to the dimension of subject matter knowledge
for teaching,” and “that embodies the aspects of content most germane to its teachability”
(Shulman, 1986, p. 9). Furthermore, he explained that PCK was knowledge of “the ways
of representing and formulating the subject that make it comprehensible to others”
(Shulman, 1986, p. 9). He distinguished PCK from the pedagogical knowledge of
teaching, which included knowledge of the generic principles of teaching that had
become the focus of much educational research. PCK is the type of knowledge that can
distinguish between a content specialist and a teacher of that same content (Gess-Newsome, 1999). The third category of content knowledge, curricular knowledge, was
knowledge of the programs and materials available to teach a certain subject and the
characteristics of those programs and materials that make them suitable or unsuitable in a
particular context (Shulman, 1986).
The following year, Shulman promoted PCK from a subcategory of content
knowledge by including it as one of the seven knowledge bases for teaching (Gess-Newsome, 1999). The seven knowledge bases included “content knowledge, general
pedagogical knowledge, curricular knowledge, knowledge of learners, knowledge of
educational contexts, and knowledge of the philosophical and historical aims of
education” (Gess-Newsome, 1999, p. 4). Grossman (1990), who was a former graduate
student under Shulman, refined the seven knowledge bases into four bases that included:
“general pedagogical knowledge, subject matter knowledge, pedagogical content
knowledge and knowledge of context” (p. 5). PCK was hypothesized to influence
teachers' actions in the classroom the most among the four knowledge bases (Gess-Newsome, 1999).
Similar to Shulman (1986), Mishra and Koehler (2006) perceived an imbalanced
focus within teaching and teacher education. They felt that teacher education and training
as well as accepted teacher practices in teaching with technology focused only on the
technology and not on how the technology was implemented. Within these systems,
knowledge of technology is treated as separate from pedagogical and content knowledge.
In the same way that PCK merged what were previously regarded as independent
constructs (pedagogical knowledge and content knowledge), TPACK integrates
knowledge of technology with knowledge of pedagogy and content. This framework
suggests that simply adding technology to an educational context is inadequate and
ineffective. A possible explanation for the techno-centric implementation of technology
into teaching was the lack of a theoretical framework to guide implementation and
understanding of the practice. Mishra and Koehler (2006) developed and refined TPACK
over the course of five years by conducting design experiments to both understand and
help develop teachers’ effective use of technology. They contend that successful
curriculum design is guided by theoretical frameworks that arrange learning and
knowledge construction principles that serve as a foundation for a cogent and contextual
learning experience (Mishra & Koehler, 2006). TPACK can serve as a framework for
teacher knowledge, the design for instruction to enhance that knowledge, and a
framework for research.
The TPACK framework includes three components of knowledge (technology,
pedagogy, and content) and describes the interaction between the three components
resulting in seven constructs displayed in Figure 1. The constructs include: technology
knowledge, content knowledge, pedagogical knowledge, pedagogical content knowledge,
technological content knowledge, technological pedagogical knowledge, and
technological pedagogical content knowledge. These constructs are defined as follows:
1. Technological knowledge (TK) is knowledge about various technologies.
2. Content knowledge (CK) is knowledge about subject matter.
3. Pedagogical knowledge (PK) is knowledge of teaching processes, methods, and strategies.
4. Pedagogical content knowledge (PCK) is knowledge of teaching processes, methods, and strategies for a specific subject matter.
5. Technological content knowledge (TCK) is knowledge of technology's ability to change how content is represented.
6. Technological pedagogical knowledge (TPK) is knowledge of how technologies can be used to teach.
7. Technological pedagogical content knowledge (TPACK) is knowledge of integrating technology into teaching a specific subject matter (Schmidt et al., 2009a).
Figure 1: The TPACK framework. Reproduced by permission of the publisher, © 2012
by tpack.org.
TPACK Measurement
In assessing teachers' technology use, researchers have focused on teachers'
attitudes and perceptions of and skills in using technology (Browne, 2009; Christensen &
Knezek, 2009; Curts, Tanguma, & Pena, 2008; Franklin, 2007; Hogarty, Lang, &
Kromrey, 2003; MacDonald, 2009). These measures gather important data but do not
indicate teachers' knowledge specific to effective technology integration, nor do they
measure teachers' practice. Instruments and methods that measure knowledge of effective
teaching with technology are few, but the number is growing. Many of these instruments
and methods were designed using the TPACK framework. Archambault and Crippen
(2009) created a questionnaire to examine TPACK among kindergarten through twelfth
grade (K-12) online educators, Schmidt et al. (2009a) created a questionnaire to measure
TPACK in preservice teachers and Graham et al. (2009) created a questionnaire to
examine the technology constructs within TPACK. Instruments that measure the practice
of effective technology integration are even rarer. Harris, Grandgenett, and Hofer (2010)
and Akcaoglu, Kereluik, and Casperson (2011) developed TPACK-based rubrics to assess
technology-infused lessons and lesson plans. Kramarski and Michalsky (2010) used both
a rubric and a self-report survey in their study of TPACK. The following researchers have
also investigated TPACK through qualitative research: Groth, Spickler, Bergner, and
Bardzell (2009) and Brantley-Dias, Kinuthia, Shoffner, de Castro, and Rigole (2007).
Survey instruments. Archambault and Crippen (2009) developed one of the first
survey instruments to measure TPACK in K-12 online teachers. The development of the
survey included two rounds of think-aloud pilots with online teachers and content
validation through expert review. The root survey question asks respondents how they
would rate their knowledge on various online teaching tasks. Item examples include, “my
ability to create materials that map to specific district/state standards, my ability to
distinguish between correct and incorrect problem solving attempts by students, and my
ability to create an online environment which allows students to build new knowledge
and skills” (Archambault & Crippen, 2009, p. 88).
Of the 1,795 online teachers who were sent the survey, 596 teachers from 25
states responded. Internal consistency measures for the survey's seven sub-scales ranged
from 0.70 to 0.89. Archambault and Crippen (2009) found that the online teachers rated
their knowledge of pedagogy, content, and pedagogical content the highest, which
suggests that the teachers were confident in a variety of teaching strategies, creating
materials, and planning the scope and sequence of topics. It also suggested that the
teachers could recognize student misconceptions and judge students' problem solving
techniques. The teachers' reported knowledge levels dropped by 0.81 from pedagogy and
content to technology. This suggested that the teachers were less confident with
technology than with pedagogy and content. Respondents were confident in their
traditional teacher knowledge but were less so when integrating technology. The
researchers found that the correlations between TPACK constructs (pedagogy and
content, pedagogical content and content, pedagogical content and pedagogy,
technological content and technological pedagogy, technological pedagogical content and
both technological pedagogy and technological content) were high, which suggested that
the constructs may not be distinct or that the survey items were confounded
(Archambault & Crippen, 2009).
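As a point of reference for the internal consistency values reported above, Cronbach's alpha for a sub-scale is a function of the number of items, the individual item variances, and the variance of the summed scale. The Python sketch below illustrates the calculation; the response matrix is hypothetical and is not drawn from any of the studies discussed here.

import numpy as np

def cronbach_alpha(items):
    # items: one row per respondent, one column per item in a single sub-scale
    items = np.asarray(items, dtype=float)
    k = items.shape[1]                              # number of items
    item_variances = items.var(axis=0, ddof=1)      # variance of each item
    total_variance = items.sum(axis=1).var(ddof=1)  # variance of the scale totals
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

# Hypothetical Likert-type (1-5) responses: five respondents, four items
responses = [[4, 5, 4, 4],
             [3, 3, 2, 3],
             [5, 4, 5, 5],
             [2, 2, 3, 2],
             [4, 4, 4, 5]]
print(round(cronbach_alpha(responses), 2))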
Schmidt et al. (2009b) also developed an early TPACK survey instrument. The
purpose of Schmidt et al.’s instrument was to measure preservice teachers' assessment of
their TPACK. The researchers established content validity by reviewing the TPACK
literature and had the items reviewed by experts. Participants in the survey development
and validation study were 124 preservice teachers enrolled in an introductory
instructional technology course. The course was purposely redesigned using TPACK as
the course design framework. The majority of the participants were freshman (50.8%),
female (93.5%) elementary education majors (79%) who had not completed their student
teaching (85%) (Schmidt et al., 2009b). To measure the content areas taught by
elementary teachers, the construct CK was separated into four sub-scales intended to
measure content knowledge in mathematics, social studies, science, and literacy. Internal
consistency for the survey's 10 sub-scales ranged from 0.75 to 0.92. To establish
construct validity, the researchers performed two principal components analyses (PCA).
After the first PCA, 28 items were eliminated from the survey. Individual PCAs were run
on each of the seven TPACK constructs, and from each PCA a single factor structure
emerged, except for CK, which had a four-factor structure related to the four content areas
within the sub-scale. Schmidt et al. (2009b) concluded that the results indicate an
appropriate and reliable instrument for measuring TPACK in preservice teachers.
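The construct-validity procedure described above, running a principal components analysis on each construct's items and checking whether a single factor emerges, can be illustrated with a short sketch. The data below are simulated (respondents answering items driven by one latent factor) and are not the published dataset; under the common Kaiser criterion, a single eigenvalue well above 1 is consistent with a unidimensional sub-scale.

import numpy as np

rng = np.random.default_rng(0)
# Simulate 124 respondents answering 6 items that share one latent factor
latent = rng.normal(size=(124, 1))
items = latent + 0.5 * rng.normal(size=(124, 6))  # shared signal plus item noise

# Principal components via the eigenvalues of the item correlation matrix
corr = np.corrcoef(items, rowvar=False)
eigenvalues = np.sort(np.linalg.eigvalsh(corr))[::-1]
print(eigenvalues.round(2))  # one dominant eigenvalue indicates a single factor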
Graham et al. (2009) developed and tested a TPACK survey to measure the
technology constructs TK, TPK, TCK and TPACK. The purpose of their study was to
identify and measure TPACK in science instruction and to measure the change in TPACK
confidence in participants. Content validity was established by basing the items on
definitions and descriptions from the literature. Participants were 11 elementary teachers
and four secondary teachers with teaching experience that ranged from 1 to 26 years. All
the participants chose to participate in an intensive, eight-day professional development
program designed to improve their science teaching. The program also included an
additional phase outside of the eight days in which teachers reflected on lessons
implemented in their own classrooms. These teachers responded to the 31-item survey
both before and after the program. Cronbach's alpha ranged from 0.91 to 0.95 for each of
the sub-scales. Construct validity was not analyzed due to the small sample. Posttest
surveys showed a significant increase for all measured TPACK constructs over the pretest
survey. Effect sizes for these differences ranged from 0.55 to 0.85. Teachers' pre- and
posttest TK levels were the highest, followed by TPK, TPACK, and TCK. TK is
foundational to developing confidence in other technology TPACK domains. Low TCK
scores may indicate confidence in using technologies designed to teach science rather
than confidence in using technologies designed to do science. Open-ended responses
suggest that teachers integrate technology using general teaching strategies rather than
content-specific teaching strategies. The teachers' pedagogical use of technology was also
higher than their students' pedagogical use of technology.
Rubrics. Harris et al. (2010) developed and validated a performance-based lesson
plan rubric to evaluate TPACK. The researchers first developed a draft of the lesson plan
rubric based on the TPACK theoretical framework. A draft of the rubric was sent to six
TPACK researchers who gave feedback on the construct and face validity of the rubric.
The developers then revised the rubric according to the feedback. Two groups of teachers
(15 total teachers) who had technology integration experience were each asked to assess
three technology-infused lesson plans created by preservice teachers. The 15 experienced
teachers participated in a six-hour training to learn to use the rubric (Harris et al., 2010).
The first group rated the lesson plans and provided feedback on the rubric, then the
developers revised the rubric again after analyzing interrater reliability and internal
consistency. Using the revised rubric, the second group of teachers rated the lesson plans
and provided feedback. One month later, the teachers were asked to rescore the same
three lesson plans in order to calculate the test-retest reliability of the rubric. The
researchers found that the rubric had adequate reliability and validity and could be used
by other researchers. Because the rubric was tested with preservice teachers' lesson plans,
it may not be appropriate to evaluate experienced teachers' lesson plans. For rubric
effectiveness, the planning documents (e.g., lesson plans) being evaluated need sufficient
detail. Because teachers often do not write plans with enough detail, an interview
protocol could be created to gather more data (Harris et al., 2010).
While the rubric developed by Harris et al. (2010) did not explicitly measure each
of the seven TPACK constructs, one developed by Akcaoglu et al. (2011) does measure
all seven TPACK constructs. Each of the constructs measured by the rubric includes two
to three items rated on a five-point scale. The rubric grew out of a project in which the
researchers were attempting to develop a coding scheme for lesson plans. The purpose for
developing the rubric was to enable teacher educators to measure TPACK without the
limitations of a self-report measure. Initially, the coding scheme was tested on two groups
of preservice teachers enrolled in an educational technology course. The researchers
compared results from rubric scores to scores from a self-report measure to validate the
rubric. The rubric was further refined when the researchers evaluated publicly available
lesson plans from the internet. From these lesson plan evaluations, the researchers found
that interrater reliability between two raters was 0.88 (Akcaoglu et al., 2011).
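The specific interrater statistic behind the 0.88 figure is not detailed here; one common consistency-style index is the Pearson correlation between two raters' scores for the same set of lesson plans. The sketch below illustrates that approach with hypothetical rubric totals (the values are invented for demonstration only).

import numpy as np

# Hypothetical rubric totals from two raters scoring the same eight lesson plans
rater_a = np.array([12, 15, 9, 14, 11, 13, 10, 15])
rater_b = np.array([13, 14, 9, 15, 10, 13, 11, 14])

# Pearson correlation as a simple interrater consistency index
r = np.corrcoef(rater_a, rater_b)[0, 1]
print(round(r, 2))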
Kramarski and Michalsky (2010) used both a rubric and a self-report survey to
measure TPACK comprehension and design skills, as well as self-regulated learning, in two
instructional treatments. The course included instructor-led discussions and summaries
and student pairs practicing in a hypermedia environment. The content of the course
focused on TPACK learning methods and implementations and pedagogical uses of
computer tools. Ninety-five preservice secondary science teachers in Israel enrolled in a
teacher education course on hypermedia design. Using an experimental design, the
preservice teachers were randomly assigned to one of the two treatments. The two
treatment groups were not statistically different in regard to demographics or any study
variables. In one condition, students worked in pairs in a hypermedia environment. In the
other condition, the same hypermedia environment was enhanced with metacognitive
scaffolds (Kramarski & Michalsky, 2010). In the hypermedia with metacognitive
treatment, the participants were exposed to self-questioning pop-ups within the
hypermedia environment that addressed comprehension, connection, strategy, and
reflection. Both the hypermedia condition and the hypermedia with metacognitive
condition had 14 workshops lasting four hours each week supervised by two different
teachers. Two measures of TPACK (comprehension skills and design skills) and two
measures of self-regulated learning (aptitude and event) were administered at the
beginning of the course and at the end of the course. The hypermedia with metacognitive
group outperformed the hypermedia group on both measures of TPACK (Kramarski &
Michalsky, 2010). The hypermedia with metacognitive group reported higher cognition,
metacognition, and motivation than the hypermedia group and demonstrated the same
characteristics on the self-regulated learning measures. There was a significant
correlation between the two TPACK measures and the two self-regulated learning
measures among all participants and within each treatment group. Higher correlations
existed in the hypermedia with metacognitive group than in the hypermedia group. The
results verified the hypotheses in that a hypermedia environment with metacognitive
support is more effective in developing TPACK and fostering self-regulated learning than
a hypermedia environment alone (Kramarski & Michalsky, 2010).
Qualitative methods. The purpose of the Groth et al. (2009) study was to
investigate a method to assess TPACK in math teachers by evaluating the qualitative data
gathered from lesson study cycles. The researchers used an accounts of practice method,
in which classroom practice is studied through the lens of a conceptual framework.
The researchers applied the accounts of practice method during a lesson study
professional development project. A lesson study involves a group of teachers who
collaboratively create a lesson. They then implement it, observe the implementation, and
then gather to debrief. The researchers observed and gathered data from a group of math
teachers teaching systems of equations using graphing calculators within a lesson study
framework. Data sources included written lesson plans from the teachers, a faculty
member's reviews of the lessons, transcripts and videos of implemented lessons, and the
recordings and transcripts of debriefing sessions about the implemented lessons (Groth et
al., 2009). After the data were gathered, the researchers created a case study database and
then inferences about teachers’ TPACK were drawn from the data. Inferences were
validated and refined among the researchers. Inferences included: (a) teachers needed
to develop knowledge of how to compare multiple representation and solution
strategies with the graphing calculator; (b) teachers needed to avoid representing the
calculator as a black box; and (c) teachers needed to develop and present problems that
reveal the calculator's limitations (Groth et al., 2009). The model the researchers used to
gather and evaluate the data within the TPACK framework for this study does not provide
a way to measure individual teacher TPACK; however, the ability to capture reasoning of
the group is a strength of the model. The exploratory potential of the model is also a
strength because researchers can generate plausible ideas for psychometric assessment
items (Groth et al., 2009). The model captures the fluid, contextually situated, collective
development of teacher knowledge. The model is also flexible and can be used in many
different settings. Finally, the model draws upon the expertise of a variety of people
(Groth et al., 2009).
Employing a case study design, Brantley-Dias et al. (2007) explored how using
case-based instructional strategies promotes Pedagogical Technology Integration Content
Knowledge (PTICK) development. PTICK is the researchers' particular variation of
TPACK. Participants in the study were enrolled in an alternative teaching licensure
program. Nineteen participants were part of a science education cohort and 14 were part
of an English education cohort. All participants had content area degrees, but only four
had provisional teaching certificates and one year of teaching experience. All participants
were enrolled in the Technology for Educators course. The Technology for Educators
course was a problem-centered, activity-based course. It addressed the National
Educational Technology Standards for Teachers (NETS-T) and the Interstate New
Teacher Assessment and Support Consortium (INTASC) standards (Brantley-Dias et al.,
2007). Three of the researchers were either instructors for the course or the course
designer. The participants analyzed cases appropriate to their content area from course
materials. Students answered questions about the cases and provided a group report.
Individual reflections on the cases were submitted. Each participant analyzed a total of
four cases and submitted responses and reflections. Participants also submitted course
reflections at the beginning, middle, and end of the course (three total reflections).
Researchers used content analysis to categorize concepts while ideas and pattern
matching within and across cases were used to answer research questions (Brantley-Dias
et al., 2007). Researchers found that students felt more prepared to integrate technology
as the course progressed and demonstrated increasing technology integration conceptual
knowledge. Students also made connections between the technology course, their content
courses and their pedagogy courses. Researchers observed that case studies and group
discussions allowed reflection on how students would handle the situation in the case
study. Case studies provide preservice teachers an opportunity to reflect and discuss
planning instruction even without previous teaching experience and allow students to
recognize value in communities of practice. The researchers concluded that case-based
instruction promotes PTICK development (Brantley-Dias et al., 2007).
TPACK Criticism
Since Mishra and Koehler's (2006) seminal article that described the TPACK
framework, the theory has enjoyed widespread acceptance in educational research and, to
a lesser extent, in teacher education (Cox, 2008). There are some researchers and
educators who have expressed concerns with the theoretical development of TPACK, as
TPACK research has been mostly concerned with describing the framework (Graham,
2011).
Despite considerable work on the descriptive value of TPACK, the framework
and its technology-related constituent parts do not have widely accepted and precise
definitions (Graham, 2011). In a conceptual analysis of TPACK, Cox (2008) found 89
definitions of TPACK in the literature as well as 13 definitions of TCK and 10 definitions
of TPK. The lack of clarity in the TPACK framework may partially lie in PCK, its
foundational theory, which was developed by Shulman (1986). Though PCK has
produced much useful research, the complexity of the concept does not lend itself to clear
and discriminant definitions that are easily researched (Baxter & Lederman, 1999; Gess-Newsome, 1999).
The continued popularity of both PCK and TPACK may be due to the intuitive
nature of the components (Graham, 2011). The constituent parts of pedagogical
knowledge and content knowledge in PCK and the addition of technological knowledge
for TPACK are instinctual and evident to many educators who recognize the importance
of the interplay among these knowledge areas (Graham, 2011). While the individual
components of the two frameworks are clear and discrete, it is where the components
overlap to create complex new concepts (TPK, PCK, TCK, TPACK) that the issues arise.
In an exploratory factor analysis of an early TPACK instrument, Archambault and
Barnett (2010) found that they were unable to extract all seven TPACK constructs from
the data resulting from 596 responses of online educators. The researchers extracted three
factors: pedagogical content knowledge, technological-curricular content knowledge, and
technological knowledge (Archambault & Barnett, 2010). This result suggests that either
the seven TPACK components are not defined discretely enough to be measured, or that
none of the seven components exist in practice.
TPACK does have descriptive value, but it is a complex framework that currently
lacks theoretical clarity and precise construct definitions. TPACK research and theory
development are in their relative infancy, as they began in earnest following Mishra and
Koehler's (2006) seminal article. TPACK's faults do not call for abandonment of the
framework, but rather more diligent research that may lead to a clear and precise theory.
Graduate Teacher Education
The majority of the current methods to measure TPACK are developed for use
within specific contexts with specific teacher types. For example, Archambault and
Crippen (2009) created their survey to examine TPACK among K-12 online educators
while Schmidt et al. (2009a) created their survey to measure TPACK in undergraduate
preservice teachers. The fact that these two TPACK questionnaires were designed for use
with such specific populations highlights the importance of studying both teacher and
teacher education subgroups. Undergraduate preservice teachers and in-service teachers
as participants are featured in the bulk of studies on technology in teacher education.
Comparatively few studies focus on graduate education students. For the purposes of this
study, graduate teacher education students are defined as those who are pursuing an
advanced teaching degree either while currently teaching in a K-12 environment or with
the intention to pursue a career in teaching at that level. In a search of the Education
Resources Information Center, only 13 results emerged from a search for the terms
graduate teacher education and technology. Similarly, a search of the Association for the
Advancement of Computing in Education's Education and Information Technology
Digital Library returned a list of 10 articles with the search term graduate teacher
education. Specific to TPACK, two articles investigated TPACK in a graduate education
context (Hofer & Grandgenett, 2012; Machado, Laverick, & Smith, 2011).
Although graduate students may be underrepresented in educational technology
research, they represent a significant population within teacher education. Statistics
reveal that 49.5% of all teachers have some postbaccalaureate work, while 42.8% have
earned a master's degree (Aud et al., 2011). Because many states now require their
teachers to earn a graduate degree to reach the highest level of licensure, the number of
teachers seeking a graduate education is only likely to increase (United States
Department of Education, 2011). Graduate students in education are therefore a
significant and important subgroup to investigate.
Overview of Present Study
The purpose of this study was to investigate graduate teacher education students'
knowledge and practice of teaching with technology as well as how that knowledge and
practice changes after participation in an educational technology course. This study used
a mixed-methods sequential explanatory research design that required two phases. First,
quantitative data were collected and analyzed. Then qualitative data were collected. The
datasets from the two phases were connected, integrated, and interpreted to answer the
questions of the study (Ivankova, Creswell, & Stick, 2006). A mixed-methods study
requires an overarching mixed research question along with subquestions related to the
different phases of the study (Tashakkori & Creswell, 2007). The overarching mixed
question for this study was: what is the shape and malleability of TPACK among graduate
teacher education students enrolled in an educational technology course? Answering this
question required measurement of the preliminary shape of TPACK, analysis of how that
shape changed at the end of the semester, and inquiry into TPACK influences outside of
the course. Malleability in this study is defined as a property of a measured characteristic
and that characteristic’s susceptibility to change or fluctuation over time (Keenan &
Evans, 2009). In contrast, stability of a measured characteristic is typified by its
consistency over time (Caspi, Roberts, & Shiner, 2005). This study also aimed to answer
the following subquestions:
1. What is the level of TPACK, reported and demonstrated, among graduate teacher education students?
2. Do TPACK levels, reported and demonstrated, change after participation in an educational technology course?
3. Are self-reported levels of TPACK evident in artifacts of teacher practice?
4. How does the language differ between artifacts developed before the course and artifacts developed at the end of the course?
5. How do students with higher levels of TPACK differ from those with lower levels of TPACK?
6. What teaching and learning experiences influenced students' knowledge of and practice in teaching?
Mixed methods. Two definitive characteristics of mixed-methods studies are the
collection and analysis of both qualitative and quantitative data and the integration of the
two data types to more comprehensively answer research questions (Creswell & Plano
Clark, 2011). There are many reasons for gathering the two types of data. The multiple
data sets provide an opportunity for triangulation among the data for corroboration and
provide additional explanatory possibilities as one data set can help explain the result of
the other. The addition of qualitative data also provides context to quantitative data, and
the combination offers a more comprehensive view of the studied phenomenon (Creswell
& Plano Clark, 2011). Mixed-methods researchers have documented over 40 mixed-methods
designs, and the sequential explanatory design is a popular design that has
been used in social and behavioral science research (Ivankova et al., 2006). Limitations
of the design include the lengthy amount of time required and the feasibility of collecting
both types of data (Ivankova et al., 2006).
Priority of the quantitative or qualitative phase depends on the researcher's
interests, the study's audience, and the emphasis of the study. However, the quantitative
phase is generally given priority in sequential explanatory designs. Integration is the
stage in the research process where the mixing of methods occurs. This can include
developing both quantitative and qualitative research questions at the outset of the study
and integrating the results of the two datasets and interpreting them together at the end of
the study. In sequential explanatory designs, the two datasets are also connected. The
results of one phase inform the data collection in the following phase. This connection
can manifest in the selection of participants and the development of data collection
protocols for the qualitative phase based on the results from the quantitative phase.
Finally, the results are presented jointly in the discussion section by first answering the
quantitative and qualitative questions, then using the qualitative data to explain the
quantitative results (Ivankova et al., 2006).
This study collected quantitative data from a survey and from lesson plans and
qualitative data through interviews. Priority was given to the quantitative phase because
that phase was designed to answer the majority of the research questions. The qualitative
and quantitative data were connected at two points in the study. Using the connecting
strategy as described by Creswell and Plano Clark (2011), quantitative results determined
the strategies used to collect qualitative data, including how participants were selected
and how the interview protocols were developed. The second point of interface was
during interpretation. A mixed-methods design was appropriate for this study because
eliminating either the quantitative or the qualitative data would result in an incomplete
view of TPACK among graduate students.
Previous research suggested that measuring TPACK using a self-report measure
alone may be inadequate (Archambault & Barnett, 2010). Beyond the complexity of
measuring a construct like TPACK is the inherent limitations of self-report measures.
Self-report instruments ask participants questions that measure knowledge, attitudes, and
practices and rely on answers based on the participants’ perception of the truth (Schwarz
& Sudman, 1994). In addition to issues of perception, cognitive psychologists warn of the
fallibility of human memory (Schacter, 1999). Further, Cook and Campbell (1979)
reported that participants tend to report what they think researchers want to see, or they
respond in ways that reflect positively on their abilities and knowledge. Due to the
complexities of measuring TPACK and the limitations of self-report measures, a mixed-methods
design featuring multiple strategies and analyses to minimize potential error and
maximize the meaning of data was chosen (Tashakkori & Teddlie, 2003).
Philosophical assumptions. The philosophy most associated with mixed-methods research is pragmatism. Pragmatism is a philosophy that “draws on many ideas,
including employing 'what works,' using diverse approaches, and valuing both objective
and subjective knowledge” (Creswell & Plano Clark, 2011, p. 43). According to
Tashakkori and Teddlie (2003), no fewer than 13 researchers have selected pragmatism as
the best philosophy for mixed-methods research. The pragmatic researcher is primarily
focused on his research questions rather than specific methods. He therefore selects the
methods deemed appropriate for answering the research questions whether or not they are
traditionally aligned with competing philosophical views like postpositivism or
constructivism (Creswell & Plano Clark, 2011).
As Creswell and Plano Clark (2011) suggested, it is preferable to select
philosophical assumptions based on the phases of the research design. The quantitative phase of
the study was based on an empirical measure of the specific constructs of TPACK using a
survey and rubric instrument; on this basis, the study is set in a postpositivist
philosophy. The philosophical assumptions shift as one enters into the qualitative phase.
In interviewing teachers, the researcher gained individual perspectives and practices
related to TPACK. Honoring individual participant responses to gain deeper
understanding and explanatory power is more in line with a constructivist philosophy
(Creswell & Plano Clark, 2011). Rather than confining the work to one view of the
world, the researcher takes advantage of two philosophies—postpositivism in the
quantitative phase, then constructivism in the qualitative phase—that enabled the
research questions to be answered more thoroughly.
Chapter 2
METHOD
Participants
The study participants were graduate education students (n = 82) enrolled in
education courses at a large university in the southwest who were working toward their
education graduate degrees during the Fall 2011 and Spring 2013 semesters. One
participant group was enrolled in an educational technology course (the treatment group)
and the other participant group was enrolled in a learning theory course (the
nonequivalent control group). The majority of the graduate students enrolled in these
courses were current or former kindergarten through twelfth grade (K-12) teachers.
The treatment group participants in this study included 57 master’s-level graduate
education students enrolled in an educational technology integration course at a large
southwestern university who were working toward their graduate degrees in educational
technology or accomplished teaching. Most (68%) were enrolled in the course because it
was a required course in their program. Only 23% of participants stated that they were
enrolled in the course because the topic of the course was of personal interest to them.
The majority of the participants were female (n = 45) and there were 12 male
participants. More than half were between the ages of 21 and 29 (58%), while 23% were
between 30 and 39 years old. The remaining 19% of participants were over 40 years old.
The majority of the participants were current or former teachers (83%) with an
average of 5.3 years of experience. Of those who were current or former teachers, 34%
taught in elementary schools, 36% taught in middle schools, and 15% taught in high
schools. There were also participants who taught in higher education contexts (15%).
Most participants described their knowledge of educational technology as intermediate
(68%), while 21% responded that they were beginners, and 11% responded that they
were advanced. No participants responded that their knowledge of educational
technology was at the expert level.
When participants responded to statements that described their current level of
classroom technology use on the presurvey, 33% responded that technology served as a
supplement to their existing curriculum, 22% responded that they had a lack of access to
technology or a lack of time to pursue implementation of technology, and 18% responded
that their use of technology occurred outside of their classroom in a lab environment. In
addition, 16% responded that they integrated technology to enrich understanding of
concepts, themes and processes, and to solve authentic problems.
The control group in this study was made up of 26 graduate student participants
enrolled in a learning theories course at a large southwestern university. All but one of
the students were pursuing education-related graduate degrees with the majority (85%)
pursuing a master’s degree. Most of the control group participants were female (n = 16)
while there were 10 male participants. Less than half were between the ages of 21 and 29
(42%), while 23% were between 30 and 39 years old. The remaining 35% of participants
were over 40 years old.
Over half (58%) of the participants were current or former teachers with an
average of 4.6 years of teaching experience. Of those who were current or former
teachers, 23% taught in elementary schools, 4% taught in middle schools, and 8% taught
in high schools. There were also participants who taught in higher education contexts
(19%) and one participant who worked in corporate training. Most participants described
their educational technology knowledge level as either beginner (42%) or intermediate
(39%). The remainder described their level of knowledge as advanced (19%). No
participants responded that their knowledge of educational technology was at the expert
level.
Power Analysis
Schmidt et al. (2009b) developed a TPACK self-report instrument for use with
preservice teachers. They tested the instrument with students in an introductory
instructional technology course. The researchers administered the survey to the students
during the first week of the semester and again during the final week of the semester. The
researchers reported means and standard deviations for the pre- and posttest
administrations, which could be used to calculate the effect size statistic Cohen's d. Among
the instrument's 10 subscales, d ranged from 0.33 to 1.44 with an average of 0.68. While
this study was not a replication of Schmidt et al. (2009b), their effect sizes provide a
frame of reference for the current study's power analysis. The power analysis software G*Power 3.1 (Faul, Erdfelder, Lang, & Buchner, 2007) was used to determine an appropriate sample size for the study, specifically for research question 2; the results appear in Table 1. The power calculations were based on the dependent-samples t-test statistic.
Cohen (1988) determined that a d value of 0.20 was a small effect, 0.50 a medium effect
and 0.80 a large effect. With a conservative estimate based on the Schmidt et al. (2009b)
smallest effect size of 0.33 and power at 0.80, the estimated sample size required was 59.
With a medium effect size of 0.50, which is less than the mean effect size from the
reference study, the estimated sample size required was 27.
Table 1
Sample Size Estimate Results from a Power Analysis for Dependent Samples

           Population d
Power      0.20     0.50     0.80
0.70       120      21       9
0.80       156      27       12
0.90       216      36       15
Note. Alpha = 0.05
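For readers without access to G*Power, the estimates in Table 1 can be approximated in open-source software. The sketch below is a minimal illustration using the statsmodels Python library; the one-tailed (alternative='larger') setting is an assumption chosen because it comes closest to reproducing the tabled values, and the exact options used in the original G*Power analysis may have differed.

    # Minimal sketch approximating the Table 1 power analysis with
    # statsmodels instead of G*Power. The one-tailed test is an assumed
    # setting; G*Power's exact configuration is not recorded here.
    import math
    from statsmodels.stats.power import TTestPower

    analysis = TTestPower()  # one-sample/paired (dependent-samples) t-test
    for power in (0.70, 0.80, 0.90):
        for d in (0.20, 0.50, 0.80):
            n = analysis.solve_power(effect_size=d, alpha=0.05,
                                     power=power, alternative='larger')
            print(f"power={power:.2f}, d={d:.2f}: n = {math.ceil(n)}")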
Educational Technology Course
The treatment group’s course is a 15-week, 500-level, online educational
technology course that focuses on effective methods for integrating digital technology
into teaching to assist learning. The course investigates uses of digital technology in
classrooms, creating learning environments rich with technology, and implementing
instructional design principles in the design and development of technology-based
materials for learning. Coverage of topics includes learning theory, instructional software,
productivity software, hyper and multimedia, assistive technology, and technology
integration in various content areas. The course follows a mastery learning approach
where learning units are organized by week and are followed by formative assessment,
individualized feedback, and additional opportunities to meet mastery assessment levels,
if necessary (Bloom, 1968). The course objectives are based on some of the National
Educational Technology Standards for Teachers (NETS-T) developed by the International
Society for Technology in Education (ISTE). Specifically they are:
III. Teaching, Learning and the Curriculum
Teachers implement curriculum plans that include methods and strategies for
applying technology to maximize student learning. Teachers:
A. facilitate technology-enhanced experiences that address content standards
and student technology standards.
B. use technology to support learner-centered strategies that address the
diverse needs of students.
C. apply technology to develop students’ higher order skills and creativity.
D. manage student learning activities in a technology-enhanced environment
(International Society for Technology in Education, 2000, p. 1).
In addition to weekly readings from Integrating Educational Technology into
Teaching (Roblyer & Doering, 2009) and other resources, students engaged in a variety
of instructional activities to meet the objectives of the class. Sample assignments are
described below.
• Educational technology rationale: Students prepared a rationale for integrating technology in education supported by evidence from empirical research.
• Educational software reviews: In small groups, students wrote detailed reviews of educational software, categorized by software type, that they might use in their classrooms. The categories included drill and practice, tutorial, simulation, games, and problem solving.
• Productivity tool lesson plan and model: Students wrote a lesson plan that integrated productivity software. They also designed a model that demonstrated the outcomes of the lesson. The plan met at least one of the following criteria: based on real-world problems; scaffolds and tools enhanced learning; provided opportunities for feedback, reflection, and revision; and built local and global learning communities.
• Google spreadsheet activity: Students designed an instructional activity that facilitated inquiry, problem solving, or decision making in the student's content area.
• Assistive technology: Students identified and described an instructional technology that failed to meet accessibility standards. Students addressed the impact that the inaccessible technology could have in the classroom and provided suggestions for how to remedy the issues with the technology.
• Choose your own adventure (COYA) website: Students designed and developed an instructional website using the COYA framework. For example, a COYA site could be based on a story of scientific discovery or a sequence of historical or future events that requires users to make decisions based on the content of the website.
Learning Theory Course
The control group was enrolled in a 15-week, 500-level course that focused on
psychology’s historical view of learning over the preceding century. The course is largely
lecture and discussion based, relying on students to complete regular readings from
“Psychology of Learning for Instruction” (Driscoll, 2004), “The Structure of Scientific
Revolutions” (Kuhn, 1970), and other selected articles. The topics of the course included
the following.
• Philosophical foundations
• Science, paradigms, and foundations of learning theory
• Behaviorism and alternate approaches
• Verbal learning and information processing
• The developmental perspective
• The role of emotion, attention, and pattern recognition
• Network and schema representations
• The neurobiological perspective
Assessment is based on two essay-based exams and three projects. The projects
include a written summary and oral presentation of research on learning theory, a
summary of a “Learning and the Brain” conference session, and participation in a book
study and discussion group based on one of three books.
Instruments and Data Sets
Questionnaire. Archambault and Crippen's (2009) questionnaire was designed to
measure TPACK in K-12 online educators. The questionnaire includes 24 items across seven subscales, each related to one of the seven Technological Pedagogical Content Knowledge (TPACK) constructs. The researchers conducted two rounds of think-aloud pilots in which
teacher participants were asked to explain what they were thinking while they answered
the survey questions. These pilots allowed the researchers to demonstrate construct
validity of the questionnaire. Experts in educational technology and online education
were asked to review the questionnaire to establish content validity. The Cronbach alpha
coefficient ranged from 0.70 to 0.89 for each subscale in the questionnaire (Archambault
& Crippen, 2009). The root question for each of the 24 items is, “How would you rate
your own ability in doing the following tasks associated with teaching in a distance
education setting?” (Archambault & Crippen, 2009, p. 88). Response options range from
Poor to Excellent on a 5-point, Likert-type scale. Each of the seven subscales includes
three to four items designed to measure TPACK constructs: content (e.g., “My ability to
create materials that map to specific district/state standards”), pedagogy (e.g., “My ability
to determine a particular strategy best suited to teach a specific concept”), technology
(e.g., “My ability to assist students with troubleshooting technical problems with their
personal computers”), technological content (e.g., “My ability to implement district
curriculum in an online environment”), technological pedagogy (e.g., “My ability to
create an online environment which allows students to build new knowledge and skills”),
content pedagogy (e.g., “My ability to distinguish between correct and incorrect problem
solving attempts by students”), and technological pedagogical content knowledge (e.g.,
“My ability to create an online environment which allows students to build new
knowledge and skills”) (Archambault & Crippen, 2009, p. 88). The survey requests that
participants rate their ability on each item so that their responses serve as a proxy for
participant practice. Research suggests that teacher knowledge and teacher practice are
closely related (Cochran-Smith & Lytle, 1999). In other words, teacher knowledge is
evident in teacher practice. This close relationship allows researchers to measure practice
with a theory based on knowledge (Dawson, Ritzhaupt, Liu, Rodriguez & Frey, in press).
Prior to administration to the two groups of participants in this study, eight items
in the questionnaire were minimally modified to address the teaching experience among
the participants. The minor modifications eliminated online teaching language and replaced it with language appropriate to face-to-face teaching. For example, item T was
changed from “My ability to implement district curriculum in an online environment” to
“My ability to implement district curriculum in a technology-rich environment” and item
X was changed from “My ability to meet the overall demands of online teaching” to “My
ability to meet the overall demands of teaching effectively in the 21st century”
(Archambault & Crippen, 2009, p. 88). The survey also included demographic items that
requested information like gender, age, grade level taught, years of teaching experience,
and content area taught.
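The subscale reliabilities reported for the questionnaire are Cronbach's alpha coefficients. As a point of reference, the sketch below shows how such a coefficient can be computed from raw item responses; the data frame and item names are hypothetical stand-ins, not the study's data.

    # Minimal sketch of a Cronbach's alpha computation for one subscale:
    # alpha = k/(k-1) * (1 - sum of item variances / variance of totals).
    import pandas as pd

    def cronbach_alpha(items: pd.DataFrame) -> float:
        k = items.shape[1]
        item_vars = items.var(axis=0, ddof=1).sum()
        total_var = items.sum(axis=1).var(ddof=1)
        return (k / (k - 1)) * (1 - item_vars / total_var)

    # Hypothetical 5-point Likert responses for a three-item subscale.
    pk_items = pd.DataFrame({
        "pk1": [4, 3, 5, 2, 4, 3],
        "pk2": [4, 2, 5, 3, 4, 3],
        "pk3": [5, 3, 4, 2, 4, 2],
    })
    print(round(cronbach_alpha(pk_items), 2))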
Rubric. The TPACK lesson plan rubric emerged from a project by Kereluik, et al.
(2010) in which they developed a coding scheme for lesson plans. The purpose for
creating the coding scheme was to provide teacher educators a tool to assess preservice
teachers’ TPACK through artifacts rather than self-report measures. Preliminary coding
schemes were developed by examining lesson plans from a summer preservice
educational technology course while the final coding scheme was refined by examining
lesson plans collected from 11 preservice teachers in a different educational technology
course. To validate the coding scheme, the researchers compared the results from the
coding with results from the TPACK self-report measure developed by Schmidt et al.
(2009a) (Kereluik, Casperson, & Akcaoglu, 2010). From this coding scheme, a rubric
was developed using “theory-driven thematic coding” (Akcaoglu, Kereluik, Casperson,
2011, p. 4261) and construct analysis of both the theory and a TPACK self-report
instrument developed by Schmidt et al. (2009a). The initial rubric included measures of
TK, TPK, TCK, PK, PCK and TPACK. The original rubric was refined in a second
project when the researchers assessed STEM lesson plans made available online by
companies like Hewlett Packard, Intel, and Microsoft. To establish interrater reliability,
12 lesson plans were randomly chosen and rated by two researchers. Cronbach's alpha for
interrater reliability was 0.88 while the intraclass correlation coefficient was 0.74 for a
single researcher and 0.85 for the average of the two researchers. The remaining lesson
plans were coded individually. Redundant items were removed from the rubric and an
item to assess lesson plan objectives was added (Akcaoglu et al., 2011). Each of the
seven TPACK constructs includes two to three items rated on a five-point scale. Some items were edited for clarity prior to scoring the lesson plans for this study.
Representative items include:
(CK) Provides clear lesson objectives; (PK) Assessments are aligned with the
objectives and instructional strategies; (TK) Provides rationale for technology
choice; (PCK) Presents appropriate strategies for developing understanding of the
subject content; (TPK) Chooses technologies enhancing student learning; (TCK)
Link between technology and content is obvious or explicit; (TPCK) Technology
enhances content objectives and instructional strategies. (Akcaoglu et al., 2011, p.
4262)
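The interrater statistics cited for the rubric (Cronbach's alpha and intraclass correlation coefficients for a single rater and for the average of raters) can be reproduced from long-format ratings data. The sketch below, using the pingouin library with hypothetical scores and column names, illustrates the computation; it is not the original analysis.

    # Minimal sketch of an intraclass correlation for two raters scoring
    # the same lesson plans; data values and names are hypothetical.
    import pandas as pd
    import pingouin as pg

    ratings = pd.DataFrame({
        "lesson": [1, 1, 2, 2, 3, 3, 4, 4],   # lesson plan id
        "rater":  ["A", "B"] * 4,             # two raters per plan
        "score":  [3.0, 3.5, 2.0, 2.5, 4.0, 4.0, 1.5, 2.0],
    })
    icc = pg.intraclass_corr(data=ratings, targets="lesson",
                             raters="rater", ratings="score")
    # ICC2 is the single-rater estimate; ICC2k is the average of raters.
    print(icc[["Type", "ICC"]])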
Lesson plans. Two lesson plans were requested from each participant. The first
lesson plan was one that the participant prepared prior to enrolling in the course that
included a technology component. The second lesson plan was the product of the final
assignment for the educational technology course. For this final assignment, participants
were asked to develop, implement, and evaluate an original lesson that demonstrated
effective teaching with technology. The lesson included the following sections:
Introduction, Rationale, Activity Description, Lesson Evaluation, and Conclusions. The
Rationale section included the participant’s theoretical perspective, the target population,
technology standards, content standards, and the instructional goals of the lesson. The
Activity description included the instructional objectives and a description of the activity
and procedures. Participants were required to address cultural connections and
considerations for learners with special needs.
Interviews. Select participants were interviewed based on their responses to the
precourse survey—a key process in the sequential explanatory design. Interview
questions included: 1) Can you tell me what a typical lesson looks like in your
classroom? 2) What are some of the most effective teaching methods that you use? 3)
What were some of the influences or resources that helped you gain your knowledge in
your specific content/subject area? 4) What are your opinions about the use of technology
in teaching? 5) Describe an early instance where you saw an effective use of technology
in teaching. 6) Describe the first time you taught with technology. 7) What has influenced
the use of technology in your teaching? The full interview script is included in
Appendix C.
Procedure
Quantitative phase. Both groups of participants were asked to respond to the
modified TPACK survey. The treatment group responded to the pre- and postsurvey
during the Fall 2011 semester. The control group responded to the pre- and postsurvey
during the Spring 2013 semester. The groups accessed the link to the survey from their
course learning management system or from an email sent by the researcher. The survey
was delivered and responses collected by an online survey service. The students
completed the survey in the first week of the courses so their responses were not
influenced by the content of the course. The participants progressed through the content
and activities of their courses over the following 15 weeks. Prior to the end of the
semester, the groups were asked to respond again to the same survey.
The treatment group was also asked to provide two lesson plans. Although these
artifacts could be considered qualitative data, they were analyzed quantitatively by
assigning rubric scores and using text analysis software. The first lesson plan that
participants provided was one that they had developed before the beginning of their technology integration course and that featured the use of technology.
instructed to upload a lesson plan to their course management system at the beginning of
the Fall 2011 semester. As a project required for the course, the participants developed an
original lesson plan that featured the use of technology and demonstrated their
understanding of educational theory and met appropriate standards from the National
Educational Technology Standards. This lesson plan was uploaded to the course
management system at the end of the Fall 2011 semester.
Lesson plans were scored using the TPACK rubric by external raters who had
experience and expertise in both teaching and educational technology. External raters
were used to eliminate potential researcher bias. The raters were not informed about the
design nor whether the lesson plans they rated were created before or after the course.
The raters were each given a rater guide which included directions on how to score the
lesson plans, a TPACK primer, and construct definitions. Lesson plans from participants
who submitted both a pre- and postcourse lesson plan were scored by two raters. One pair
of raters rated the precourse lesson plans and one pair of raters rated the postcourse
lesson plans. The intraclass correlation coefficient for the precourse lesson plan raters
was 0.79. The coefficient for the postcourse lesson plan raters was 0.82. These
coefficients suggest acceptable interrater reliability (Shrout & Fleiss, 1979). The rater
scores were then averaged for each rubric item for analysis. With coefficients that
suggested acceptable interrater reliability, a single rater scored the lesson plans from
participants who submitted either a precourse or postcourse lesson plan.
Qualitative phase. Sequential, mixed-methods study designs call for engaging in
what Creswell and Plano Clark (2011) described as connected mixed-methods data
analysis. In this study, the qualitative data were first connected to the quantitative data through the purposeful interview sample based on presurvey responses and the TPACK-related interview questions. Creation of a joint display, a table that arrays both quantitative and qualitative data so that the two data sources can be directly compared, was the second connection (Creswell & Plano Clark, 2011, p. 412).
In the qualitative phase, 11 participants were selected based on their responses to
the precourse survey. Selected participants’ precourse TPACK scores ranged from high to
low and these participants were arranged into a high scoring group, a mid scoring group
and a low scoring group. The grouping allowed investigations into the differences among
high, middle, and low scoring participants. Upon selection, participants were asked to
participate in phone interviews, which were conducted during the Fall 2012 semester.
Chapter 3
RESULTS
Quantitative survey and rubric data involving Technological Pedagogical Content
Knowledge (TPACK) were analyzed using both descriptive and inferential statistics with
statistical software package SPSS 20. Text analysis of the lesson plan artifacts was performed with the LIWClite7 (Pennebaker, Booth, & Francis, 2007) and KH Coder 2
(Higuchi, 2012) software programs. Qualitative interview and open-ended written
response data were analyzed using the qualitative analysis software package
HyperRESEARCH 3.5. See Table 2 for a summary of the research questions and analytic
approaches.
Table 2
Research Questions and Analytic Approaches

Research Question                                   Data Set               Analysis
1. What is the level of TPACK, reported and         TPACK survey scores    Descriptive
   demonstrated, among graduate teacher             TPACK rubric scores
   education students?
2. Do TPACK levels, reported and demonstrated,      TPACK survey scores    Dependent-samples t test
   change after participation in an educational     TPACK rubric scores    One-way analysis of variance
   technology course?                                                      Regression
3. Are self-reported levels of TPACK evident in     TPACK rubric scores    Correlations
   artifacts of teacher practice?                                          Bivariate regression
4. How does the language differ between artifacts   Lesson plans           Automated text analysis
   developed before the course and artifacts
   developed at the end of the course?
5. How do students with higher levels of TPACK      TPACK survey scores    Analysis of variance
   differ from those with lower levels of TPACK?    Interview transcripts  Regression
                                                                           Inductive data analysis
6. What teaching and learning experiences           Interview transcripts  Qualitative coding
   influenced students' knowledge of and practice                          Inductive data analysis
   in teaching?
TPACK Levels
The analyses in this section were conducted to answer the question: What is the
level of TPACK, reported and demonstrated, among graduate teacher education students?
Treatment group survey. Participants (n = 57) responded to a 19-item presurvey
designed to measure their perceived TPACK ability. Internal consistency estimates of
reliability were computed for each of the survey’s seven subscales: PK (0.81), CK (0.82),
TK (0.88), PCK (0.84), TPK (0.84), TCK (0.80), and TPACK (0.69). These estimates
indicate satisfactory reliability (Kline, 2000). Mean scores were highest on items related
to producing lesson plans (3.75), using a variety of teaching strategies (3.55), and
planning the sequence of concepts taught (3.53). Mean scores were lowest on the items
related to encouraging interactivity using technology (2.41), moderating web-based
student interactivity (2.44), and using technology to predict students' skills (2.61). See
Table 3 for the remaining mean item scores on the presurvey. TPACK construct scores
were calculated from the presurvey items and participant mean scores were highest in
PCK (3.40) and PK (3.36). Mean construct scores were lowest in TPK (2.64) and TK
(2.78). See Table 4 for the remaining mean construct scores from the presurvey.
Table 3
Mean TPACK Presurvey Item Scores

Item                                                          Scale  Treatment M  Treatment SD  Control M  Control SD
Vary teaching strategies                                      PK     3.55         0.74          3.05       1.16
Adjust method based on student performance                    PK     3.44         0.77          3.05       1.19
Determine strategy for concept                                PK     3.27         0.93          3.05       1.05
Plan sequence of concepts                                     CK     3.53         0.91          3.30       1.08
Scope of concepts                                             CK     3.47         0.88          3.33       1.20
Materials map to standards                                    CK     3.25         0.89          2.70       1.17
Address software issues                                       TK     2.98         0.90          3.30       0.98
Troubleshoot hardware                                         TK     2.71         1.02          3.10       1.00
Assist students with technology troubleshooting               TK     2.66         1.01          3.10       1.17
Produce lessons with topic appreciation                       PCK    3.75         0.84          3.11       1.10
Assist students in noticing concept connections               PCK    3.43         0.82          3.10       1.02
Anticipate student topic misconceptions                       PCK    3.31         0.81          2.85       0.93
Distinguish correct problem solving attempts                  PCK    3.27         0.78          3.05       1.10
Use technological representations                             TCK    3.09         1.00          3.40       0.94
Implement district curriculum with tech                       TCK    3.00         0.90          2.50       1.19
Use web-based courseware/applications                         TCK    2.69         0.84          2.90       1.02
Implement technology to support teaching methods              TPK    2.79         0.99          2.55       1.00
Students build knowledge/skills with technology               TPK    3.00         0.83          2.81       0.98
Moderate web-based student interactivity                      TPK    2.44         0.88          2.65       1.23
Encourage student interactivity with technology               TPK    2.41         1.01          2.40       0.99
Use results of tech-based assessments                         TPCK   3.13         0.88          2.75       0.91
Meet demands of 21st century teaching                         TPCK   2.89         0.85          3.00       1.11
Create effective representations with technology              TPCK   2.86         0.88          2.75       1.25
Predict students' topic skill/understanding with technology   TPCK   2.61         0.82          2.68       1.16
Note. Treatment n = 57. Control n = 25.
Table 4
Mean TPACK Presurvey Construct Scores

Construct  Treatment M  Treatment SD  Control M  Control SD
PK         3.36         0.72          3.08       1.01
TK         2.78         0.88          3.14       0.96
CK         3.37         0.75          3.14       0.98
PCK        3.40         0.67          3.05       0.89
TCK        2.89         0.78          2.93       0.86
TPK        2.64         0.75          2.62       0.84
TPACK      2.85         0.61          2.77       0.97
Note. Treatment n = 57. Control n = 25.
Participants (n = 39) responded to the same 19 items in a postsurvey. Internal consistency estimates of reliability were computed for each of the survey’s seven subscales: PK (0.85), CK (0.87), TK (0.90), PCK (0.88), TPK (0.87), TCK (0.84), and
TPACK (0.83). These estimates indicate satisfactory reliability (Kline, 2000). Mean
scores were highest on items related to producing lesson plans (3.90), planning the
sequence of concepts taught (3.87), and deciding on the scope of concepts taught (3.87).
Mean scores were lowest on the items related to troubleshooting hardware technical
problems (2.82), encouraging interactivity using technology (3.00), and assisting students
with troubleshooting technical problems (3.05). See Table 5 for the remaining mean item
scores on the postsurvey. Construct scores were calculated from the postsurvey items and
participant mean scores were highest in CK (3.85) and PK (3.68). Mean construct scores
were lowest in TK (2.99) and TPK (3.23). See Table 6 for the remaining mean construct
scores on the post-survey.
Table 5
Mean TPACK Postsurvey Item Scores

Item                                                          Sub   Treatment M  Treatment SD  Control M  Control SD
Vary teaching strategies                                      PK    3.85         0.93          3.24       1.04
Determine strategy for concept                                PK    3.62         0.78          3.15       0.88
Adjust method based on student performance                    PK    3.59         0.85          3.20       0.95
Materials map to standards                                    CK    3.82         1.05          2.70       1.38
Scope of concepts                                             CK    3.87         1.06          3.33       1.02
Plan sequence of concepts                                     CK    3.87         0.95          3.35       1.18
Troubleshoot hardware                                         TK    2.82         1.12          3.10       1.14
Address software issues                                       TK    3.10         1.02          3.35       1.14
Assist students with technology troubleshooting               TK    3.05         1.15          2.95       1.23
Distinguish correct problem solving attempts                  PCK   3.56         0.85          3.45       0.94
Anticipate student topic misconceptions                       PCK   3.51         0.76          3.25       0.91
Produce lessons with topic appreciation                       PCK   3.90         0.88          3.47       0.90
Assist students in noticing concept connections               PCK   3.69         0.73          3.35       0.88
Use technological representations                             TCK   3.67         0.90          3.40       0.88
Implement district curriculum with tech                       TCK   3.54         0.82          3.00       1.08
Use web-based courseware/applications                         TCK   3.26         0.94          3.15       0.99
Students build knowledge/skills with technology               TPK   3.44         0.79          3.33       1.02
Implement technology to support teaching methods              TPK   3.49         0.91          2.95       1.10
Moderate web-based student interactivity                      TPK   3.16         1.10          2.90       0.91
Encourage student interactivity with technology               TPK   3.00         1.01          2.75       1.07
Use results of tech-based assessments                         TPCK  3.26         0.99          2.90       1.17
Predict students' topic skill/understanding with technology   TPCK  3.13         0.91          2.74       0.93
Create effective representations with technology              TPCK  3.65         0.68          3.15       0.88
Meet demands of 21st century teaching                         TPCK  3.45         0.83          3.26       0.81
Note. Treatment n = 57. Control n = 25.
Table 6
Mean TPACK Postsurvey Construct Scores

Construct  Treatment M  Treatment SD  Control M  Control SD
PK         3.68         0.75          3.17       1.01
TK         2.99         1.00          3.11       1.10
CK         3.85         0.91          3.11       1.01
PCK        3.67         0.69          3.35       0.77
TCK        3.49         0.77          3.18       0.78
TPK        3.23         0.82          2.96       0.88
TPACK      3.28         0.77          3.02       0.74
Note. Treatment n = 57. Control n = 25.
Control group survey. Participants (n = 25) responded to a 19-item presurvey
designed to measure their perceived TPACK ability. Internal consistency estimates of
reliability were computed for each of the survey’s seven subscales: PK (0.85), CK (0.79),
TK (0.91), PCK (0.89), TPK (0.79), TCK (0.74), and TPACK (0.89). These estimates
indicate satisfactory reliability (Kline, 2000). Mean scores were highest on items related
to using content-specific technological representations (3.40), planning the sequence of
concepts taught (3.35), and deciding on the scope of concepts (3.33). Mean scores were
lowest on the items related to encouraging student interactivity using Web 2.0 tools
(2.40), implementing district curriculum with technology (2.50), and implementing
technologies to support different teaching methods (2.55). See Table 3 for the remaining
mean item scores on the presurvey. TPACK construct scores were calculated from the
presurvey items and participant mean scores were highest in CK (3.14) and TK (3.14).
Mean construct scores were lowest in TPK (2.62), and TPACK (2.77). See Table 4 for
the remaining mean construct scores from the presurvey.
Participants (n = 21) responded to the same 19 items in a postsurvey. Internal
consistency estimates of reliability were computed for each of the survey’s seven
subscales: PK (0.87), CK (0.82), TK (0.95), PCK (0.87), TPK (0.89), TCK (0.71), and
TPACK (0.89). These estimates indicate satisfactory reliability (Kline, 2000). Mean
scores were highest on items related to producing lesson plans that demonstrate topic appreciation (3.47), distinguishing appropriate problem solving attempts by students
(3.45), and using content-specific technological representations (3.40). Mean scores were
lowest on the items related to creating materials that map to standards (2.70), using
technology to predict student skill/understanding (2.74), and encouraging interactivity
among students with technology (2.75). See Table 5 for the remaining mean item scores
on the postsurvey. TPACK construct scores were calculated from the postsurvey items
and participant mean scores were highest in PCK (3.35) and TCK (3.18). Mean construct
scores were lowest in TPK (2.96) and TPACK (3.02). See Table 6 for the remaining
mean construct scores from the postsurvey.
Lesson plan rubric scores. Treatment group participants (n = 35) submitted a
lesson plan that integrated technology and was developed before the course. Internal
consistency estimates of reliability were computed for each of the rubric’s seven
subscales: PK (0.83), CK (0.93), TK (0.93), PCK (0.94), TPK (0.89), TCK (0.95), and
TPACK (0.98). These estimates indicate satisfactory reliability (Kline, 2000). Mean
scores were highest on items related to evidence of content knowledge (3.50), providing
clear objectives (3.40), and meaningful and relevant content (3.29). Mean scores were
lowest on the items related to providing rationale for technology choice (1.59), providing
rationale for delivering instruction with technology (1.71), and demonstrating
understanding of technology (2.04). See Table 7 for the remaining mean item scores from
the prelesson plan rubric. TPACK construct scores were calculated from the prelesson
plan rubric items and participant mean scores were highest in CK (3.45) and PK (3.17).
Mean construct scores were lowest in TPK (1.96) and TK (1.98). See Table 8 for the
remaining mean construct scores from the precourse rubric.
Table 7
Mean TPACK Prelesson Plan Rubric Item Scores

Item                                                          Construct  M     SD
Aligned assessments objectives & strategies                   PK         3.04  1.35
Lesson organizes/manages student behavior                     PK         3.04  1.19
Content is meaningful and relevant                            PK         3.14  0.90
Provides clear lesson objectives                              CK         3.29  0.90
Evidence of content knowledge                                 CK         3.50  1.19
Lesson plan incorporates technology                           TK         2.31  1.16
Provides rationale for technology                             TK         1.59  1.04
Demonstrates understanding of technology                      TK         2.04  1.13
Effective/appropriate teaching strategies                     PCK        3.19  1.03
Awareness of student misconceptions                           PCK        2.17  1.11
Appropriate strategies for content                            PCK        3.09  1.10
Method enhancing technology                                   TPK        2.07  1.18
Student centered technology                                   TPK        2.10  1.13
Rationale for technology choice to deliver instruction        TPK        1.71  1.05
Appropriate technologies for subject                          TCK        2.53  1.19
Explicit link between technology and content                  TCK        2.36  1.27
Uses content, pedagogy, and technology strategies in concert  TPACK      2.34  1.10
Technology enhances content and strategies                    TPACK      2.33  1.18
Table 8
Mean TPACK Prelesson Plan Rubric Construct Scores

Construct  M     SD
PK         3.17  1.00
CK         3.45  1.24
TK         1.98  1.04
PCK        2.81  1.02
TPK        1.96  1.00
TCK        2.44  1.20
TPACK      2.34  1.12
Treatment group participants (n = 41) also submitted a lesson plan as a final
project for the educational technology course. Those lesson plans were scored using the
same TPACK rubric and same method as with the precourse lesson plans. Internal
consistency estimates of reliability were computed for each of the rubric’s seven subscales: PK (0.76), CK (0.61), TK (0.91), PCK (0.87), TPK (0.61), TCK (0.84), and TPACK (0.89). These estimates indicate questionable to satisfactory reliability (Kline, 2000). Mean scores were highest on items related to choosing student-centered technologies (3.68), incorporating technology into lesson plans (3.65), and choosing content-appropriate technologies (3.57). Mean scores were lowest on items related to awareness of possible
student misconceptions (2.29), organization and procedures for managing student
behavior (2.65), and presenting appropriate learning strategies for the content (3.01). See
Table 9 for the remaining mean item scores from the postlesson plan rubric. TPACK
construct scores were calculated from the postlesson plan rubric items and participant
mean scores were highest in TK (3.49) and TCK (3.43). Mean construct scores were
lowest in PCK (2.84) and PK (2.95). See Table 10 for the remaining mean construct
scores from the postcourse rubric.
Table 9
Mean TPACK Postlesson Plan Rubric Item Scores

Item                                                          Construct  M     SD
Aligned assessments objectives and strategies                 PK         2.91  1.12
Lesson organizes/manages student behavior                     PK         2.65  0.96
Content is meaningful and relevant                            PK         3.29  0.76
Provides clear lesson objectives                              CK         2.82  1.11
Evidence of content knowledge                                 CK         3.45  0.82
Lesson plan incorporates technology                           TK         3.65  0.73
Provides rationale for technology                             TK         3.35  1.14
Demonstrates understanding of technology                      TK         3.48  0.97
Effective/appropriate teaching strategies                     PCK        3.22  0.89
Awareness of student misconceptions                           PCK        2.29  0.86
Appropriate strategies for content                            PCK        3.01  0.95
Method enhancing technology                                   TPK        3.22  0.77
Student centered technology                                   TPK        3.68  0.80
Rationale for technology choice to deliver instruction        TPK        3.23  1.11
Appropriate technologies for subject                          TCK        3.57  0.79
Explicit link between technology and content                  TCK        3.28  1.18
Uses content, pedagogy, and technology strategies in concert  TPACK      3.07  1.02
Technology enhances content and strategies                    TPACK      3.01  1.13
Table 10
Mean TPACK Postlesson Plan Rubric Construct Scores

Construct  M     SD
PK         2.95  0.79
CK         3.13  0.83
TK         3.49  0.89
PCK        2.84  0.80
TPK        3.38  0.68
TCK        3.43  0.94
TPACK      3.04  1.02
TPACK Change
The analyses in this section were conducted to answer the question: Do TPACK
levels, reported and demonstrated, change after participation in an educational technology
course?
Treatment group survey. Mean difference survey scores were calculated for
participants who responded to both the pre- and postcourse survey (n = 38). Mean
difference scores (presurvey minus postsurvey) were largest in magnitude on items related to implementing technology to support teaching methods (-0.84), using technology to create effective representations (-0.83), and moderating web-based student interactivity (-0.81). Mean difference scores were smallest in magnitude on items related to producing lesson plans (-0.11), adjusting teaching based on student performance (-0.13), and troubleshooting software issues (-0.16). Construct mean difference scores were largest in magnitude in TPK (-0.68) and TCK (-0.67) and smallest in PK (-0.25) and PCK (-0.26).
Table 11 includes the remaining item and construct mean difference scores.
Paired-samples t-tests were conducted for each of the survey items and each of
the survey subscales to evaluate differences between pre- and postsurvey item and
subscale scores. Results indicate that postsurvey scores were significantly higher than
presurvey scores for each of the survey subscales. Results also indicated that postsurvey
scores were significantly higher than presurvey scores for 19 of the 24 survey items. The
standardized effect size index (d) ranged from small (0.35) to large (1.14) for significant
mean differences. This suggests small to large changes in TPACK constructs after
participating in an educational technology course. Table 11 displays mean differences,
standard deviations, effect sizes, and p-values for each of the survey items and constructs.
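For clarity, the analysis behind Table 11 pairs each participant's pre- and postsurvey scores, tests the mean difference with a dependent-samples t-test, and expresses the change as Cohen's d (mean difference divided by the standard deviation of the differences). The sketch below illustrates the computation with hypothetical scores; it is not the SPSS analysis itself.

    # Minimal sketch of a paired-samples t-test with Cohen's d for
    # dependent samples; the score arrays are hypothetical.
    import numpy as np
    from scipy import stats

    pre  = np.array([2.6, 3.0, 2.3, 3.3, 2.0, 2.6, 3.0])
    post = np.array([3.3, 3.6, 3.0, 3.3, 2.6, 3.3, 3.6])

    t, p = stats.ttest_rel(pre, post)
    diff = pre - post                      # presurvey minus postsurvey
    d = abs(diff.mean()) / diff.std(ddof=1)
    print(f"t = {t:.2f}, p = {p:.3f}, d = {d:.2f}")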
Table 11
Item and Construct Mean Difference from Treatment Group Pre- to Postsurvey

Construct/Item                                                 Pre to Post M  SD    d     p
PK                                                             -0.25          0.57  0.45  0.01
  Vary teaching strategies                                     -0.26          0.72  0.36  0.03
  Determine strategy for concept                               -0.37          0.85  0.43  0.01
  Adjust method based on student performance                   -0.13          0.88  0.15  0.36
CK                                                             -0.39          0.59  0.67  0.00
  Materials map to standards                                   -0.45          0.86  0.52  0.00
  Scope of concepts                                            -0.37          0.94  0.39  0.02
  Plan sequence of concepts                                    -0.27          0.77  0.35  0.04
TK                                                             -0.30          0.70  0.43  0.01
  Troubleshoot hardware                                        -0.18          0.90  0.21  0.21
  Address software issues                                      -0.16          0.82  0.19  0.24
  Assist students with technology troubleshooting              -0.55          0.95  0.58  0.00
PCK                                                            -0.26          0.58  0.44  0.01
  Distinguish correct problem solving attempts                 -0.32          0.90  0.35  0.04
  Anticipate student topic misconceptions                      -0.26          0.76  0.35  0.04
  Produce lessons with topic appreciation                      -0.11          0.73  0.14  0.38
  Assist students in noticing concept connections              -0.27          0.77  0.35  0.04
TCK                                                            -0.67          0.59  1.14  0.00
  Use technological representations                            -0.63          0.82  0.77  0.00
  Implement district curriculum with technology                -0.66          0.67  0.98  0.00
  Use web-based courseware/applications                        -0.68          0.82  0.83  0.00
TPK                                                            -0.68          0.75  0.91  0.00
  Students build knowledge/skills with technology              -0.55          0.89  0.62  0.00
  Implement tech to support teaching methods                   -0.84          1.00  0.84  0.00
  Moderate web-based student interactivity                     -0.81          0.97  0.84  0.00
  Encourage student tech interactivity                         -0.70          0.97  0.73  0.00
TPACK                                                          -0.45          0.74  0.61  0.00
  Use results of technology-based assessments                  -0.24          1.02  0.23  0.16
  Predict students' topic skill/understanding with technology  -0.51          1.15  0.45  0.01
  Create effective representations with technology             -0.83          0.85  0.99  0.00
  Meet demands of 21st century teaching                        -0.57          0.73  0.78  0.00
Control group survey. Paired-samples t-tests were conducted for each of the
survey items and each of the survey subscales to evaluate differences between pre- and
postsurvey item and subscale scores. Participants scored significantly higher on the
postsurvey on the item related to distinguishing appropriate problem solving attempts by
students, t(19) = -2.18, p = 0.04, and on the item related to building student knowledge
and skills with technology, t(20) = -2.23, p = 0.04. Participants also scored significantly
higher on the postsurvey subscales associated with those two items: PCK, t(20) = -2.47, p
= 0.02 and TPK, t(20) = -2.17, p = 0.04. The remaining item and construct scores were
not significantly different from pre- to postsurvey. These results suggest that the control
group’s scores increased from pre- to postsurvey in PCK and one associated item as well
as in TPK and one associated item. Table 12 displays mean differences, standard
deviations, effect sizes, and p-values for each of the survey items and constructs.
Nine participants in the control group responded that they were enrolled in one or
more educational technology courses during the semester the pre- and postsurveys were
administered. Paired-samples t-tests were conducted for each of the survey items and
constructs, excluding participants who were enrolled in an educational technology course.
All tests were nonsignificant, including tests for item and construct scores that were
significantly different in the full control group sample. The item related to distinguishing
appropriate problem solving attempts by students was not significant, t(13) = -0.90, p =
0.39, and the item related to building student knowledge and skills with technology was
also not significant, t(14) = -0.94, p = 0.36. In addition, PCK scores, t(14) = -1.10, p =
0.29, and TPK scores, t(14) = -1.44, p = 0.17 were not significantly different from pre- to
postsurvey. These results suggest that the item and construct scores did not change from
pre- to postsurvey for participants in the control group who were not enrolled in an
educational technology course (n = 15).
Table 12
Item and Construct Mean Difference from Control Group Pre- to Postsurvey

Construct/Item                                                 Pre to Post M  SD    d     p
PK                                                             -0.09          0.85  0.10  0.64
  Vary teaching strategies                                     -0.19          0.98  0.19  0.38
  Determine strategy for concept                               -0.10          0.72  0.14  0.54
  Adjust method based on student performance                   -0.15          1.23  0.12  0.59
CK                                                              0.03          0.83  0.04  0.86
  Materials map to standards                                    0.00          0.92  0.00  1.00
  Scope of concepts                                             0.00          1.05  0.00  1.00
  Plan sequence of concepts                                    -0.05          1.10  0.05  0.84
TK                                                              0.03          0.67  0.05  0.83
  Troubleshoot hardware                                         0.00          0.95  0.00  1.00
  Address software issues                                      -0.05          0.69  0.07  0.75
  Assist students with technology troubleshooting               0.15          0.93  0.16  0.48
PCK                                                            -0.31          0.57  0.54  0.02
  Distinguish correct problem solving attempts                 -0.40          0.82  0.49  0.04
  Anticipate student topic misconceptions                      -0.40          0.88  0.45  0.06
  Produce lessons with topic appreciation                      -0.37          0.83  0.44  0.07
  Assist students in noticing concept connections              -0.25          0.79  0.32  0.17
TCK                                                            -0.25          0.79  0.31  0.18
  Use technological representations                             0.00          0.92  0.00  1.00
  Implement district curriculum with technology                -0.50          1.19  0.42  0.08
  Use web-based courseware/applications                        -0.25          1.16  0.21  0.35
TPK                                                            -0.35          0.73  0.47  0.04
  Students build knowledge/skills with technology              -0.52          1.08  0.49  0.04
  Implement technology to support teaching methods             -0.40          1.14  0.35  0.13
  Moderate web-based student interactivity                     -0.25          1.16  0.21  0.35
  Encourage student technology interactivity                   -0.35          0.93  0.38  0.11
TPACK                                                          -0.25          0.89  0.28  0.22
  Use results of technology-based assessments                  -0.15          1.09  0.14  0.55
  Predict students' topic skill/understanding with technology  -0.05          1.13  0.05  0.84
  Create effective representations with technology             -0.40          1.14  0.35  0.13
  Meet demands of 21st century teaching                        -0.26          0.81  0.33  0.17
Welch analysis of variance (ANOVA) F-tests were conducted to evaluate
differences in TPACK presurvey construct scores between the treatment group and the
control group. The independent variable had two levels: treatment and control. The
dependent variables were the seven TPACK constructs. None of the tests were
significant. These results suggest that the scores on the seven TPACK constructs from the
treatment group did not differ significantly from the scores from the control group. Table
13 displays F-statistics, degrees of freedom, and p-values for each of the construct tests.
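Welch's ANOVA relaxes the equal-variance assumption of the standard F-test, which is appropriate here given the unequal group sizes (57 vs. 25). The sketch below illustrates the test with the pingouin library on hypothetical construct scores; it is not the analysis used in the study.

    # Minimal sketch of a Welch ANOVA comparing one TPACK construct
    # between two groups; data values are hypothetical.
    import pandas as pd
    import pingouin as pg

    df = pd.DataFrame({
        "group": ["treatment"] * 6 + ["control"] * 5,
        "pk":    [3.3, 3.6, 2.6, 3.0, 4.0, 3.3, 3.0, 2.3, 3.6, 2.6, 3.0],
    })
    aov = pg.welch_anova(data=df, dv="pk", between="group")
    print(aov[["ddof1", "ddof2", "F", "p-unc"]])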
Table 13
Treatment and Control Group Presurvey Construct Score Differences

Construct  Welch’s F  df1, df2   p
PK         0.67       1, 39.37   0.42
CK         0.71       1, 42.62   0.41
TK         1.61       1, 43.59   0.21
PCK        1.84       1, 40.96   0.18
TCK        0.61       1, 47.98   0.44
TPK        0.00       1, 45.39   0.95
TPACK      0.02       1, 34.71   0.89
Lesson plan rubric scores. Mean difference rubric scores were calculated for
participants who submitted both the precourse and final project lesson plan (n = 28).
Mean difference scores (precourse score minus final project score) were largest in magnitude on items related to choosing student-centered technologies (-1.48), providing a rationale for technology choice (-1.39), and incorporating technology in lesson plans (-1.30). Mean difference scores were smallest in magnitude on items related to including aligned objectives and instructional strategies (0.00), demonstrating awareness of student misconceptions (-0.11), and selecting effective teaching strategies (-0.13). TPACK construct mean difference scores were also calculated and were largest in magnitude in TK (-1.30) and TPK (-1.28). Mean difference construct scores were smallest in magnitude in PCK (0.01) and PK (0.24). Table 14
includes the remaining item and construct mean difference scores.
Paired-samples t-tests were conducted for each of the rubric items and each of the
rubric subscales to evaluate differences between pre- and postcourse rubric item and
subscale scores. Results indicate that postcourse rubric scores were significantly higher
than precourse rubric scores for each of the technology-related rubric subscales (TK,
TPK, TCK, and TPACK). Results also indicated that postcourse rubric scores were
significantly higher than precourse rubric scores for 11 of the 18 rubric items. The
standardized effect size index (d) ranged from medium (0.50) to large (1.38) for
significant mean differences. This suggests medium to large changes in TPACK
constructs after participating in an educational technology course. Table 14 displays
mean differences, standard deviations, effect sizes, and p-values for each of the rubric
items and constructs.
Table 14
Item and Construct Mean Difference from Pre- to Postcourse Rubric

Construct/Item                                                 M (Pre - Post)  SD    d     p
PK                                                              0.24           1.01  0.24  0.21
  Aligned assessments objectives and strategies                 0.00           1.19  0.00  1.00
  Lesson organizes/manages student behavior                     0.66           1.21  0.55  0.01
  Content is meaningful and relevant                            0.07           1.10  0.06  0.73
CK                                                              0.31           1.28  0.24  0.21
  Provides clear lesson objectives                              0.41           1.39  0.29  0.13
  Evidence of content knowledge                                 0.21           1.27  0.17  0.38
TK                                                             -1.30           1.04  1.26  0.00
  Lesson plan incorporates technology                          -1.30           0.98  1.32  0.00
  Provides rationale for technology                            -1.39           1.23  1.13  0.00
  Demonstrates understanding of technology                     -1.21           1.17  1.03  0.00
PCK                                                             0.01           1.04  0.01  0.95
  Effective/appropriate teaching strategies                    -0.13           1.26  0.10  0.60
  Awareness of student misconceptions                          -0.11           1.04  0.10  0.59
  Appropriate strategies for content                            0.27           1.08  0.25  0.20
TPK                                                            -1.28           1.05  1.21  0.00
  Method enhancing technology                                  -1.30           1.36  0.96  0.00
  Student centered technology                                  -1.48           1.08  1.38  0.00
  Rationale for technology choice to deliver instruction       -1.05           1.26  0.84  0.00
TCK                                                            -0.84           1.11  0.76  0.00
  Appropriate technologies for subject                         -1.04           1.01  1.03  0.00
  Explicit link between technology and content                 -0.64           1.30  0.50  0.01
TPACK                                                          -0.64           1.12  0.57  0.01
  Uses content, pedagogy, and technology strategies in concert -0.61           1.07  0.56  0.01
  Technology enhances content/strategies                       -0.68           1.23  0.55  0.01
Survey and Rubric Construct Score Correlations
The analyses in this section were conducted to answer the question, “Are self-reported levels of TPACK evident in artifacts of teacher practice?”
Construct score correlations across instruments. Correlation coefficients were
computed between the seven TPACK subscales from the survey and the seven TPACK
subscales from the rubric. The results from the presurvey and precourse rubric score
correlation analyses presented in Table 15 show that only one of the correlations (survey
PK with rubric TCK) was significant. The results from the postsurvey and postcourse
rubric score correlation analyses presented in Table 16 show that only three of the
correlations (rubric TK with survey PK, CK, and PCK) were significant. These results
suggest that the TPACK subscales from the survey instrument may not measure the same
constructs as the TPACK subscales from the rubric instrument.
Table 15
Presurvey with Precourse Rubric Correlations

                  Survey
Rubric     PK      TK      CK      PCK     TPK     TCK     TPACK
PK         -0.193  -0.241  -0.118  -0.254  -0.227  -0.172  -0.152
TK          0.321  -0.245   0.210   0.149  -0.022   0.218   0.130
CK         -0.090  -0.240   0.046  -0.185  -0.193  -0.041  -0.004
PCK        -0.224  -0.306  -0.109  -0.258  -0.266  -0.216  -0.215
TPK         0.252  -0.214   0.138   0.135   0.049   0.234   0.201
TCK         0.375* -0.164   0.159   0.237   0.080   0.286   0.230
TPACK       0.319  -0.169   0.111   0.236   0.067   0.248   0.171
Note. n = 32. * p < 0.05 ** p < 0.01

Table 16
Postsurvey with Postcourse Rubric Correlations

                  Survey
Rubric     PK      TK      CK      PCK     TPK     TCK     TPACK
PK          0.030  -0.088   0.221   0.079   0.090   0.192   0.081
TK          0.340*  0.057   0.385*  0.343*  0.162   0.327   0.332
CK         -0.010   0.055   0.073   0.113   0.003   0.132   0.037
PCK         0.016  -0.127   0.139   0.110  -0.079   0.063   0.043
TCK         0.144  -0.158   0.235   0.145  -0.029   0.143   0.126
TPK         0.218   0.071   0.266   0.267   0.082   0.303   0.310
TPACK       0.153  -0.056   0.288   0.174   0.091   0.256   0.178
Note. n = 34. * p < 0.05 ** p < 0.01
Construct score correlations within instruments. Correlation coefficients were
computed among the seven TPACK constructs on the pre- and postsurvey. The results of
the presurvey correlation analyses presented in Table 17 show that 15 out of 21
correlations were statistically significant and were greater than or equal to 0.28. The
results of the postsurvey correlation analyses presented in Table 18 show that 20 out of
the 21 correlations were statistically significant and were greater than or equal to 0.34.
The results suggest that the survey instrument may not adequately discriminate among
the seven TPACK constructs.
Table 17
Presurvey Construct Correlations

Construct  PK       TK       CK       PCK      TPK      TCK      TPACK
PK         1
TK         -0.009   1
CK         0.674**  -0.050   1
PCK        0.770**  0.062    0.710**  1
TPK        0.190    0.522**  0.253    0.425**  1
TCK        0.281*   0.406**  0.325*   0.456**  0.747**  1
TPACK      0.486**  0.262    0.477**  0.638**  0.669**  0.827**  1
Note. n = 56. * p < 0.05 ** p < 0.01

Table 18
Postsurvey Construct Correlations

Construct  PK       TK       CK       PCK      TCK      TPK      TPACK
PK         1
TK         0.276    1
CK         0.881**  0.344*   1
PCK        0.882**  0.369*   0.836**  1
TCK        0.649**  0.582**  0.734**  0.705**  1
TPK        0.515**  0.584**  0.600**  0.559**  0.827**  1
TPACK      0.655**  0.558**  0.671**  0.661**  0.790**  0.779**  1
Note. n = 39. * p < .05 ** p < .01
Correlation coefficients were computed among the seven TPACK constructs on
the pre- and postcourse rubric scores. The results of the precourse rubric score correlation
analyses presented in Table 19 show that 15 out of 21 correlations were statistically
significant and were greater than or equal to 0.36. The results of the postcourse rubric
score correlation analyses presented in Table 20 show that 21 out of the 21 correlations
were statistically significant and were greater than or equal to 0.39. The results suggest
that the rubric instrument may not adequately discriminate among the seven TPACK
constructs.
Table 19
Precourse Rubric Construct Correlations

Construct  PK       TK       CK       PCK      TPK      TCK      TPACK
PK         1
TK         0.467**  1
CK         0.750**  0.334    1
PCK        0.765**  0.361*   0.698**  1
TPK        0.442**  0.946**  0.314    0.414*   1
TCK        0.425*   0.898**  0.309    0.257    0.845**  1
TPACK      0.463**  0.896**  0.303    0.292    0.856**  0.962**  1
Note. n = 35. * p < .05 ** p < .01

Table 20
Postcourse Rubric Construct Correlations

Construct  PK       TK       CK       PCK      TCK      TPK      TPACK
PK         1
TK         0.499**  1
CK         0.746**  0.385*   1
PCK        0.728**  0.509**  0.606**  1
TCK        0.638**  0.649**  0.441**  0.609**  1
TPK        0.511**  0.872**  0.434**  0.602**  0.579**  1
TPACK      0.717**  0.650**  0.528**  0.576**  0.882**  0.631**  1
Note. n = 41. * p < .05 ** p < .01
Lesson Plan Content
The analyses in this section were conducted to answer the question: How does the
language differ between artifacts developed before the course and artifacts developed at
the end of the course? Matching pre- and postlesson plans (n = 28) were analyzed for
word count, standard linguistic dimensions (e.g., pronouns, articles, and verbs), words
related to psychological processes (e.g., cognitive, affective, and social), and personal
concerns (e.g., work, achievement, and leisure) with the LIWClite7 text analysis
software. Paired-samples t-tests were then conducted for each of the word categories to
evaluate differences between pre- and postlesson plan word usage. In addition to word count, significant differences were found in the following categories: verbs (walk, went, see), social processes (talk, share, they), affective processes (happy, cry, abandon), positive emotion (love, nice, sweet), cognitive processes (cause, know, ought), and achievement (earn, hero, win). Postlesson plans featured significantly more words (950), t(27) = -2.40, p = 0.02, and a significantly higher percentage usage of verbs (1.74%), t(27) = 2.88, p = 0.008. Postlesson plans used a significantly lower percentage of words related to social processes (1.44%), t(27) = 3.20, p = 0.004. Postlesson plans featured a significantly higher percentage of words related to affective processes (0.89%), t(27) = -2.86, p = 0.008, positive emotion (1.22%), t(27) = -5.09, p < 0.001, cognitive processes (1.86%), t(27) = -2.42, p = 0.02, and achievement (1.01%), t(27) = -2.91, p = 0.007. These results suggest that participants’ word choice in writing lesson plans differed from the prelesson plan to the postlesson plan.
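The LIWC comparison reduces each lesson plan to the percentage of its words that fall in a category dictionary and then tests the paired pre/post percentages. The sketch below illustrates that logic with a toy word list and hypothetical texts; LIWC's actual dictionaries are proprietary and far more extensive.

    # Minimal sketch of a LIWC-style category percentage and paired test;
    # the category word list and texts are hypothetical.
    import re
    from scipy import stats

    POSITIVE_EMOTION = {"love", "nice", "sweet", "happy", "good"}

    def category_pct(text):
        words = re.findall(r"[a-z']+", text.lower())
        hits = sum(w in POSITIVE_EMOTION for w in words)
        return 100.0 * hits / len(words) if words else 0.0

    pre_texts  = ["Students will read the story.", "Work in groups to solve it."]
    post_texts = ["Students will love making a nice video.", "Share the good work."]

    pre_pct  = [category_pct(t) for t in pre_texts]
    post_pct = [category_pct(t) for t in post_texts]
    t, p = stats.ttest_rel(pre_pct, post_pct)
    print(pre_pct, post_pct, t, p)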
Word frequencies were calculated for the prelesson (n = 35) and postlesson plans
(n = 41) with the KH Coder text analysis software. Table 21 lists the most common
nouns, verbs, and technology-related words found in each lesson plan group. Collocation
statistics were calculated with KH Coder for the most common noun, verb, and
technology-related words in the pre- and postlesson plans and are displayed in Tables 22
through 26. The collocation tables show that student in the prelesson plans is collocated with words that describe what the student will do, be asked to do, or will be able to do. In the postlesson plans, student is situated among similar words but is also collocated with allow and technology, which act as enablers for student actions. Use is the most common verb in both the pre- and postlesson plans. In both sets of lesson plans, use is collocated among words that suggest that students are using tools or processes to accomplish tasks. This context also suggests a focus on tools and processes rather than on the tasks to be accomplished. The pre- and postlesson plans share the three most common technology-related words of video, computer, and technology, although in a slightly different order. In both lesson plan groups, video is collocated with words that suggest that students are making, watching, and posting videos. Computer is collocated with words that suggest that students access computers in labs to complete work. Technology is collocated with words that suggest that students use technology and that technology is integrated into lessons and connected to standards.
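A collocation count of the kind shown in Tables 22 through 26 simply tallies, for a key word, the words that appear within five tokens to its left or right. The sketch below uses naive tokenization on a hypothetical sentence; KH Coder additionally lemmatizes and filters by part of speech.

    # Minimal sketch of left/right collocation counts within a
    # five-word window; text and tokenization are simplified.
    import re
    from collections import Counter

    def collocates(text, key, window=5):
        tokens = re.findall(r"[A-Za-z']+", text.lower())
        left, right = Counter(), Counter()
        for i, tok in enumerate(tokens):
            if tok == key:
                left.update(tokens[max(0, i - window):i])
                right.update(tokens[i + 1:i + 1 + window])
        return left, right

    text = ("Students use the computer lab to make a video and "
            "students post the video online.")
    left, right = collocates(text, "video")
    print(left.most_common(3), right.most_common(3))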
Table 21
Pre- and Postlesson Plan Noun, Verb, and Technology Word Frequencies

Prelesson Plans                                     Postlesson Plans
Noun         n    Verb      n    Technology  n      Noun         n     Verb      n    Technology  n
student      621  use       232  video       37     student      2224  use       711  technology  335
class        117  write     140  computer    28     lesson       591   create    306  computer    125
lesson       113  work      79   technology  24     technology   335   make      260  video       122
question     110  create    77   Google      16     class        307   learn     227  website     99
group        109  make      73   software    14     activity     296   work      199  blog        76
word         100  read      73   website     12     teacher      267   write     188  Google      68
teacher      95   learn     72   blog        9      information  240   need      164  glog        57
fraction     80   ask       70   SmartBoard  7      group        213   complete  148  PowerPoint  51
objective    67   explain   64   Graphic     6      project      204   include   140  internet    48
activity     66   explore   62   Internet    6      time         192   think     140  screencast  48
information  64   complete  59   Quest       6      assignment   171   allow     139  Edmodo      39
problem      59   follow    46   Web         6      question     171   follow    119  software    35
child        57   discuss   45   internet    5      work         143   explore   117  web         33
number       56   need      43   Edmodo      5      way          135   choose    100  Tumblr      31
point        52   check     39   PowerPoint  5      example      128   know      100  Wordle      27
time         50   share     37   YouTube     4      topic        126   provide   99   Internet    26
cloud        49   include   36                      computer     125   help      98   Prezi       24
portion      48   help      32                      fraction     122   teach     96   Blog        19
picture      47   identify  32                      video        122   require   95   Microsoft   19
concept      46   look      32                      grade        118   develop   93   Webquest    15
Table 22
Collocation Table for Student

Prelesson Plan                       Postlesson Plan
Word       Total  Left*  Right*      Word      Total  Left*  Right*
write      43     8      35          use       178    41     137
work       36     1      35          lesson    131    107    24
ask        35     22     13          able      107    4      103
able       34     0      34          student   94     48     46
teacher    34     15     19          work      91     26     65
check      33     23     10          create    86     21     65
explore    33     15     18          teacher   86     43     43
objective  30     30     0           activity  85     64     21
Note. *Appears to the left or right of the key word within five words
Table 23
Collocation Table for Use

Prelesson Plan                      Postlesson Plan
Word      Total  Left*  Right*      Word         Total  Left*  Right*
student   25     23     2           student      185    139    46
fraction  21     13     8           technology   102    24     78
strategy  19     9      10          tool         44     9      35
word      19     3      16          lesson       42     29     13
follow    17     1      16          spreadsheet  39     12     27
model     17     0      17          create       34     18     16
question  17     0      17          information  33     12     21
rock      16     12     4           fraction     28     17     11
Note. *Appears to the left or right of the key word within five words
Table 24
Collocation Table for Video

Prelesson Plan                     Postlesson Plan
Word     Total  Left*  Right*      Word      Total  Left*  Right*
student  6      3      3           student   32     17     15
make     5      3      2           make      14     9      5
watch    5      5      0           watch     10     8      2
YouTube  4      1      3           BrainPOP  8      8      0
create   3      3      0           activity  7      5      2
need     3      2      1           need      7      4      3
show     3      2      1           use       7      3      4
watch    3      3      0           post      6      6      0
Note. *Appears to the left or right of the key word within five words
Table 25
Collocation Table for Computer

Prelesson Plan                         Postlesson Plan
Word         Total  Left*  Right*     Word       Total  Left*  Right*
use          6      5      1          lab        44     1      43
day          5      5      0          student    28     20     8
lab          5      0      5          use        24     20     4
group        4      3      1          time       10     8      2
access       3      1      2          work       9      5      4
cold         3      0      3          station    6      1      5
information  3      0      3          access     5      3      2
materials    3      2      1          classroom  5      4      1
Note. *Appears to the left or right of the key word within five words
Table 26
Collocation Table for Technology

Prelesson Plan                       Postlesson Plan
Word       Total  Left*  Right*     Word       Total  Left*  Right*
usage      5      0      5          student    85     38     47
use        5      4      1          use        75     56     19
concept    4      1      3          lesson     39     12     27
lesson     4      2      2          standard   32     6      26
plan       4      2      2          use        27     21     6
strand     4      2      2          integrate  25     23     2
create     3      2      1          classroom  24     4      20
implement  3      3      0          lesson     24     3      21
Note. *Appears to the left or right of the key word within five words
Co-occurrence networks were created with KH Coder for the pre- and postlesson
plan groups as seen in Figures 2 and 3. The unit of analysis for each group of lesson plans
was the paragraph, or more specifically, the networks display co-occurrences of words
within paragraphs. The sizes of nodes in the network were determined by the frequency
of the term in the lesson plan group and line thicknesses were determined by the Jaccard
similarity coefficient. The Jaccard coefficient quantifies how similar two words' patterns
of occurrence are (Romesburg, 1984). The communities were assigned different colors and
were determined by the fast greedy modularity algorithm (Clauset, Newman, & Moore,
2004). The network for the prelesson plans features six communities with more than three
nodes. The overall prelesson plan network suggests the lesson plans in this group were
focused on student activities and processes. Within the purple community, the most
frequent word, student, has only two strong connections. Furthermore, the purple
community describes writing that students share with partners. Similarly, the red
community describes fractions and the development of strategies for use with fractions.
The orange community describes a reading and storyboarding activity. The green
community is focused on student and teacher processes, while the teal community starts
with class processes that lead to a video activity.
The network for the postlesson plans features five communities with three or
more nodes. The network is dominated by the large teal community that is focused on the
relationships between students and their learning activities, teachers, and classroom. The
yellow community describes a learning experience with Tumblr. The purple group
describes an activity on fractions. The blue group is more focused on the instructional
approach than a specific activity. The green group describes the structure of a lesson plan
template. Overall, the postlesson plan network is focused less on activities and more on
students and student learning.
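For reference, for two words A and B, with P_A and P_B denoting the sets of paragraphs in which each word occurs, the Jaccard coefficient used for the edge weights takes the standard form J(A, B) = |P_A ∩ P_B| / |P_A ∪ P_B|, so an edge is thick when two words appear in many of the same paragraphs. The sketch below shows how a network of this kind could be assembled; the figures themselves were generated with KH Coder, so the library choice (networkx), the tokenization, and the edge threshold here are illustrative assumptions only.

```python
from itertools import combinations
import networkx as nx

def cooccurrence_network(paragraphs, min_jaccard=0.2):
    """Build a word co-occurrence network from tokenized paragraphs.

    Node attribute `frequency` mirrors node size; edge weights are
    paragraph-level Jaccard coefficients and mirror line thickness.
    """
    para_sets = {}                    # word -> ids of paragraphs containing it
    for i, tokens in enumerate(paragraphs):
        for word in set(tokens):
            para_sets.setdefault(word, set()).add(i)

    graph = nx.Graph()
    for word in para_sets:
        freq = sum(tokens.count(word) for tokens in paragraphs)
        graph.add_node(word, frequency=freq)

    for a, b in combinations(para_sets, 2):
        union = para_sets[a] | para_sets[b]
        jaccard = len(para_sets[a] & para_sets[b]) / len(union)
        if jaccard >= min_jaccard:
            graph.add_edge(a, b, weight=jaccard)

    # Communities via the fast greedy modularity algorithm
    # (Clauset, Newman, & Moore, 2004)
    communities = nx.community.greedy_modularity_communities(graph, weight="weight")
    return graph, communities
```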
Figure 2. Co-occurrence network of the prelesson plan corpus.
Figure 3. Co-occurrence network of the postlesson plan corpus.
TPACK Differences by Participant Characteristics
The analyses in this section were conducted to answer the question: How do
students with higher levels of TPACK differ from those with lower levels of TPACK?
Treatment group survey. Welch ANOVA F-tests were conducted to evaluate
differences in TPACK construct scores by gender on the pre- and postsurvey. The
independent variable, gender, included the two levels of male and female. The dependent
variables were the seven TPACK constructs. The presurvey F-test was significant for PK,
F(1, 25.42) = 11.07, p = 0.003; TK, F(1, 13.26) = 5.16, p = 0.04; CK, F(1, 18.91) = 5.53,
p = 0.03; and TPK, F(1, 27.46) = 5.03, p = 0.03. Females (n = 45) scored significantly
higher than males (n = 12) on PK (3.49 versus 2.89) and CK (3.48 versus 2.97), while
males scored significantly higher than females on TK (3.42 versus 2.61) and TPK (2.98
versus 2.56). No postsurvey scores were significantly different between males and
females. These results suggest preexisting differences between the genders may be
eliminated following participation in an educational technology course. See Table 27 for
the remainder of the Welch F-test statistics.
Table 27
Gender Differences in Pre- and Postsurvey TPACK Constructs

Construct   Welch's F  df      p
Pre-PK      11.065     25.420  0.003
Pre-TK      5.115      13.259  0.041
Pre-CK      5.533      18.913  0.030
Pre-PCK     1.535      17.646  0.232
Pre-TCK     3.432      31.269  0.073
Pre-TPK     5.030      27.456  0.033
Pre-TPACK   0.212      24.679  0.649
Post-PK     0.006      10.405  0.942
Post-TK     3.331      7.175   0.110
Post-CK     0.000      14.814  0.992
Post-PCK    0.022      17.996  0.883
Post-TCK    2.488      10.003  0.146
Post-TPK    0.363      9.290   0.561
Post-TPACK  1.246      14.144  0.283
Note. df values are the Welch-adjusted denominator degrees of freedom; the numerator df is 1 for all tests.
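For readers who wish to reproduce statistics of this form, Welch's one-way ANOVA can be implemented directly from its textbook formulas. The sketch below is illustrative; the study's analyses were presumably run in a standard statistics package, and the function name and example arrays are hypothetical.

```python
import numpy as np
from scipy import stats

def welch_anova(*groups):
    """Welch's one-way ANOVA for k groups with unequal variances.

    Returns the Welch F statistic, the numerator df (k - 1), the
    Welch-adjusted denominator df, and the p-value.
    """
    k = len(groups)
    n = np.array([len(g) for g in groups], dtype=float)
    means = np.array([np.mean(g) for g in groups])
    variances = np.array([np.var(g, ddof=1) for g in groups])
    w = n / variances                                 # precision weights
    grand_mean = np.sum(w * means) / np.sum(w)
    between = np.sum(w * (means - grand_mean) ** 2) / (k - 1)
    adj = np.sum((1 - w / np.sum(w)) ** 2 / (n - 1))
    f = between / (1 + 2 * (k - 2) * adj / (k ** 2 - 1))
    df1, df2 = k - 1, (k ** 2 - 1) / (3 * adj)
    return f, df1, df2, stats.f.sf(f, df1, df2)

# Hypothetical example mirroring the gender comparisons above:
# f, df1, df2, p = welch_anova(female_pk_scores, male_pk_scores)
```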
Welch ANOVA F-tests were conducted to evaluate differences in TPACK
construct scores by age group on the pre- and postsurvey. The independent variable, age,
included four levels: 21 to 29 (n = 33), 30 to 39 (n = 13), 40 to 49 (n = 5), and 50 to 59
(n = 6). The dependent variables were the seven TPACK constructs. Both the pre- and
postsurvey showed significant differences in TK, F(3, 13.26) = 5.40, p = 0.01 and F(3,
8.07) = 12.05, respectively. Follow-up tests were conducted to evaluate pairwise
differences among the mean scores using Dunnett’s C test. The test identified differences
between the presurvey mean scores of the 21 to 29 age group (2.94) and the 50 to 59 age
group (2.07) and the postsurvey mean scores of the same groups with 3.09 and 1.89,
respectively. These results suggest that younger participants report higher technological
ability than older participants.
Welch ANOVA F-tests were conducted to evaluate differences in TPACK
construct scores among respondents' self-reported knowledge of pedagogy on the
postsurvey (n = 39). The independent variable, knowledge of pedagogy, included four
levels: beginning (n = 4), intermediate (n = 19), advanced (n = 16), and expert (n = 0).
The dependent variables were the seven TPACK constructs. The tests were significant for
PK, F(2, 8.20) = 8.00, p = 0.01; CK, F(2, 8.15) = 10.61, p = 0.005; and PCK, F(2, 7.81)
= 9.14, p = 0.009. Follow-up tests were conducted to evaluate pairwise differences
among the mean scores using Dunnett’s C test. The test identified significant differences
in the mean scores among beginning, intermediate, and advanced groups on CK. The test
also identified significant differences in the mean scores between beginning and
advanced groups on PK and PCK. These results suggest that participants with
intermediate or advanced knowledge of pedagogy report higher ability than beginners on
PK, CK, and PCK. See Table 28 for the significant differences in mean scores.
Table 28
Mean Score Differences Among Knowledge of Pedagogy Groups on TPACK Construct Scores

Construct  Group 1       Group 2       Mean Difference
Post-PK    Beginning     Intermediate  -1.44
Post-PK    Beginning     Advanced      -1.58*
Post-PK    Intermediate  Beginning      1.44
Post-PK    Intermediate  Advanced      -0.14
Post-CK    Beginning     Intermediate  -1.86*
Post-CK    Beginning     Advanced      -2.10*
Post-CK    Intermediate  Beginning      1.86*
Post-CK    Intermediate  Advanced      -0.24
Post-TK    Beginning     Intermediate   0.01
Post-TK    Beginning     Advanced      -0.40
Post-TK    Intermediate  Beginning     -0.01
Post-TK    Intermediate  Advanced      -0.40
Post-PCK   Beginning     Intermediate  -1.41
Post-PCK   Beginning     Advanced      -1.63*
Post-PCK   Intermediate  Beginning      1.41
Note. *p < 0.05
Welch ANOVA F-tests were conducted to evaluate differences in TPACK
construct scores among respondents' self-reported knowledge of content on the postsurvey
(n = 39). The independent variable of knowledge of content included four levels:
beginning (n = 5), intermediate (n = 9), advanced (n = 23), and expert (n = 2). The
dependent variables were the seven TPACK constructs. The tests were not significant.
Welch ANOVA F-tests were conducted to evaluate differences in TPACK
construct scores among respondents’ self-reported knowledge level of educational
technology on the postsurvey (n = 39). The knowledge of educational technology
independent variable included four levels: beginning (n = 7), intermediate (n = 21),
advanced (n = 11), and expert (n = 0). The dependent variables were the seven TPACK
constructs. The tests were significant for TK, F(2, 17.59) = 15.12, p < 0.001; CK,
F(2, 13.19) = 4.35, p = 0.04; TCK, F(2, 13.87) = 8.05, p = 0.005; TPK, F(2, 14.80) =
13.63, p < 0.001; and TPACK, F(2, 14.95) = 8.77, p = 0.003. Follow-up tests were
conducted to evaluate pairwise differences among the mean scores using Dunnett's C test.
The test identified significant differences in the mean scores among the beginning,
intermediate, and advanced groups on TK, TCK, and TPACK. The test also identified
significant differences in the mean scores between the beginning and intermediate groups
on CK and the beginning and advanced groups on TPACK. These results suggest that
participants with intermediate or advanced knowledge of educational technology report
higher ability than beginners on CK, TK, TCK, and TPACK. See Table 29 for the
significant differences in
mean scores.
Table 29
Mean Score Differences Among Knowledge of Educational Technology Groups on TPACK Construct Scores

Construct   Group 1       Group 2       Mean Difference
Post-PK     Beginning     Intermediate  -0.87
Post-PK     Beginning     Advanced      -0.73
Post-PK     Intermediate  Beginning      0.87
Post-PK     Intermediate  Advanced       0.14
Post-CK     Beginning     Intermediate  -1.43*
Post-CK     Beginning     Advanced      -1.32
Post-CK     Intermediate  Beginning      1.43*
Post-CK     Intermediate  Advanced       0.11
Post-TK     Beginning     Intermediate  -1.17*
Post-TK     Beginning     Advanced      -1.68*
Post-TK     Intermediate  Beginning      1.17*
Post-TK     Intermediate  Advanced      -0.50
Post-PCK    Beginning     Intermediate  -0.93
Post-PCK    Beginning     Advanced      -0.82
Post-PCK    Intermediate  Beginning      0.93
Post-PCK    Intermediate  Advanced       0.11
Post-TCK    Beginning     Intermediate  -1.24*
Post-TCK    Beginning     Advanced      -1.34*
Post-TCK    Intermediate  Beginning      1.24*
Post-TCK    Intermediate  Advanced      -0.10
Post-TPK    Beginning     Intermediate  -1.35*
Post-TPK    Beginning     Advanced      -1.32*
Post-TPK    Intermediate  Beginning      1.35*
Post-TPK    Intermediate  Advanced       0.03
Post-TPACK  Beginning     Intermediate  -0.94
Post-TPACK  Beginning     Advanced      -1.31*
Post-TPACK  Intermediate  Beginning      0.94
Post-TPACK  Intermediate  Advanced      -0.37
Note. * The mean difference is significant at the 0.05 level.
Welch ANOVA F-tests were conducted to evaluate differences in TPACK
construct difference (pre minus post) scores among respondents' self-reported knowledge
of pedagogy (n = 39). The knowledge of pedagogy independent variable included four
levels: beginning (n = 4), intermediate (n = 19), advanced (n = 16), and expert (n = 0).
The dependent variables were the seven TPACK construct difference scores. The tests
were significant for CK, F(2, 9.91) = 7.21, p = 0.01, and PCK, F(2,
12.30) = 7.94, p = 0.006. Follow-up tests were conducted to evaluate pairwise differences
among the mean scores using Dunnett’s C test. The test identified significant differences
in the mean scores between beginning and advanced groups on CK (-1.23) and PCK
(-1.17). These results suggest that participants with advanced knowledge of pedagogy
have higher gains over time in CK and PCK than pedagogy beginners.
Welch ANOVA F-tests were conducted to evaluate differences in TPACK
construct difference (pre minus post) scores among respondents’ self-reported knowledge
of content on the postsurvey (n = 39). The independent variable of knowledge of content
included four levels: beginning (n = 5), intermediate (n = 9), advanced (n = 23), and
expert (n = 2). The dependent variables were the seven TPACK construct difference
scores. The test was significant for PCK, F(3, 5.29) = 5.50, p = 0.05. Follow-up tests
were conducted to evaluate pairwise differences among the mean scores using Dunnett's
C test. The test identified significant differences in the mean scores between the
beginning and advanced group on PCK (-0.92). These results suggest that participants
with advanced knowledge of content have higher gains over time in PCK than content
beginners.
Welch ANOVA F-tests were conducted to evaluate differences in TPACK
construct difference (pre minus post) scores among respondents’ self-reported knowledge
level of educational technology on the postsurvey (n = 39). The knowledge of educational
technology independent variable included four levels: beginning (n = 7), intermediate (n
= 21), advanced (n = 11), and expert (n = 0). The dependent variables were the seven
TPACK construct difference scores. The tests were significant for five of the seven
constructs: PK, F(2, 20.50) = 4.54, p = 0.02; CK, F(2, 18.58) = 4.86, p = 0.02; TCK, F(2,
17.58) = 4.56, p = 0.03; TPK, F(2, 19.23) = 5.45, p < 0.01; and TPACK, F(2, 18.68) =
3.94, p = 0.04. Follow-up tests were conducted to evaluate pairwise differences among
the mean scores using Dunnett’s C test. The test identified significant differences in the
mean scores between beginning and intermediate groups on PK (-0.71), CK
(-0.87), TCK (-0.83), and TPK (-0.79). These results suggest that participants with
intermediate knowledge of educational technology have higher gains over time in PK,
CK, TCK, and TPK than educational technology beginners.
Linear regression analyses were conducted to evaluate how years of teaching
experience predicted the seven TPACK construct scores on both the pre- and postsurvey.
The regression was significant for PK scores on the postsurvey, F(1, 50) = 5.20, p = 0.03,
R2 = 0.09. The remaining regression analyses were not significant. These results suggest
that participants with more years of teaching tend to have higher reported ability in PK,
but not in any of the other TPACK constructs.
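A simple regression of this kind can be expressed in one call with SciPy; the arrays below are hypothetical stand-ins for the study's variables.

```python
from scipy import stats

# Hypothetical data: years of teaching experience and postsurvey PK scores.
years_teaching = [2, 5, 8, 12, 3, 20]
post_pk_scores = [3.1, 3.4, 3.6, 3.9, 3.0, 4.2]

result = stats.linregress(years_teaching, post_pk_scores)
print(result.slope, result.pvalue, result.rvalue ** 2)  # R2 is the squared correlation
```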
Multiple regression analyses were conducted to evaluate how well the number of
credit hours related to pedagogy, content, and technology predicted the seven TPACK
constructs on the pre- and postsurvey. The predictors were the number of pedagogyrelated credit hours completed, the number of content-related credit hours completed, and
the number of technology-related content hours completed. The linear combination of
completed credit hours was significantly related to the PK score on the presurvey, F(3,
29) = 4.26, p = 0.01. The R2 coefficient was 0.31, which indicates that approximately 31%
of the variance in the PK score on the presurvey can be accounted for by the linear
combination of credit hours taken. Content credit hours, t(31) = -2.32, p = 0.03 and
technology credit hours, t(31) = 3.18, p = 0.003, made significant contributions to the
prediction equation while pedagogy credit hours did not, t(31) = 1.33, p = 0.19. Also of
note is the unexpected negative correlation between content credit hours and PK score on
the presurvey and the negative standardized Beta coefficient (-0.45) in the prediction
equation. Table 30 presents indices to indicate the relative strength of the individual
predictors. These results suggest that participants who took more credit hours in content
and technology report higher PK ability on the presurvey.
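As a point of reference for the standardized Beta coefficient mentioned above: if b is the raw regression coefficient and s_x and s_y are the sample standard deviations of the predictor and the outcome, then Beta = b(s_x / s_y). This conversion is a standard identity rather than a detail reported by the study; a negative Beta simply indicates that the predictor and the outcome move in opposite directions once the other predictors are held constant.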
Table 30
Bivariate and Partial Correlations of the Predictors with PK Presurvey Score

Predictors               Correlation  Partial Correlation
Pedagogy credit hours    0.14         0.24
Content credit hours     -0.10        -0.40*
Technology credit hours  0.42**       0.51**
Note. * p < 0.05, ** p < 0.01
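A multiple regression of this form could be reproduced along the following lines, assuming a data frame with one row per participant; the file and column names are hypothetical, not the study's actual variables.

```python
import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("presurvey.csv")  # hypothetical participant-level data
predictors = df[["pedagogy_hours", "content_hours", "technology_hours"]]
model = sm.OLS(df["pk_score"], sm.add_constant(predictors)).fit()
print(model.summary())  # overall F test, R2, and per-predictor t tests
```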
The linear combination of completed credit hours was not significantly related to
the CK score on the presurvey, F(3,29) = 2.11, p = 0.12. However, technology credit
hours made a significant contribution to the prediction equation, t(31) = 2.07, p = 0.05,
and was significantly correlated, r(31) = 0.30, p = 0.05, with CK scores on the presurvey.
These results suggest that participants who took more credit hours in technology report
higher CK ability on the presurvey.
The linear combination of completed credit hours was significantly related to the
PCK score on the presurvey, F(3, 29) = 5.53, p = 0.004. The R2 coefficient was 0.36,
which indicates that approximately 36% of the variance in the PCK score on the
presurvey can be accounted for by the linear combination of the credit hours taken.
Pedagogy credit hours, t(31) = 3.17, p = 0.004, and technology credit hours, t(31) = 2.22,
p = 0.04, made significant contributions to the prediction equation while content credit
hours did not, t(31) = -1.65, p = 0.11. Table 31 presents indices to indicate the relative
strength of the individual predictors. These results suggest that participants who took
more credit hours in pedagogy and technology report higher PCK ability on the
presurvey.
Table 31
Bivariate and Partial Correlations of the Predictors with PCK Presurvey Score

Predictors               Correlation  Partial Correlation
Pedagogy credit hours    0.49**       0.51**
Content credit hours     0.14         -0.29
Technology credit hours  0.38*        0.38*
Note. * p < 0.05, ** p < 0.01
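The partial correlations reported in Tables 30 through 35 can be obtained by the residual method: regress both the outcome and the focal predictor on the remaining predictors, then correlate the two sets of residuals. A minimal NumPy sketch, with the function name chosen for illustration:

```python
import numpy as np

def partial_corr(y, x, controls):
    """Correlation between y and x after removing the linear influence
    of the control variables from both."""
    Z = np.column_stack([np.ones(len(y)), controls])
    resid_y = y - Z @ np.linalg.lstsq(Z, y, rcond=None)[0]
    resid_x = x - Z @ np.linalg.lstsq(Z, x, rcond=None)[0]
    return np.corrcoef(resid_y, resid_x)[0, 1]
```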
The linear combination of completed credit hours was significantly related to the
TPACK score on the presurvey, F(3,29) = 4.70, p = 0.009. The R2 coefficient was 0.33,
which indicates that approximately 33% of the variance in the TPACK score on the
presurvey can be accounted for by the linear combination of the credit hours taken.
Pedagogy credit hours, t(31) = 3.14, p = 0.004, and content credit hours, t(31) = -2.72, p
= 0.01, made significant contributions to the prediction equation while technology credit
hours did not, t(31) = 1.80, p = 0.08. Table 32 presents indices to indicate the relative
strength of the individual predictors. These results suggest that participants who took
more credit hours in pedagogy and content reported higher TPACK ability on the
presurvey.
Table 32
Bivariate and Partial Correlations of the Predictors with TPACK Presurvey Score

Predictors               Correlation  Partial Correlation
Pedagogy credit hours    0.37*        0.50**
Content credit hours     -0.10        -0.45*
Technology credit hours  0.23         0.32
Note. * p < 0.05, ** p < 0.01
The linear combination of completed credit hours was significantly related to the
PK score on the postsurvey, F(3, 29) = 3.93, p = 0.02. The R2 coefficient was 0.28, which
indicates that approximately 28% of the variance in the PK score on the postsurvey can
be accounted for by the linear combination of credit hours taken. Technology credit
hours, t(31) = 3.32, p = 0.002, made a significant contribution to the prediction equation
while pedagogy credit hours, t(31) = 0.55, p = 0.59, and content credit hours, t(31) = 1.72, p = 0.10 did not. Table 33 presents indices to indicate the relative strength of the
individual predictors. These results suggest that participants who took more credit hours
in technology reported higher PK ability on the postsurvey.
Table 33
Bivariate and Partial Correlations of the Predictors with PK Postsurvey Score

Predictors               Correlation  Partial Correlation
Pedagogy credit hours    0.09         0.10
Content credit hours     -0.05        -0.30
Technology credit hours  0.46**       0.52**
Note. * p < 0.05, ** p < 0.01
The linear combination of completed credit hours was significantly related to the
CK score on the postsurvey, F(3, 29) = 3.18, p = 0.04. The R2 coefficient was 0.24, which
indicates that approximately 24% of the variance in the CK score on the postsurvey can
be accounted for by the linear combination of credit hours taken. Technology credit
hours, t(31) = 2.79, p = 0.009, made a significant contribution to the prediction equation
while pedagogy credit hours, t(31) = 0.99, p = 0.33, and content credit hours, t(31) =
-1.87 did not. Table 34 presents indices to indicate the relative strength of the individual
predictors. These results suggest that participants who took more credit hours in
technology reported higher CK ability on the postsurvey.
Table 34
Bivariate and Partial Correlations of the Predictors with CK Postsurvey Score

Predictors               Correlation  Partial Correlation
Pedagogy credit hours    0.13         0.18
Content credit hours     -0.07        -0.32
Technology credit hours  0.39*        0.45**
Note. * p < 0.05, ** p < 0.01
The linear combination of completed credit hours was not significantly related to
the PCK score on the postsurvey, F(3,29) = 2.30, p = 0.10. However, technology credit
hours made a significant contribution to the prediction equation, t(31) = 2.19, p = 0.04,
and was significantly correlated, r(31) = 0.33, p = 0.03, with PCK scores on the
postsurvey. These results suggest that participants who took more credit hours in
technology reported higher PCK ability on the postsurvey.
The linear combination of completed credit hours was significantly related to the
TPACK score on the postsurvey, F(3,29) = 2.94, p = 0.05. The R2 coefficient was 0.23,
which indicates that approximately 23% of the variance in the TPACK score on the
postsurvey can be accounted for by the linear combination of the credit hours taken.
Content credit hours, t(31) = -2.63, p = 0.01, and technology credit hours, t(31) = 2.04, p
= 0.05, made significant contributions to the prediction equation while pedagogy credit
hours did not, t(31) = 1.35, p = 0.19. Table 35 presents indices to indicate the relative
strength of the individual predictors. These results suggest that participants who took
more credit hours in content and technology reported higher TPACK ability on the
postsurvey.
Table 35
Bivariate and Partial Correlations of the Predictors with TPACK Postsurvey Score

Predictors               Correlation  Partial Correlation
Pedagogy credit hours    0.09         0.24
Content credit hours     -0.24        -0.43*
Technology credit hours  0.22         0.35*
Note. * p < 0.05, ** p < 0.01
Multiple regression analyses were conducted to evaluate how well the number of
hours per day spent using technology predicted the seven TPACK constructs on the
presurvey. The predictors were the number of hours per day spent using technology for
professional purposes and the number of hours per day spent using technology for
personal purposes. The regression equation with the presurvey PK score criterion variable
was significant, F(2, 33) = 4.38, p = 0.02. The R2 coefficient was 0.21, which indicates
that approximately 21% of the variance in the PK score on the presurvey can be
accounted for by the linear combination of hours per day spent using technology.
Professional hours spent, t(33) = 2.96, p = 0.006, made a significant contribution to the
prediction equation while personal hours did not, t(33) = -0.75, p = 0.46. These results
suggest that participants who spent more hours per day using technology for professional
purposes reported higher PK ability on the presurvey.
The regression equation with the presurvey CK score criterion variable was
significant, F(2, 33) = 4.00, p = 0.03. The R2 coefficient was 0.20, which indicates that
approximately 20% of the variance in the CK score on the presurvey can be accounted
for by the linear combination of hours per day spent using technology. Professional hours
spent, t(33) = 2.79, p = 0.009, made a significant contribution to the prediction equation
while personal hours did not, t(31) = -1.17, p = 0.25. These results suggest that
participants who spent more hours per day using technology for professional purposes
reported higher CK ability on the presurvey.
The regression equation with the presurvey PCK score criterion variable was
significant, F(2, 33) = 1.52, p = 0.02. The R2 coefficient was 0.21, which indicates that
approximately 21% of the variance in the PCK score on the presurvey can be accounted
for by the linear combination of hours per day spent using technology. Professional hours
spent, t(33) = 2.77, p = 0.009, made a significant contribution to the prediction equation
while personal hours did not, t(33) = -1.78, p = 0.09. These results suggest that
participants who spent more hours per day using technology for professional purposes
reported higher PCK ability on the presurvey.
The regression equation with the presurvey TCK score criterion variable was
significant, F(2, 33) = 4.11, p = 0.03. The R2 coefficient was 0.20, which indicates that
approximately 20% of the variance in the TCK score on the presurvey can be accounted
for by the linear combination of hours per day spent using technology. Professional hours
spent, t(33) = 2.72, p = 0.01, made a significant contribution to the prediction equation
while personal hours did not, t(33) = 0.22, p = 0.83. These results suggest that participants
who spent more hours per day using technology for professional purposes reported higher
TCK ability on the presurvey.
The linear combination of hours spent using technology personally and
professionally was not significantly related to the TPK score on the presurvey, F(2, 33) =
2.90, p = 0.07. However, professional hours spent made a significant contribution to the
prediction equation, t(31) = 2.33, p = 0.03, and was significantly correlated, r(33) = 0.37, p
= 0.01, with TPK scores on the presurvey. These results suggest that participants who
spent more hours per day using technology for professional purposes reported higher
TPK ability on the pre-survey.
The linear combination of hours spent using technology personally and
professionally was not significantly related to the TPACK score on the presurvey, F(2,
33) = 2.70, p = 0.08. However, professional hours spent made a significant contribution
to the prediction equation, t(33) = 2.29, p = 0.03, and was significantly correlated, r(33) =
0.37, p = 0.01, with TPACK scores on the presurvey. These results suggest that
participants who spent more hours per day using technology for professional purposes
reported higher TPACK ability on the presurvey.
The linear combination of hours spent using technology personally and
professionally was not significantly related to the PK score on the postsurvey, F(2, 34) =
2.25, p = 0.12. However, professional hours spent made a significant contribution to the
prediction equation, t(34) = 2.11, p = 0.04, and was significantly correlated, r(34) = 0.34,
p = 0.02, with PK scores on the postsurvey. These results suggest that participants
who spent more hours per day using technology for professional purposes reported higher
PK ability on the postsurvey.
The linear combination of hours spent using technology personally and
professionally was not significantly related to the TK score on the postsurvey, F(2, 34) =
2.73, p = 0.08. Neither predictor made a significant contribution to the prediction
equation; however, both professional time spent, r(34) = 0.31, p = 0.03, and personal time
spent, r(34) = 0.27, p = 0.05, were significantly correlated with TK scores on the
postsurvey. Although the linear combination of the two predictors did not significantly
predict TK scores, these results suggest that participants who spent more hours using
technology professionally or personally tended to report higher TK ability on the
postsurvey.
The regression equation with the postsurvey CK score criterion variable was
significant, F(2, 34) = 4.64, p = 0.02. The R2 coefficient was 0.22, which indicates that
approximately 22% of the variance in the CK score on the postsurvey can be accounted
for by the linear combination of hours per day spent using technology. Professional hours
spent, t(34) = 3.02, p = 0.005, made a significant contribution to the prediction equation
while personal hours did not, t(34) = -0.25, p = 0.80. These results suggest that participants
who spent more hours per day using technology for professional purposes reported higher
CK ability on the postsurvey.
The linear combination of hours spent using technology personally and
professionally was not significantly related to the PCK score on the postsurvey, F(2, 34)
= 2.42, p = 0.10. However, professional hours spent made a significant contribution to the
prediction equation, t(34) = 2.14, p = 0.04, and was significantly correlated, r(34) = 0.35,
p = 0.02, with PCK scores on the postsurvey. These results suggest that participants who
spent more hours per day using technology for professional purposes reported higher
PCK ability on the postsurvey.
The regression equation with the postsurvey TCK score criterion variable was
significant, F(2, 34) = 6.81, p = 0.003. The R2 coefficient was 0.29, which indicates that
approximately 29% of the variance in the TCK score on the postsurvey can be accounted
for by the linear combination of hours per day spent using technology. Professional hours
spent, t(34) = 3.37, p = 0.002, made a significant contribution to the prediction equation
while personal hours did not, t(34) = 0.73, p = 0.47. These results suggest that
participants who spent more hours per day using technology for professional purposes
reported higher TCK ability on the postsurvey.
The regression equation with the postsurvey TPK score criterion variable was
significant, F(2, 34) = 4.56, p = 0.02. The R2 coefficient was 0.21, which indicates that
approximately 21% of the variance in the TPK score on the postsurvey can be accounted
for by the linear combination of hours per day spent using technology. Professional hours
spent, t(34) = 2.77, p = 0.009, made a significant contribution to the prediction equation
while personal hours did not, t(34) = 0.58, p = 0.57. These results suggest that
participants who spent more hours per day using technology for professional purposes
reported higher TPK ability on the postsurvey.
The regression equation with the postsurvey TPACK score criterion variable was
significant, F(2, 34) = 4.25, p = 0.02. The R2 coefficient was 0.20, which indicates that
approximately 20% of the variance in the TPACK score on the postsurvey can be
accounted for by the linear combination of hours per day spent using technology.
Professional hours spent, t(34) = 2.80, p = 0.008, made a significant contribution to the
prediction equation while personal hours did not, t(34) = 0.18, p = 0.86. These results
suggest that participants who spent more hours per day using technology for professional
purposes reported higher TPACK ability on the postsurvey.
Lesson plan rubric scores. Welch ANOVA F-tests were conducted to evaluate
differences in TPACK construct scores by gender on the pre- and postcourse rubric. The
independent variable gender included the two levels of male and female. The dependent
variables were the seven TPACK constructs. The precourse F-test was significant for TK,
F(1, 30.26) = 10.54, p = 0.003; TPK, F(1, 27.48) = 7.00, p = 0.01; TCK, F(1, 22.00) =
12.14, p = 0.002; and TPACK, F(1, 19.63) = 11.05, p = 0.003. Females (n = 26) scored
significantly higher than males (n = 7) on TK (2.17 versus 1.36), TPK (2.11 versus 1.41),
TCK (2.74 versus 1.61), and TPACK (2.63 versus 1.61). No postcourse rubric scores
were significantly different between males and females. These results suggest preexisting
differences between the genders may be eliminated following participation in an
educational technology course. See Table 36 for the remainder of the Welch F-test
statistics.
Table 36
Gender Differences in Pre- and Postlesson Plan Rubric Construct Scores

Construct   Welch's F  df1, df2  p
Pre-PK      1.82       1, 8.82   0.21
Pre-CK      1.20       1, 9.16   0.30
Pre-TK      10.53      1, 30.26  0.00
Pre-PCK     0.25       1, 7.90   0.63
Pre-TCK     12.14      1, 22.00  0.00
Pre-TPK     6.99       1, 27.48  0.01
Pre-TPACK   11.05      1, 19.63  0.00
Post-PK     3.45       1, 11.19  0.09
Post-TK     0.06       1, 13.53  0.81
Post-CK     0.40       1, 7.81   0.54
Post-PCK    1.28       1, 8.00   0.29
Post-TCK    0.45       1, 7.76   0.52
Post-TPK    0.88       1, 12.23  0.37
Post-TPACK  0.01       1, 8.28   0.94
Welch ANOVA F-tests were conducted to evaluate differences in pre- and
postcourse rubric construct scores among respondents’ self-reported knowledge of
pedagogy (n = 34). The independent variable of knowledge of pedagogy included four
levels: beginning (n = 4), intermediate (n = 15), advanced (n = 15), and expert (n = 0).
The dependent variables were the seven TPACK constructs. None of the tests were
significant for the precourse rubric constructs. The test was significant for the postcourse
rubric TK score, F(2, 17.12) = 8.43, p = 0.003. Follow-up tests were conducted to
evaluate pairwise differences among the mean scores using Dunnett’s C test. The test
identified significant differences in the mean TK scores between the beginning and
advanced groups (-1.12). These results suggest that participants with advanced
knowledge of pedagogy demonstrate higher TK proficiency than beginners.
Welch ANOVA F-tests were conducted to evaluate differences in pre- and
postcourse rubric construct scores among respondents’ self-reported knowledge of
content (n = 34) and educational technology (n = 34). The independent variable of
knowledge of content included four levels: beginning (n = 4), intermediate (n = 9),
advanced (n = 19), and expert (n = 2). The independent variable of knowledge of
educational technology included four levels: beginning (n = 7), intermediate (n = 17),
advanced (n = 10), and expert (n = 0). The dependent variables were the seven TPACK
constructs. None of the tests were significant for knowledge of content. For knowledge of
educational technology, the tests were significant for postcourse rubric scores on PK,
F(2, 16.63) = 7.65, p = 0.004, and CK, F(2, 15.09) = 4.16, p = 0.04. Follow-up tests were
conducted to evaluate pairwise differences among the mean scores using Dunnett’s C
test. The test identified significant differences in the mean scores between the beginning
and intermediate groups for PK (-0.92) and CK (-0.91). These results suggest that
participants with intermediate knowledge of educational technology demonstrate higher
PK and CK proficiency than beginners.
Linear regression analyses were conducted to evaluate how years of teaching
experience predicted the seven TPACK construct scores on both the pre- and postcourse
rubric scores. The regression was significant for TK scores on the precourse rubric, F(1,
29) = 4.65, p = 0.04, R2 = 0.14. The remaining regression analyses were not significant.
These results suggest that participants with more teaching experience demonstrated
higher TK proficiency on the precourse lesson plan than those with less teaching
experience.
In addition, multiple regression analyses were conducted to evaluate how well the
number of pedagogy, content, and technology-related credit hours predicted the seven
TPACK constructs on the pre- and postcourse rubric scores. The predictors were the
number of pedagogy-related credit hours completed, the number of content-related credit
hours completed, and the number of technology-related credit hours completed. The
linear combination of completed credit hours was not significantly related to any of the
TPACK construct scores. These results suggest the number of credit hours taken did not
affect the demonstration of TPACK construct proficiency in lesson plans.
Multiple regression analyses were conducted to evaluate how well the number of
hours per day spent using technology predicted the seven TPACK constructs from the
pre- and postcourse rubric. The predictors were the number of hours per day spent using
technology for professional purposes and the number of hours per day spent using
technology for personal purposes. The regression equation with the precourse rubric CK
score criterion variable was significant, F(2, 25) = 4.78, p = 0.02. The R2 coefficient was
0.28, which indicates that approximately 28% of the variance in the CK score on the
precourse rubric can be accounted for by the linear combination of hours per day spent
using technology. Personal hours spent, t(25) = 2.76, p = 0.01, made a significant
contribution to the prediction equation. These results suggest that participants who spent
more hours per day using technology for personal purposes demonstrated higher CK
proficiency on the precourse lesson plan.
Teaching and Learning Experiences
The analyses in this section were conducted to answer two questions: 1) How do
students with higher levels of TPACK differ from those with lower levels of TPACK?
and 2) What teaching and learning experiences influenced students’ knowledge of and
practice in teaching?
Interview participants were purposefully selected based on their overall presurvey
mean score, which ranged from 1.26 to 3.74. Participants were categorized into low (n =
3), middle (n = 5), and high (n = 3) scoring groups for analysis. Inductive data analysis
was performed on each of the 11 interview transcripts by first employing open coding
and then axial coding to develop codes, categories, and themes (Lewins & Silver, 2007).
During the open coding phase, descriptive codes were developed based on the emerging
information from the interview transcripts. During the axial coding phase, the descriptive
codes were grouped into categories and themes based on the TPACK framework. The
results from the three groups are organized by theme, which are defined by the core
constructs in TPACK: pedagogy, content, and technology.
Low group. The low-scoring group included Christa, a substitute kindergarten
through twelfth grade (K-12) teacher with 15 years of part-time teaching experience;
Keanu, a communications lecturer at the undergraduate level with 10 years of teaching
experience; and Tatiana, a substitute middle school teacher with one year of part-time
teaching experience. Christa had the lowest mean presurvey score of any study
participant with a 1.26. Keanu had a presurvey score of 2.71 and Tatiana had a presurvey
score of 2.89.
Pedagogy. Christa and Tatiana described their typical lessons in broad strokes,
which included warm-up activities, presenting notes, small group work, responding to
student questions, and students completing homework in class. Keanu described the
PowerPoint slides and video links that he posts in his Blackboard course site.
Furthermore, the group described their lesson planning influences that included internet
resources, textbooks, and program goals and objectives. While Tatiana said that her
mentor teacher during her student teaching experience had an influence on her lesson
planning, Christa stated that full-time teachers were not willing to share ideas with her as
a substitute.
Christa was not confident in her knowledge of effective teaching methods
although she did suggest that student collaboration in small groups and project-based
activities were effective. Keanu and Tatiana also said that project-based learning was an
effective method when compared with passive learning methods. Tatiana thought that
one-on-one instruction was the most effective method, whether the pairing was
teacher-to-student or student-to-student. The group learned about effective teaching methods from
their graduate courses, from the internet, and from their student teaching experience. The
group found that these methods were effective by testing them out in the classroom, but
only Keanu mentioned that the methods were effective as judged by student evaluations
and assessment scores.
Content. The group was drawn to their respective content areas through their
experiences in middle school or junior high school. Significant influences or resources in
their content areas included online resources, teacher observations, professional
development, and college content instructors.
Technology. Outside of teaching, the group said that they enjoyed using
technology in personal and work contexts. They largely learned to use those technologies
through individual practice or trial and error; however, they learned to use educational
technologies through formal coursework. The group agreed that teaching with technology
was a beneficial practice. In particular, synchronous communication technologies and
technologies that students could access outside of class could enrich students’ experience.
They did have some practical concerns regarding a lack of access to technology and were
skeptical of the value of educational games. Although they held these opinions, only
Keanu had implemented technology in his teaching. He had taken advantage of
presentation technologies including slides, video, and animations. Keanu’s graduate level
content work influenced his use of technology because it fit with the content of the course
and the theories used in the course.
Middle group. The mid-scoring group included the following participants:
Karina, a sixth grade math and science teacher with three years of experience; Erica, a
seventh grade social studies teacher with six years of experience; Alana, a math tutor and
substitute teacher for grades kindergarten through ninth with four and a half years of
experience as a tutor and two years of part-time teaching experience; Maya, a Title I math
teacher for grades kindergarten through eighth with six years of experience; and Sally, a
current online teacher trainer and former history teacher for grades 10 and 11 with three
years of experience. Karina had the lowest mean presurvey score of the mid group with a
2.90. Erica had a presurvey score of 2.93, Alana had a presurvey score of 2.89, Maya
had a presurvey score of 3.23, and Sally had a presurvey score of 3.24.
Pedagogy. Typical lessons for Karina, Erica, and Maya featured warm-up
activities or assessments, presentations or explorations, independent or small group work,
partner questions or class discussion, and a summary activity or an essential question to
close the lesson. Their lessons were influenced by school-adopted curriculum programs
or curriculum programs learned through undergraduate and graduate coursework. As a
substitute teacher, Alana described English students watching content-related videos and
completing online vocabulary activities, while history students read and answered
questions and math students completed math worksheets. Alana's lesson planning was
influenced by her mentor teacher’s use of the “I do, we do, you do” model. Sally worked
with online teachers individually and did not have set lesson activities, although her
actions in that context were influenced by her online graduate courses.
The group discussed a variety of effective teaching methods including structured
discussion, partner questioning, small group instruction, and accessible online materials
with direct access to an instructor. Aside from Sally, the group learned their effective
methods through school professional development sessions and from their mentor
teachers in their student teaching experiences. Sally learned her online skills through her
graduate coursework. The group found that these methods were effective by testing them
out in the classroom. Only Erica mentioned that the methods were effective as judged by
individual student assessment scores and class mastery goals.
Content. Major influences on the groups’ content knowledge included
professional development, content-specific resources, mentor teachers and teacher
colleagues, formal courses, and content-specific curriculum programs. The group was
largely drawn to their content area through a long-standing personal interest in the
subject. Only Karina came to her content area by accident when her school needed a math
teacher.
Technology. Outside of teaching, the group said that they enjoyed using
technology in personal and work contexts. They largely learned to use those technologies
through individual practice or trial and error. Only Karina and Alana said that they
learned to use technologies through professional development or formal coursework.
While teaching with technology was seen by the group in a positive light, they stressed
the importance of appropriate use. The group saw educational technology as a way to
engage students and enable individualized instruction, but warned that technology should
complement, not replace teaching. Erica said that education is too quick to follow
technology trends despite a lack of evidence to prove effectiveness. In pursuit of a grant
that provided a class with a set of iPads, Erica's secondary research suggested iPads were
not as effective as laptops due to the quality of applications (or apps) available on the device.
The group developed their opinions on teaching with technology through
coursework, professional development, and experience. Maya said that educational
technology was not a focus in her undergraduate teacher education program. She felt she
was not implementing technology effectively until she completed a graduate course in
educational technology. The group’s initial integrations of technology involved the use of
presentation technologies like PowerPoint and document cameras, as well as
self-contained, content-specific practice software like Plato. The group also used technology
to gain student attention and organize information. They were influenced to take teaching
with technology further by professional development sessions, graduate coursework in
educational technology, and observations of other teachers.
High group. The high-scoring group included the following participants: John, a
current achievement advisor for kindergarten and first grade for one school year and
former math and science teacher for grades 7 and 8 for six years; Malia, a world studies
and government teacher for grades 10 and 12 with four years of experience; and Laura, a
sixth grade online Earth science teacher with nine years of experience. John had the
lowest mean presurvey score of the high group with a 3.35. Malia had a presurvey score
of 3.40 and Laura had a presurvey score of 3.73.
Pedagogy. A typical lesson for John and Malia included a warm-up activity,
direct instruction or a demonstration, and independent work or an application activity.
The two often integrated a SMART Board, PowerPoint slides, and a student response
system into their lessons. As an online teacher, Laura conducted synchronous online
sessions that complement offline textbook lessons. First she reviewed previous material,
then previewed and discussed current material. She designed lessons to engage
higher-order thinking skills with methods connected to work by Robert J. Marzano and
Benjamin S. Bloom. John's lessons conformed to the district-mandated lesson structure,
while Malia’s lessons were influenced by teaching experience and observing other
teachers and professors. Laura's lessons were influenced by the Common Core standards
and structured around established teaching frameworks and strategies.
The group’s most effective teaching methods included small-group discovery
learning, modeling, and generating hypotheses. The group said that giving students
responsibility for their learning and enabling students to be creative were also effective
methods. The group learned their effective teaching methods through coursework,
professional development, and teaching experience. John and Malia said that they found
their methods to be effective because students were more engaged in learning. Laura
chose her methods based on evidence of effectiveness from educational research studies.
Content. The group’s content knowledge was most influenced by formal
undergraduate and graduate coursework, workshops, and independent study. John said
that collaborating with other math and science teachers during shared planning time was
also an influence. The group members said they all had long-standing personal interest in
their content areas.
Technology. Outside of teaching, the group said that they enjoyed using
technology in personal and work contexts. They largely learned to use those technologies
through individual practice or trial and error. All the members said that teachers should
use technology to teach and should expose students to technology. John said that teachers
should not have the choice to avoid teaching with technology because they were afraid to
use it or were uncomfortable using it. He suggested that wary teachers start out
completing administrative tasks to become more comfortable with technology. Malia and
Laura said that, while beneficial, technology could be distracting and was not appropriate
for all occasions. Malia witnessed students who were distracted from the learning
objective because they were struggling with the technology. Laura had seen how digital
representations could help students learn science concepts, but if the technology became
a barrier she advised her students to use paper and pencil in appropriate situations.
The group’s initial integrations of technology involved the use of early web pages
as resources and presentation technologies like ActivBoard, ActivInspire, and
PowerPoint. They decided to integrate those technologies because they saw the
technology was effective with other teachers and made lectures more engaging. John said
that his first attempts at technology integration were simplistic, but they became more
advanced and interactive with practice. John and Malia said that their current technology
practices were influenced by their graduate coursework. Additionally, Malia was guided
by the belief that students need to learn to use technologies to be productive citizens.
Laura’s practice is influenced by colleagues, workshops, and through regular reading of
educational technology sites like Edutopia and Mind/Shift.
Chapter 4
DISCUSSION
Discussion of the Main Purpose
The purpose of this study was to investigate the initial shape of Technological
Pedagogical Content Knowledge (TPACK) in graduate teacher education students and
the influence of an educational technology course on the malleability of students’
TPACK as measured by perceived ability, artifacts of practice, and personal experience.
Table 37, a joint display, presents findings from the quantitative and qualitative phases of
this study so that they are easily compared. This chapter will explore these and other
findings along with their implications, limitations, and related directions for future
research.
Table 37
Quantitative and Qualitative Phase: Joint Display

Method        Data Set                      PK     CK      TK      PCK    TPK     TCK     TPACK
Quantitative  Treatment group survey gains  0.25*  0.39**  0.38*   0.26*  0.68**  0.67**  0.45**
              Control group survey gains†   -0.05  -0.18   -0.16   0.16   0.28    0.14    0.16
              Rubric gains                  -0.24  -0.31   1.30**  -0.01  1.28**  0.84**  0.64*
              Lesson plan text: Postcourse lesson plans have a higher percentage of words related to cognitive processes and achievement than precourse lesson plans. Both pre- and postcourse lesson plans share similar frequently used words, including the most frequently used technology-related words.
Qualitative   Interviews: Participants with higher levels of TPACK elaborate on pedagogical practices, participate in opportunities to improve their practice, and have a positive, yet nuanced perspective about teaching with technology. All interview participants were influenced by coursework, mentor teachers, professional development, internet resources, curricular resources, and content-area standards.
Note. *p < 0.05, **p < 0.01. † Participants not enrolled in an educational technology course.
What is the level of TPACK, reported and demonstrated, among graduate
teacher education students? The presurvey mean scores indicate that the graduate
students in the treatment group were confident in their ability to produce lessons in their
content area, vary their teaching strategies, and plan the sequence of topics in their
content area. Furthermore, they were confident in their ability to adjust their methods
based on student performance, assist students in identifying conceptual connections in
their content area, and anticipate misconceptions students may have about content-area
concepts. In fact, the 10 highest mean item scores on the presurvey belonged to items
related to PK, CK, and PCK, the same three TPACK constructs with the highest subscale
scores on the presurvey. Knowledge of teaching processes, methods, and strategies (PK),
knowledge of subject matter (CK), and knowledge of teaching processes, methods, and
strategies specific to a subject matter (PCK) are constructs key to teaching in traditional
contexts. It is not surprising, then, that the treatment group
reported a high ability on these items and constructs, because the group mostly comprised
teachers with an average of 5.3 years of teaching experience, which is beyond the four
years of experience at which gains in effectiveness tend to flatten out (Center for
Education Policy Research, 2010).
The presurvey mean scores, however, did not indicate that the treatment group
was confident in encouraging student interactivity with technology (TPK), moderating
web-based student interactions (TPK), or predicting students’ skills or understanding of a
topic with technology (TPACK). Furthermore, they were wary about assisting students
with troubleshooting technology (TK), integrating web-based courseware or applications
(TCK), and troubleshooting hardware issues on their own (TK). The low confidence in
the preceding items was also reflected in the low construct mean scores of TPK, TK, and
TPACK. These low construct scores support the hypothesis that there are teachers who
continue to lack the knowledge and ability to work with various technologies (TK), to
teach with technology (TPK), or to teach a specific subject matter with technology
(TPACK).
When compared to the treatment group, the presurvey mean scores for the control
group indicate that the control group was confident in their abilities in similar areas. The
control group reported high ability in planning the scope and sequence of topics in their
content (CK) and in producing lessons in their content area (PCK). They also shared their
confidence in their ability to adjust their methods based on student performance (PK).
Unlike the treatment group, the control group did report higher ability in using
technological representations to demonstrate content area concepts (TCK) and in
addressing software issues (TK). The control group differentiated itself from the
treatment group in that its highest scoring construct was TK. The control group shared
two high-scoring constructs with the treatment group, PK and CK. As with the treatment
group, the majority of the control group members were teachers, with an average of
4.6 years of teaching experience, so the high reported ability in PK and CK is not
surprising. The high reported ability in TK may be explained by members of the group
who responded that they were enrolled in one or more educational technology courses
during the course of the study. It may be that these members had more experience or
training with technology than did the members of the treatment group.
The precourse lesson plan rubric mean scores indicate that the graduate students
in the treatment group prepared lesson plans that included evidence of content knowledge
(CK), clear lesson objectives (CK), appropriate teaching strategies (PCK), and
meaningful content (PK). Just as with the presurvey, the treatment group demonstrated
higher competence in items related PK, CK, and PCK. These three constructs were also
the three highest subscale scores from the prelesson plan rubric. Again, high scores in
these construct subscales and related items are not surprising given the characteristics of
the treatment group. The consistency between the survey and rubric’s high scoring
subscales is a promising sign of construct validity.
The precourse lesson plan rubric mean scores, however, did not indicate that the
treatment group prepared lesson plans that provided a rationale for the technology used (TK),
provided a rationale for the specific technology choice featured in the lesson (TPK),
demonstrated understanding of technology (TK), or used technologies that enhance
teaching methods (TPK). These items were related to two of the three subscales with the
lowest scores, TK and TPK. The third lowest subscale score was TPACK. These three
subscales matched the low scoring subscales on the presurvey as well, which provides
more evidence for instrument construct validity.
Do TPACK levels, reported and demonstrated, change after participation in
an educational technology course? The treatment group saw all mean item and subscale
scores increase from pre- to postsurvey with all but five items making statistically
significant increases. Items related to implementing technology to support teaching
methods (TPK), creating effective representations with technology (TCK), and
moderating web-based student interactivity (TPK) made the largest increases. All of the
subscale mean score increases were significant with the largest increases seen in TCK,
TPK, and TPACK, which had effect sizes that ranged from 0.61 to 1.14. The medium to
large increases seen in these subscale scores reflect the focus of the course on changing
how content is represented with technology (TCK), teaching with technology (TPK), and
teaching a specific subject matter with technology (TPACK). This suggests that the
course was effective in meeting its learning goals.
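Effect sizes of this kind are typically standardized mean differences such as Cohen's d. For paired pre/post scores, one common convention standardizes the mean gain by the standard deviation of the difference scores; whether the study used this or a pooled-standard-deviation variant is not stated, so the sketch below is illustrative only.

```python
import numpy as np

def cohens_d_paired(pre, post):
    """Cohen's d for paired scores: the mean gain divided by the
    standard deviation of the difference scores."""
    diff = np.asarray(post) - np.asarray(pre)
    return diff.mean() / diff.std(ddof=1)
```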
The smallest and nonsignificant increases were items related to producing lessons
with an appreciation for the topic (PCK), adjusting methods based on student
performance (PK), and addressing software issues (TK). The three subscale scores with
the smallest, yet significant, gains were PK, PCK, and TK. Again, these results are
consistent with the goals of the course, because it was not focused on general pedagogy,
content-specific pedagogy, or the technical aspects of hardware or software.
Two item mean score increases from the control group’s pre- and postsurvey were
significant. The items were related to distinguishing appropriate problem solving
attempts by students (PCK), and to building student knowledge and skills with
technology (TPK). The increases on the PCK and TPK subscales were also significant
with medium effect sizes. Because the control group members were education graduate
students, most of whom were teachers and enrolled in various education courses, it would
not be surprising to see significant increases in PK, CK, or PCK. The significant increase
in TPK was surprising, at least initially. The courses that the control group members
could potentially be enrolled in included educational technology courses. In fact, nine
members of the control group were enrolled in an educational technology course during
the semester the surveys were administered. Excluding these nine members from analyses
did not result in significant differences from pre- to postsurvey for any items or
subscales. This result suggested that the members enrolled in the educational technology
courses accounted for the significant increases in TPK and PCK. These results, combined
with the finding that the presurvey subscale scores of the treatment and control groups
were not significantly different, make it possible to conclude that participation in an
educational technology course contributed to the significant increases in the treatment
group's TPACK item and subscale survey scores.
All mean rubric item scores increased significantly from pre- to postcourse
for items related to TK, TPK, TCK, and TPACK. Items related to student-centered
technology (TPK), providing a rationale for integrating technology (TK), integrating
technology (TK), and implementing technologies that enhance teaching methods (TPK)
had the largest increases. Effect sizes for the significantly different subscale scores
ranged from 0.57 to 1.26. Similar to the survey findings, the medium to large increases
seen in these subscale scores reflects the learning goals for the educational technology
course and suggests that the course was effective. Furthermore, the survey and rubric
measured large gains on the same subscales, excluding TK from the rubric. This result
provides additional evidence for the measures’ construct validity. The discrepancy
between the two measures in TK scores can be explained by the different focus of the TK
items. While the rubric measures TK largely by the inclusion of technology in the lesson
plan, the survey measures the ability to troubleshoot technical issues. The nonsignificant
differences from pre- to postrubric scores found in items related to PK, CK, and PCK
also reflected the course goals.
Are self-reported levels of TPACK evident in artifacts of teacher practice?
Previous results showed that high scoring and low scoring TPACK constructs as a group
were measured similarly between the survey and lesson plan rubric. Furthermore, the
survey and rubric both saw the same group of constructs with the highest gains from pre- to postcourse. These results seemed to pave a path toward convergent construct validity
between the two measures; however, there was only one significant construct correlation
between the presurvey scores and precourse rubric scores. Similarly, the postsurvey
scores and postcourse rubric scores revealed only three significant construct correlations.
The significant correlation pairs on both the premeasure (presurvey PK with prerubric
TCK) and postmeasure (postrubric TK with postsurvey PK, CK, and PCK) were not
interpretable. If the survey and rubric were measuring the same constructs, high and
significant correlations would be expected along the diagonal of the correlation matrix
(e.g., survey PK with rubric PK) demonstrating convergent validity. Within the TPACK
framework, there is no theoretical explanation for a relationship between PK and TCK or
why TK would be related to PK, CK, or PCK. Although both instruments’ measures of
the preliminary state and growth of the treatment group were consistent with previous
research and the learning goals of the course, these correlation results suggest that the
survey and rubric have poor convergent validity. Many researchers have developed new
TPACK measures (Koehler, Shin, & Mishra, 2011), but little work has been done to
cross-validate these measures.
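The diagonal check described above is straightforward to express concretely. The sketch below is hypothetical: the construct column names and randomly generated scores stand in for the actual survey and rubric subscale data, and a real analysis would also test each correlation for significance.

```python
# Sketch of the convergent-validity "diagonal" check: correlate each survey
# subscale with the same-named rubric subscale. Data here are random
# placeholders, so the resulting correlations will hover near zero.
import numpy as np
import pandas as pd

constructs = ["TK", "PK", "CK", "PCK", "TPK", "TCK", "TPACK"]

rng = np.random.default_rng(0)
survey = pd.DataFrame(rng.normal(3.5, 0.5, (30, 7)), columns=constructs)
rubric = pd.DataFrame(rng.normal(3.0, 0.6, (30, 7)), columns=constructs)

diagonal = pd.Series({c: survey[c].corr(rubric[c]) for c in constructs})
print(diagonal)  # high values here would support convergent validity
```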
How does the language differ between artifacts developed before the course
and artifacts developed at the end of the course? Postcourse lesson plans were found
to use more words and a higher percentage of verbs than the precourse lesson plans. The
raw word count differences are not a practical finding, because the postcourse lesson
plans were developed as a final project; therefore, participants likely included more detail
than they typically would for a lesson plan they would prepare for classroom use (Harris,
Grandgenett, & Hofer, 2010). The higher percentage of verbs in the postcourse lesson
plans could reflect a higher proportion of lesson objectives or of teacher and student
activities; most likely, it reflects the greater detail of the postcourse plans. The postcourse lesson plans also
featured a higher percentage of words related to cognitive processes (cause, know, ought)
and achievement (earn, hero, win). The increase in words related to cognitive processes
could be related to lessons that were planned to elicit higher order thinking skills from
students and that were more student-centered. The increase in words related to
achievement could be related to teachers attempting to invest their students in academic
success or to teachers collecting achievement data from their students to make data-driven instructional decisions.
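The category percentages discussed above came from LIWC (Pennebaker, Booth, & Francis, 2007), whose full dictionaries are proprietary. A much-simplified sketch of the idea follows, using only the example words named in the text as stand-in category lists.

```python
# Toy word-category percentage count. The two word lists are only the example
# words mentioned in the text, not the real LIWC dictionaries.
import re

CATEGORIES = {
    "cognitive processes": {"cause", "know", "ought"},
    "achievement": {"earn", "hero", "win"},
}

def category_percentages(text: str) -> dict:
    words = re.findall(r"[a-z']+", text.lower())
    total = len(words) or 1  # avoid division by zero on empty text
    return {name: 100 * sum(w in vocab for w in words) / total
            for name, vocab in CATEGORIES.items()}

print(category_percentages("Students will know the cause and try to win."))
```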
When examining individual word frequencies from the pre- and postlesson plans,
similarities in word usage emerge more often than differences. Both groups of lesson
plans frequently used nouns and verbs like student, class, group, teacher, use, create, and
make. These frequently used nouns and verbs suggest that the lessons are focused on
individual or small group work where students use, create, and make products to
demonstrate their learning. The words conjure the image of a student-centered classroom
with a constructivist or constructionist teacher guiding the learning. The consistency in
these words from pre- to postlesson plan was not surprising. Just as it would be
unexpected for an educational technology class to affect measures of PK, CK, and PCK, it
was also unlikely that these types of words would change after participation in an
educational technology class as the words are core to describing teacher and student
actions in the classroom. With the measured changes in TK, TCK, TPK, and TPACK
from pre- to postlesson plan, it should be expected that the use of technology-related
words would change. However, the top three technology-related words, video, computer,
and technology, were the same for both groups of lesson plans. While the graduate
students wrote the same technologies into their postlesson plans as into their
prelesson plans, the rubric gains suggest that they planned to teach
with these technologies in more effective ways. Although the graduate students were
introduced to new technologies during the course of the semester, implementing new
technologies and new teaching methods simultaneously may not be the best practice.
Making changes to either the content of the lesson, the methods, or the technology makes
sense when still in the development phase of acquiring new knowledge, skills, and
abilities.
Unlike raw word counts, collocation tables list the words that commonly appear
within five words of the key word. These tables provide context for the key word and
help to define what it means in use. Collocation tables show that students
were the center of attention and were constantly in action. Much of the time the students
were using some tool, including technology or a process, to complete a task. Particularly
in relation to describing student action, the verb of choice was use. Use is a generic verb
that has little meaning by itself, and it requires surrounding words and context to clarify
meaning. Sentences in lesson plans with use as the verb place an inappropriate focus on
tools and processes rather than the tasks to be accomplished. The following are two
examples of these types of sentences: “The students will use the internet and a Word
document to record and produce their children’s book” and “This strand requires students
to use digital media and environments to communicate and collaborate with others.”
These sentences suffer from being techno-centric and suggest that the purpose of the
activities described is simply to use the technology rather than to produce, communicate,
or collaborate. Rewritten, the sentences take on a new focus with enhanced meaning:
“The students will record and produce their children’s book with the internet and a Word
document” and “This strand requires students to communicate and collaborate with
others through digital media and environments.” The popularity of use when describing
student actions or lesson objectives may be an artifact of the techno-centric instruction
and professional development the graduate students have engaged in or may simply be a
semantic preference.
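A collocation count of the kind described at the start of this passage can be sketched briefly. The study's tables came from KH Coder (Higuchi, 2012); the snippet below is only an illustrative reimplementation of the windowed count, applied to a made-up sentence.

```python
# Count words appearing within +/- 5 words of a key word, as in a collocation
# table. Illustrative only; not the study's actual tooling.
import re
from collections import Counter

def collocates(text: str, keyword: str, window: int = 5) -> Counter:
    words = re.findall(r"[a-z']+", text.lower())
    counts = Counter()
    for i, w in enumerate(words):
        if w == keyword:
            lo, hi = max(0, i - window), min(len(words), i + window + 1)
            counts.update(words[lo:i] + words[i + 1:hi])  # exclude the key word itself
    return counts

sample = "The students will use the internet and a Word document to record their book."
print(collocates(sample, "use").most_common(5))
```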
In both groups of lesson plans, the most frequently written technology-related
words were video, computer, and technology, and the collocated words suggest that
technology was integrated in promising ways: it was placed in the hands of students to
create products and connected explicitly to standards, though it may still have been one
step removed from the classroom. Students were creating, uploading, and watching videos,
and teachers were connecting technology-related lesson objectives and student activities
with curriculum standards. The contexts suggest that teachers were implementing best
practices when integrating technology, which is an encouraging finding. However, words
collocated with computer suggest that teachers and students do not have access to
computers in their own classroom and must visit a school computer lab to complete
technology-infused lessons. This finding is consistent with the review by Hew and
Brush (2007), which found that a lack of resources (technology, access to technology, time,
or technical support) was the most frequent barrier to integrating educational technology.
Exploring the network of words through a co-occurrence network diagram
provided a visual display that corroborated the findings from raw word counts and
collocation statistics: students and student learning were at the center of lesson plans
surrounded by clusters of activities. Care should be taken in drawing definitive
conclusions from the patterns of the co-occurrence network. The lesson plan corpus was
relatively small, so communities in the network tended to be from a single lesson plan.
This is most evident in the precourse lesson plan network. With a larger lesson plan
corpus, the co-occurrence network may be able to form communities from across lesson
plans so that general trends within the corpus could emerge.
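The network analysis described above can be sketched with standard tools. The snippet below builds a small co-occurrence graph from hypothetical lesson plan sentences and extracts communities with the greedy modularity algorithm of Clauset, Newman, and Moore (2004), as implemented in networkx; the study's diagrams were produced with KH Coder rather than this code.

```python
# Build a word co-occurrence network and detect communities. Sentences are
# hypothetical stand-ins for tokenized lesson plan text.
from itertools import combinations
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

sentences = [
    ["students", "create", "video", "group"],
    ["students", "upload", "video", "computer"],
    ["class", "visit", "computer", "lab"],
    ["teacher", "align", "technology", "standards"],
]

G = nx.Graph()
for sent in sentences:
    for a, b in combinations(sorted(set(sent)), 2):  # co-occurrence within a sentence
        weight = G.get_edge_data(a, b, {"weight": 0})["weight"]
        G.add_edge(a, b, weight=weight + 1)

for community in greedy_modularity_communities(G, weight="weight"):
    print(sorted(community))
```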
How do students with higher levels of TPACK differ from those with lower
levels of TPACK? Answering this question aids in building characteristic profiles of
students who enter an educational technology course with higher ability in the seven
TPACK constructs and those students who demonstrate the most growth. For an
instructor, the knowledge of a student’s age, gender, hours of professional technology
use, and self-reported knowledge of pedagogy, content, and educational technology could
aid in adjusting the course to the strengths and weaknesses of students.
Before the course, female graduate students reported higher initial ability in PK
and CK, while males reported higher ability in TK and TPK on the survey; females,
however, demonstrated higher levels of TK, TPK, TCK, and TPACK on their initial lesson
plans. Following the course, there were no significant differences between the genders.
These observed gender differences related to TPACK constructs were likely unique to
this sample, but it appears that participation in an educational technology course can
close gender gaps in TPACK constructs.
Younger graduate students (21 to 29 years old) are likely to enter a course with
higher reported ability in TK than older graduate students (50 to 59 years old), and this
difference persists even after participation in the course. This finding reinforces the
stereotype that older teachers are less adept at using technology. Additionally, graduate
students with more teaching experience reported higher levels of PK and demonstrated
higher levels of TK. Some consideration of age and teaching experience should be taken
when developing a course for graduate students, but the two characteristics do not have
an impact on the constructs that would be included in a typical educational technology
course like TCK, TPK, and TPACK.
Upon completion of the course, graduate students who described their knowledge
of pedagogy as intermediate or advanced had higher reported PK, CK, and PCK ability
than those who described their knowledge as beginner. Graduate students who described
their knowledge of educational technology as intermediate or advanced had higher
reported ability than beginners in CK, TK, TCK, and TPACK. Reported levels of
knowledge of a content area did not have any relationship with TPACK constructs.
Levels of knowledge of pedagogy, content, and educational technology were also
related to the reported growth a student experienced in the seven TPACK constructs.
Graduate students with advanced knowledge of pedagogy had higher reported gains in
CK and PCK than pedagogy beginners. Students with advanced knowledge of content
had higher reported gains in PCK than beginners. Students with intermediate knowledge
of educational technology had higher gains in PK, CK, TCK, and TPK than educational
technology beginners. If a graduate-level course instructor needs to get a feel for the
TPACK abilities among a group of students for the purposes of tailoring the course, then
asking students to provide their level of knowledge of pedagogy, content, and educational
technology would be a simple way to gather that information.
The number of credit hours taken in pedagogy-, content-, and technology-related
courses was related to reported levels of TPACK constructs, but not to
demonstrated levels of TPACK from lesson plans. Students with more credit hours in
pedagogy reported higher levels of PCK and TPACK. Students with more credit hours in
content reported higher levels of PK and TPACK. Students with more credit hours in
technology reported higher levels of PK, CK, PCK, and TPACK. The number of hours
per day spent using technology for professional purposes was a significant predictor for
all reported levels of all TPACK constructs, but not for any demonstrated TPACK
constructs in lesson plans.
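The predictor relationship reported above amounts to a simple regression. The sketch below uses simulated data, with daily hours of professional technology use predicting a reported TPACK subscale score; the variable names are illustrative analogues of the study's measures.

```python
# Illustrative regression: hours/day of professional technology use predicting
# a reported TPACK subscale score. Data are simulated, not from the study.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
hours = rng.uniform(0, 8, 60)                         # hypothetical predictor
tpack = 2.5 + 0.15 * hours + rng.normal(0, 0.4, 60)   # hypothetical outcome

X = sm.add_constant(hours)            # intercept plus predictor
fit = sm.OLS(tpack, X).fit()
print(fit.params, fit.pvalues)        # slope estimate and its significance
```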
The interviewed graduate students who tended to have higher TPACK scores were
those with more teaching experience in permanent positions who were able to elaborate
on their pedagogical practices, citing specific frameworks that they used to structure
lessons. They also participated in professional development, enrolled in formal courses,
sought out curricular resources, and collaborated with colleagues to improve their
practice. These students also had a long-standing affinity for their subject area and
could describe instances where they saw
effective uses of technology in their subject area and could describe their own teaching
with technology and rationale for using technology. Finally, the students interviewed had
a positive, yet nuanced attitude about technology’s potential impact on teaching. They
thought that technology had its place in certain contexts but was not a panacea for all
learning problems. Unfortunately, students cited increased engagement as the primary
benefit of teaching with technology. This is problematic because engagement is not well
defined for teachers, it is difficult to measure, and it is not directly connected to learning
standards and objectives. Teacher educators need to help teachers discuss the benefits of
technology in concrete terms that describe student outcomes. Overall, the interviews
helped to uncover the qualitative differences among participants that contributed to their
measured TPACK ability.
What teaching and learning experiences influenced students’ knowledge of
and practice in teaching? The graduate students selected for interviews followed a
largely traditional path to teaching by completing an undergraduate teacher preparation
program. The courses, instructors, textbooks, and their mentor teachers in their student
teaching experience were the first big influence on their teaching. In their positions as
full- or part-time teachers, they were influenced by internet resources, curricular resources
they sought out for their classes, and the standards outlined for their subject area.
Additional significant influences on their teaching were district or school professional
development, standardized curriculum adopted by their district or school, and their
colleagues. As all the students were currently enrolled in a graduate program, they cited
their graduate courses as recent influences on their teaching. This finding in particular
contributes evidence that participation in an educational technology course influenced
knowledge and ability in TPACK.
Limitations
As with any study, this research is not without limitations. Both the treatment
group and control group were selected as intact classrooms and therefore may not be
representative of the population of graduate students who enroll in an educational
technology course. It is therefore important not to generalize the results of this study to
dissimilar populations. Furthermore, the interview participants were selected based on
their presurvey scores, but not all those who were selected agreed to participate. This
self-selection could have affected the conclusions drawn from the interviews because the
interview participants could differ systematically from the other participants in the study.
The measurement of TPACK relied on reliable instruments from peer-reviewed
studies; however, the measurement of TPACK is an emerging field and these instruments
were not cross-validated. Construct validity could be compromised by the fuzzy
definitions of constructs in the TPACK framework and by the measures based on those
definitions. Results suggested that neither the survey nor the rubric adequately
discriminated among the TPACK constructs. Furthermore, survey research relies on self-report and is susceptible to presentation bias. Similarly, lesson plans scored with a rubric
are a suitable proxy for teacher practice, but do not replace observations of teacher
behavior in the classroom. Therefore, the measures may not represent the graduate
students' true knowledge, ability, and practice. Multiple raters were used to score lesson
plans to enhance the internal validity and reliability of the lesson plan scores; however,
the two groups of raters could have differed systematically, affecting the scores on the
precourse and postcourse lesson plans.
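Agreement between rater groups of the kind at issue here is conventionally checked with the intraclass correlation (Shrout & Fleiss, 1979). A minimal sketch follows, assuming hypothetical ratings and the pingouin package as one available implementation.

```python
# Sketch of an interrater agreement check via the intraclass correlation.
# Ratings are hypothetical; pingouin is one library implementing ICC.
import pandas as pd
import pingouin as pg

ratings = pd.DataFrame({
    "lesson": [1, 1, 2, 2, 3, 3, 4, 4, 5, 5],   # lesson plans as targets
    "rater":  ["A", "B"] * 5,                   # two raters per lesson
    "score":  [3, 4, 2, 2, 5, 4, 3, 3, 4, 5],
})

icc = pg.intraclass_corr(data=ratings, targets="lesson",
                         raters="rater", ratings="score")
print(icc[["Type", "ICC", "CI95%"]])
```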
Because this was a field study where the researcher has less control than in a
laboratory-based study, participant attrition was a limitation as some participants did not
respond to or provide artifacts for one or more measures in the study. The students’ nonparticipation was likely not random; therefore, their absence could contribute to the
observed results. Participant attrition may have also affected statistical power,
particularly in relation to the analyses of lesson plan rubric scores. Although statistical
tests that compensate for the potential for Type I error were used (e.g., Dunnett’s test),
this study could have suffered from an error rate problem due to the large number of
statistical tests performed.
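One standard guard against the inflated familywise error rate noted above is a step-down correction such as Holm's. A brief sketch with hypothetical p-values:

```python
# Apply the Holm step-down correction to a family of hypothetical p-values.
from statsmodels.stats.multitest import multipletests

pvals = [0.001, 0.004, 0.019, 0.030, 0.041, 0.120, 0.450]
reject, p_adjusted, _, _ = multipletests(pvals, alpha=0.05, method="holm")

for p, pa, r in zip(pvals, p_adjusted, reject):
    print(f"raw p = {p:.3f}  adjusted p = {pa:.3f}  reject H0: {r}")
```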
Implications
The findings in this study primarily have implications for graduate teacher
preparation program educational technology course offerings and how educational
technology course instructors adjust instruction based on their knowledge of students.
Secondarily, the findings have implications for the field of TPACK measurement and
research.
This study suggests that graduate students who are current teachers have the skills
and ability necessary for teaching in traditional contexts. They do not, however, have the
skills and ability to teach effectively in technology-rich environments. This study also
suggests that stand-alone educational technology courses are effective in facilitating
growth in TPACK constructs. If teacher education programs agree that teaching teachers
to teach effectively with technology is an important program goal, educational
technology courses should continue to be offered.
Although the group of participants in this study may not be representative of the
population of graduate students who enroll in educational technology courses, it is safe to
assume that graduate students who do enroll in these courses have a broad range of
knowledge, skills, and experience. It is important for an instructor to know this
information about their students so that appropriate adjustments in the course can be
made to meet student needs. It is impractical for instructors to run their students through a
battery of measures to determine knowledge, skills, and ability relevant to the course and
then analyze the results. Instead, this study identified student characteristics that have significant
relationships with TPACK construct levels.
TPACK is an incredibly fertile area of exploration in educational technology and
teacher education. Since the introduction of the framework less than a decade ago,
hundreds of articles, proceedings papers, presentations, and dissertations have explored some
facet of TPACK (Koehler, 2013). Koehler et al. (2011) identified 66 journal articles,
conference proceedings, dissertations, and a presentation that gathered data on TPACK in
a systematic manner. The number falls to 24 when the focus is narrowed to empirical studies
that appeared in peer-reviewed journals and investigated teachers’ TPACK prior to 2012,
with the bulk published in 2010 and 2011 (Wu, 2013). This series of decreasing numbers
represents, at the same time, the enormous popularity of TPACK and the complexities in
measuring the construct. The knowledge required to teach effectively with technology is
complex and the framework that describes that knowledge is complex, so it follows that
measuring that knowledge through the TPACK lens is also complex. This complexity is
not only manifested in the number of publications, but also in the lack of evidence of
reliability and validity reported. Koehler et al. (2011) found that 69% of the studies they
analyzed did not provide any evidence of reliability, and 90% of the studies provided no
evidence of validity for the measurement methods used.
This current study provided evidence of internal consistency and test/retest
reliability for both the survey and the rubric. This study also examined discriminant
and convergent validity within and between the two instruments. While the
instruments demonstrated acceptable levels of reliability, they did not demonstrate
adequate convergent or discriminant validity. The evidence from this study and the lack
of evidence from other studies regarding the validity of TPACK instruments point to the
necessity for the TPACK research community to spend significant time and resources on
validation studies. Because 88% of TPACK studies evaluated by Koehler et al. (2011)
measured TPACK with more than one instrument, cross-validation studies are
particularly important.
Recommendations
At minimum, a single standalone educational technology course designed to teach
general technological pedagogies (TPK) should be offered. This is the traditional
educational technology course model. A more ideal situation would be a sequence of
three courses, each a prerequisite for the following course. The first course would focus
on building technology knowledge (TK) and skills—in essence, a digital literacy course
designed for teachers. The second course would focus on developing general
technological pedagogies (TPK) similar to most extant educational technology courses.
The third course would focus on developing content-specific technological pedagogy
(TPACK) and content-specific technology knowledge and skills (TCK). Given the
current environment in which general educational technology courses are being eliminated
from undergraduate teacher education programs, this three-course offering is unlikely.
Perhaps this type of course sequence could be more easily built into an educational
technology master’s or certificate program.
Instructors can infer student TPACK levels by gathering common demographic
information (age, gender, years of experience), self-reported knowledge of pedagogy,
content, and educational technology, and hours of technology use for professional
purposes. For example, an instructor could find that a group of students was mostly older
women with more than 6 years of teaching experience, high reported knowledge in
pedagogy and content, low reported knowledge in technology, and few hours of
professional technology use. Based on this student information and the results of this
study, the instructor could infer that the group would be strong in PK, CK, and PCK, but
may struggle in TK, TCK, TPK, and TPACK. With these data, the instructor could then
adjust the course accordingly.
Future Research Directions
Although many reports on TPACK and its relation to teacher education have been
written, presented, and discussed in recent years, there is still much research to be done.
TPACK, although based on an established theoretical framework, is still in its infancy
given that less than a decade has passed since its introduction. As Graham (2011) clearly
described, TPACK has experienced little theoretical development, resulting in varied
construct definitions and opaque boundaries between those constructs. He suggested that
researchers address the issues through investigations of the elements of the theory
(factors, constructs or concepts), how the elements are related, and why the elements and
relationships among them are important in the context of teaching with technology
(Graham, 2011). The primary obstacles to overcome in identifying the elements of the
theory include the clarity of TPACK’s foundational theory (PCK), the complexity of the
TPACK framework, and the imprecise definitions for elements in the framework
(Graham, 2011). Coming to a consensus on whether TPACK (the core construct) is a
mixture of all the elements or a synthesis of those elements, as well as defining the
boundaries between elements, is fundamental to investigating the relationships among
the elements. Articulating the importance of TPACK involves describing the value it
adds over and above PCK and how TPACK contributes to the study of teaching with
technology (Graham, 2011).
TPACK instrument development and validation studies along with investigations
into how preservice and inservice teachers (including graduate students) best learn to
teach with technology are two types of studies that may address the issues with TPACK
raised by Graham (2011). A large-scale instrument development and validation study
could investigate the first two issues raised, the elements of the framework, and how they
are related. The TPACK framework, as originally conceived, consists of seven
constructs, yet outside of self-report survey instruments, few instruments measure all
seven constructs. With the development of new or modified lesson plan rubrics and
classroom observation protocols, an instrument validation study that incorporates a self-report survey, a rubric, and a classroom observation protocol that each measure all seven
TPACK constructs could be conducted. The study would require a large sample size of
preservice and/or inservice teachers to conduct a confirmatory factor analysis that would
provide evidence of construct, convergent, and discriminant validity. Further evidence of
convergent validity could come from significant correlations between the same construct
subscales on each of the three instruments. Additional discriminant validity evidence
could be demonstrated by low and nonsignificant correlations among construct subscales
within an instrument.
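To make the proposed analysis concrete, the sketch below fits a toy two-factor confirmatory model with the semopy package, one of several SEM options in Python. The model description and simulated data are hypothetical stand-ins for the full seven-construct TPACK measurement model, not a specification from this study.

```python
# Toy confirmatory factor analysis with two latent constructs standing in for
# the seven TPACK constructs; indicators are simulated.
import numpy as np
import pandas as pd
from semopy import Model

rng = np.random.default_rng(2)
n = 300  # a real CFA of the full model would need a large sample
tk, pk = rng.normal(size=n), rng.normal(size=n)
data = pd.DataFrame({
    "tk1": tk + rng.normal(0, 0.5, n), "tk2": tk + rng.normal(0, 0.5, n),
    "pk1": pk + rng.normal(0, 0.5, n), "pk2": pk + rng.normal(0, 0.5, n),
})

desc = """
TK =~ tk1 + tk2
PK =~ pk1 + pk2
"""
model = Model(desc)
model.fit(data)
print(model.inspect())  # factor loadings and the TK-PK covariance
```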
To address the issue of why the elements of TPACK and their relationships are of
importance to teaching with technology, an investigation into how preservice and
inservice teachers best learn to teach with technology could be conducted. An ongoing debate exists
within teacher education related to how best to produce teachers who use technology in
content-specific and pedagogically effective ways. The literature reports a variety of
strategies, but the results are conflicting and poorly evaluated (Kay, 2006; Mims, Polly,
Shepherd, & Inan, 2006). For preservice teachers, the most popular strategy is one in
which no specific technology class is offered and technology is integrated into each
teacher education program course (Kay, 2006). The second most popular strategy is to
offer a single educational technology course that focuses on technological skills (Kay,
2006). Training of inservice teachers is not well documented. However, Robinson (2002)
found that preservice and inservice teachers differed in their ratings of the perceived
effectiveness of eight different training methods. This suggests that the method most effective
for one teacher group may not be the most effective for another teacher group.
Although the literature reviews many types of methods to teach preservice and
inservice teachers how to teach effectively with technology, there is a lack of evidence
that suggests what method is the most effective. Within the contexts of TPACK and
cognitive load, this problem is defined by the path choices available and the cognitive
load associated with each of the path choices. These choices include delivering
instruction focused on: (a) TPK first, followed by TPACK, (b) PCK followed by
TPACK, or (c) only TPACK. Graham (2011) stated that the optimal path may be
determined by teacher type and he suggested several hypotheses. One hypothesis was that
it may be “more effective to learn content-specific pedagogies and supporting
technologies simultaneously” (Graham, 2011, p. 1959), which would be a TPACK-only
option. Another hypothesis was that preservice teachers would benefit from a TPK to
TPACK path due to the cognitive load associated with learning new technologies and
content pedagogies simultaneously (Graham, 2011). Finally, he stated that inservice
teachers may benefit from the PCK to TPACK path due to their prior knowledge of
content pedagogies (Graham, 2011).
It would be impractical to develop three different courses, course sequences, or
professional development workshops to test these paths, at least in the initial iteration of
the study. A pilot study could run participants through three computer-based modules in a
lab environment. The three computer-based modules would be differentiated by the path
the modules take to reach the terminal learning goal. Pre- and posttests would be
developed to measure the module objectives. The generalizability of the study would be
limited in that it would not faithfully recreate the contexts in which preservice and
inservice teachers are taught to teach with technology; however, the results of the study
could lead to a more ambitious field-based study.
Conclusion
As educational technology continues to find its way into classrooms, it is
important that teachers know how to teach effectively with technology. The expectations
placed on technology to effect change in education are driven, in part, by the significant
investment schools have made. Despite evidence that suggests that teaching with
technology facilitates learning, the expectations continue to outstrip the results. A
possible explanation for technology’s small but significant effect on learning is
ineffective implementation by teachers unfamiliar with teaching with technology. Teacher
education programs have sought various means to improve teachers’ technology skills,
and one framework for doing so is TPACK.
This study’s mixed-methods research design helped to provide a fuller
understanding of TPACK development and change over time in graduate teacher
education students, an important but underrepresented teacher group in education
research. Both quantitative and qualitative results showed that students are relatively
weak in teaching with technology, but that their perceived and demonstrated ability in
teaching with technology can be improved. Results also showed that TPACK ability
levels vary significantly by student characteristics and that students are influenced in their
development by mentors, colleagues, and curricular resources.
The findings from this study could be used to guide teacher preparation programs
and researchers interested in measuring and developing TPACK. Teacher preparation
programs should evaluate how they prepare their teachers to teach with technology and
use the results of this study as a jumping-off point to find more effective and efficient
ways to prepare their teachers. TPACK research continues to grow while still in its early
phases of investigation. TPACK researchers should recognize that tight construct
definitions and valid and reliable instruments are required if TPACK is to have a
meaningful impact on teacher education. Further investigations of TPACK’s elemental
relationships, the importance of those relationships, and the impact of those elements and
relationships on teacher education are needed.
REFERENCES
Akcaoglu, M., Kereluik, K., & Casperson, G. (2011). Refining TPACK rubric through
online lesson plans. Society for Information Technology & Teacher Education
International Conference 2011, 2011(1), 4260–4264.
Archambault, L. M., & Barnett, J. H. (2010). Revisiting technological pedagogical
content knowledge: Exploring the TPACK framework. Computers & Education,
55(4), 1656–1662. doi:10.1016/j.compedu.2010.07.009
Archambault, L., & Crippen, K. (2009). Examining TPACK among K-12 online distance
educators in the United States. Contemporary Issues in Technology and Teacher
Education, 9(1), 71–88.
Aud, S., Hussar, W., Kena, G., Bianco, K., Frohlich, L., Kemp, J., & Tahan, K. (2011).
The condition of education 2011 (No. NCES 2011-033). Washington, DC: U.S.
Government Printing Office: U.S. Department of Education, National Center for
Education Statistics. Retrieved from http://nces.ed.gov/pubsearch/
pubsinfo.asp?pubid=2011033
Baxter, J. A., & Lederman, N. G. (1999). Assessment and measurement of pedagogical
content knowledge. In N. G. Lederman & J. Gess-Newsome (Eds.), Examining
pedagogical content knowledge: The construct and its implications for science
education (pp. 3–16). Science & Technology Education Library. London: Kluwer
Academic.
Bloom, B. S. (1968). Learning for mastery. Durham, NC: Regional Education Laboratory
for the Carolinas and Virginia. Retrieved from http://www.eric.ed.gov/ERICWebPortal/contentdelivery/
servlet/ERICServlet?accno=ED053419
Brantley-Dias, L., Kinuthia, W., Shoffner, M. B., de Castro, C., & Rigole, N. J. (2007).
Developing pedagogical technology integration content knowledge in preservice
teachers: A case study approach. Journal of Computing in Teacher Education,
23(4), 143–150.
Brown, D. (2006). Can instructional technology enhance the way we teach students and
teachers? Journal of Computing in Higher Education, 17(2), 121–142.
Browne, J. (2009). Assessing pre-service teacher attitudes and skills with the technology
integration confidence scale. Computers in the Schools, 26(1), 4–20.
Caspi, A., Roberts, B. W., & Shiner, R. L. (2005). Personality development: Stability and
change. Annual Review of Psychology, 56, 453–484.
doi:10.1146/annurev.psych.55.090902.141913
Center for Education Policy. (2010). Teacher employment patterns and student results in
Charlotte Mecklenburg schools. Cambridge, MA.
Christensen, R. W., & Knezek, G. A. (2009). Construct validity for the teachers’ attitudes
toward computers questionnaire. Journal of Computing in Teacher Education,
25(4), 143–155.
Clauset, A., Newman, M. E., & Moore, C. (2004). Finding community structure in very
large networks. Physical Review E, 70(6), 066111.
Cochran-Smith, M., & Lytle, S. L. (1999). Relationships of knowledge and practice:
Teacher learning in communities. Review of Research in Education, 24, 249–305.
Cohen, J. (1988). Statistical power analysis for the behavioral sciences. London:
Psychology Press.
Cook, T. D., & Campbell, D. T. (1979). Quasi-experimentation: Design & analysis issues
for field settings. Boston, MA: Houghton Mifflin.
Cox, S. (2008). A conceptual analysis of technological pedagogical content knowledge
(Doctoral dissertation). Retrieved from ProQuest Dissertations & Theses.
Creswell, J., & Plano Clark, V. L. (2011). Designing and conducting mixed methods
research (2nd ed.). Los Angeles, CA: SAGE Publications.
Curts, J., Tanguma, J., & Pena, C. M. (2008). Predictors of Hispanic school teachers'
self-efficacy in the pedagogical uses of technology. Computers in the Schools, 25(1–2), 48–63.
Dawson, K., Ritzhaupt, D. A., Liu, F., Rodriguez, M. P., & Frey, A. C. (in press). Using
TPCK as a lens to study the practices of math and science teachers involved in a
year-long technology integration initiative. Journal of Computers in Mathematics
and Science Teaching.
Driscoll, M. P. (2004). Psychology of Learning for Instruction (3rd ed.). Boston, MA:
Allyn & Bacon.
Faul, F., Erdfelder, E., Lang, A. G., & Buchner, A. (2007). G*Power 3: A flexible
statistical power analysis program for the social, behavioral, and biomedical
sciences. Behavior Research Methods, 39, 175–191.
Franklin, C. (2007). Factors that influence elementary teachers use of computers. Journal
of Technology and Teacher Education, 15(2), 267–293.
Gess-Newsome, J. (1999). Pedagogical content knowledge: An introduction and
orientation. In N. G. Lederman & J. Gess-Newsome (Eds.), Examining
pedagogical content knowledge: The construct and its implications for science
education (pp. 3–16). Science & Technology Education Library. London: Kluwer
Academic.
Graham, C. R. (2011). Theoretical considerations for understanding technological
pedagogical content knowledge (TPACK). Computers & Education, 57(3), 1953–
1960. doi:10.1016/j.compedu.2011.04.010
Graham, C. R., Burgoyne, N., Cantrell, P., Smith, L., St. Clair, L., & Harris, R. (2009).
TPACK development in science teaching: Measuring the TPACK confidence of
inservice science teachers. TechTrends, 53(5), 70–79. doi:10.1007/s11528-009-0328-0
Grossman, P. L. (1990). The making of a teacher: Teacher knowledge and teacher
education. Professional development and practice series. New York, NY: Teachers
College Press, Teachers College, Columbia University.
Groth, R., Spickler, D., Bergner, J., & Bardzell, M. (2009). A qualitative approach to
assessing technological pedagogical content knowledge. Contemporary Issues in
Technology and Teacher Education, 9(4), 392–411.
Harris, J. B., Grandgenett, N., & Hofer, M. (2010). Testing a TPACK-based technology
integration assessment rubric. Society for Information Technology & Teacher
Education International Conference 2010, 2010(1), 3833–3840.
Harris, J. B., Mishra, P., & Koehler, M. (2009). Teachers’ technological pedagogical
content knowledge and learning activity types: Curriculum-based technology
integration reframed. Journal of Research on Technology in Education, 41(4),
393–416.
Hew, K., & Brush, T. (2007). Integrating technology into K-12 teaching and learning:
Current knowledge gaps and recommendations for future research. Educational
Technology Research & Development, 55(3), 223-252.
Higuchi, K. (2012). KH coder. Kyoto, Japan: Ritsumeikan University. Retrieved from
http://khc.sourceforge.net/en/
Hofer, M., & Grandgenett, N. (2012). TPACK development in teacher education: A
longitudinal study of preservice teachers in a secondary M.A.Ed. program.
Journal of Research on Technology in Education, 45(1), 83–106.
Hogarty, K. Y., Lang, T. R., & Kromrey, J. D. (2003). Another look at technology use in
classrooms: The development and validation of an instrument to measure
teachers' perceptions. Educational and Psychological Measurement, 63(1), 139–162.
International Society for Technology in Education. (2000). ISTE national educational
technology standards and performance indicators for teachers. Washington, DC:
International Society for Technology in Education.
Ivankova, N. V., Creswell, J. W., & Stick, S. L. (2006). Using mixed-methods sequential
explanatory design: From theory to practice. Field Methods, 18(1), 3–20.
doi:10.1177/1525822X05282260
Kay, R. (2006). Evaluating strategies used to incorporate technology into preservice
education: A review of the literature. Journal of Research on Technology in
Education, 38(4), 383–408.
Keenan, T., & Evans, S. (2009). The Principles of Developmental Psychology. An
introduction to child development (pp. 3–20). London: SAGE Publications.
Kereluik, K., Casperson, G., & Akcaoglu, M. (2010). Coding pre-service teacher lesson
plans for TPACK. Society for Information Technology & Teacher Education
International Conference 2010, 2010(1), 3889–3891.
Kline, P. (2000). The handbook of psychological testing. London: Psychology Press.
Koehler, M. J. (2013). TPACK bibliography. TPACK.org. Retrieved May 26, 2013, from
http://www.matt-koehler.com/tpack/tpack-bibliography/
Koehler, M. J., Shin, T. S., & Mishra, P. (2011). How do we measure TPACK? Let me
count the ways. In R. N. Ronau, C. R. Rakes, & M. L. Niess (Eds.), Educational
technology, teacher knowledge, and classroom impact: A research handbook on
frameworks and approaches (pp. 16–31). Hershey, PA: Information Science
Reference.
Kramarski, B., & Michalsky, T. (2010). Preparing preservice teachers for self-regulated
learning in the context of technological pedagogical content knowledge. Learning
and Instruction, 20(5), 434–447. doi:10.1016/j.learninstruc.2009.05.003
Kuhn, T. S. (1970). The structure of scientific revolutions. Chicago, IL: The University of
Chicago Press.
Lemke, C., Coughlin, E., & Reifsneider, D. (2009). Technology in schools: What the
research says: An update. Culver City, CA: Metiri Group commissioned by Cisco.
Lewins, A., & Silver, C. (2007). Using software in qualitative research: A step-by-step
guide. Thousand Oaks, CA: Sage Publications Ltd.
MacDonald, R. (2009). Supporting learner-centered ICT integration: The influence of
collaborative and needs-based professional development. Journal of Technology
and Teacher Education, 17(3), 315-348.
Machado, C., Laverick, D., & Smith, J. (2011). Influence of graduate coursework on
teachers’ technological, pedagogical, and content knowledge (TPACK) skill
development: An exploratory study. Society for Information Technology &
Teacher Education International Conference 2011, 2011(1), 4402–4407.
Mims, C., Polly, D., Shepherd, C., & Inan, F. (2006). Examining PT3 projects designed
to improve preservice education. TechTrends, 50(3), 16–24.
Mishra, P., & Koehler, M. J. (2006). Technological pedagogical content knowledge: A
framework for teacher knowledge. Teachers College Record, 108(6), 1017–1054.
Pennebaker, J. W., Booth, R. J., & Francis, M. E. (2007). LIWC: Linguistic inquiry and
word count. Austin, TX: University of Texas Austin. Retrieved from
http://www.liwc.net/
Reiser, R. A. (2001). A history of instructional design and technology: Part I: A history of
instructional media. Educational Technology Research and Development, 49(1),
53–64. doi:10.1007/BF02504506
Robinson, L. (2002) Perceptions of preservice educators, inservice educators, and
professional development personnel regarding effective methods for learning
technology integration skills. A dissertation presented to University of North
Texas, Denton, TX.
Roblyer, M. D., & Doering, A. H. (2009). Integrating educational technology into
teaching. Boston, MA: Pearson.
Romesburg, H. C. (1984). Cluster analysis for researchers. Belmont, CA: Lifetime
Learning Publications.
Schacter, D. L. (1999). The seven sins of memory: Insights from psychology and
cognitive neuroscience. American Psychologist, 54(3), 182–203.
Schmidt, D., Baran, E., Thompson, A., Koehler, M., Mishra, P., & Shin, T. (2009a).
Examining preservice teachers’ development of technological pedagogical content
knowledge in an introductory instructional technology course. Society for
Information Technology & Teacher Education International Conference 2009,
2009(1), 4145–4151.
Schmidt, D. A., Baran, E., Thompson, A. D., Mishra, P., Koehler, M. J., & Shin, T. S.
(2009b). Technological Pedagogical Content Knowledge (TPACK): The
development and validation of an assessment instrument for preservice teachers.
Journal of Research on Technology in Education, 42(2), 123–149.
Schwarz, N., & Sudman, S. (1994). Autobiographical memory and the validity of
retrospective reports. New York, NY: Springer-Verlag.
Shrout, P. E., & Fleiss, J. L. (1979). Intraclass correlations: Uses in assessing rater
reliability. Psychological Bulletin, 86(2), 420–428. Retrieved from
http://www.ncbi.nlm.nih.gov/pubmed/18839484
Shulman, L. S. (1986). Those who understand: Knowledge growth in teaching.
Educational Researcher, 15(2), 4–14.
Tashakkori, A., & Creswell, J. W. (2007). Exploring the nature of research questions in
mixed methods research. Journal of Mixed Methods Research, 1(3), 207–211.
doi:10.1177/1558689807302814
Tashakkori, A., & Teddlie, C. (2003). Handbook of mixed methods in social &
behavioral research. Thousand Oaks, CA: SAGE Publications.
United States Department of Education. (n.d.). Preparing tomorrow’s teachers to use
technology program (pt3). Retrieved from http://www2.ed.gov/programs/
teachtech/index.html
United States Department of Education. (2011). Licensing and certification requirements.
Retrieved October 14, 2011, from http://teach.gov/become-teacher/licensing-and-certification/
Wellings, J., & Levine, M. H. (2009). The digital promise: Transforming learning with
innovative uses of technology. New York, NY: Joan Ganz Cooney Center at
Sesame Workshop. Retrieved from http://www.joanganzcooneycenter.org/
Reports-26.html
Wu, Y.T. (2013). Research trends in technological pedagogical content knowledge
(TPACK) research: A review of empirical studies published in selected journals
from 2002 to 2011. British Journal of Educational Technology, 44(3), E73–E76.
doi:10.1111/j.1467-8535.2012.01349.x
APPENDIX A
TPACK SURVEY
The following survey items are intended to gather information about your background
and experience/knowledge regarding the use of technology in the classroom. Please select
the response that best describes you.
1. What is your name? (PLEASE NOTE: ALL responses will remain strictly confidential
and will be used to evaluate EDT 530)
2. Are you male or female?
• Male
• Female
3. Please provide your age.
4. What grade level(s) do you teach? (Select all that apply).
• Elementary (K-5)
• Middle (6-8)
• High School (9-12)
• Higher Education
• I am not currently teaching.
5. Including this school year, how many years of experience do you have as a K-12
teacher? (Include years spent teaching both full and part time, in both public and private
schools.)
6. If you are a current or former classroom teacher, how would you classify your content
area? (Please select all that apply). If you are not a teacher, please select the content
area(s) with which you most closely identify.
• Math
• Science
• English/Language Arts
• Social Studies
• Foreign Language
• English Language Learning
• Art/Music
• Electives
• Other (please specify)
7. What type of educational degree do you hold? Please select only the degree(s) you
hold that are education related. Select all that apply.
• Bachelor's
• Master's
• Ph.D
• Ed.D
• None
8. What degree are you currently pursuing?
• Bachelor's
• Certificate
• Master's
• Ed.D
• Ph.D
9. In what program are you currently enrolled? Please select from the drop-down menu
below.
10. How many credit hours have you completed specific to your content area (e.g., math,
science, social studies, elementary) when completing your education-related degree(s)?
Please type the number of credit hours.
11. How many credit hours have you completed related to technology (both general
technology skills and educational technology/technology integration) while completing
your education-related degree(s)? Please type the number of credit hours.
12. How many credit hours have you completed related to general or content-specific
pedagogy while completing your education-related degree(s)? Please type the number of
credit hours.
13. How many hours per day do you spend using technology for personal purposes?
Please type the number of hours per day.
14. How many hours per day do you spend using technology for professional purposes?
Please type the number of hours per day.
15. How many hours of reading, researching or learning about educational topics do you
spend per month that is not related to academic classes or in-service training? Please type
the number of hours per month.
16. Overall, how would you describe your knowledge of educational technology?
• Beginning
• Intermediate
• Advanced
• Expert
17. Overall, how would you describe your knowledge of pedagogy?
• Beginning
• Intermediate
• Advanced
• Expert
18. Overall, how would you describe your knowledge of your content area?
• Beginning
• Intermediate
• Advanced
• Expert
19. For each of the statements below, please indicate your level of ability in the following
areas. Please indicate whether your ability is poor, fair, good, very good, or excellent for
each statement.
Poor | Fair | Good | Very Good | Excellent
a. My ability to troubleshoot technical problems associated with hardware (e.g.,
network connections).
b. My ability to create materials that map to specific district/state standards.
c. My ability to use a variety of teaching strategies to relate various concepts to
students.
d. My ability to decide on the scope of concepts taught within my class.
e. My ability to use the results of technology-based assessments to modify
instruction.
f. My ability to distinguish between correct and incorrect problem solving attempts
by students.
g. My ability to address various computer issues related to software (e.g.,
downloading appropriate plug-ins, installing programs).
h. My ability to use technology to help students build new knowledge and skills.
i. My ability to anticipate likely student misconceptions within a particular topic.
j. My ability to determine a particular strategy best suited to teach a specific
concept.
k. My ability to use technology to predict students' skill/understanding of a
particular topic.
l. My ability to implement various technology to support different teaching methods
(i.e., inquiry based instruction, direct instruction, cooperative learning, etc.).
m. My ability to plan the sequence of concepts taught within my class.
n. My ability to moderate web-based student interactivity on discussion boards,
blogs, social networking sites, learning management systems, etc.
o. My ability to use technological representations (i.e., multimedia, visual
demonstrations, etc.) to demonstrate specific concepts in my content area.
p. My ability to encourage interactivity among students using technology including
Web 2.0 tools (i.e., blogs, social networking sites, learning management systems,
etc.).
q. My ability to assist students with troubleshooting technical problems with
computers.
r. My ability to adjust teaching methodology based on student
performance/feedback.
s. My ability to comfortably produce lesson plans with an appreciation for the topic.
t. My ability to implement district curriculum in a technology-rich environment.
u. My ability to assist students in noticing connections between various concepts in a
curriculum.
v. My ability to use various web-based courseware/applications to support
instruction.
w. My ability to use technology to create effective representations of content that
depart from textbook knowledge.
x. My ability to meet the overall demands of teaching effectively in the 21st century.
APPENDIX B
TPACK RUBRIC
Directions
Thank you for agreeing to help me score these lesson plans. You have been given lesson
plans to score and each of those lesson plans was assigned a participant ID. You will
score each lesson plan using a rubric created in SurveyMonkey for ease of data
collection.
1. Read the TPACK primer to familiarize yourself with the general framework and
each of the seven constructs. Please refer back to the primer as needed while you
are scoring the lesson plans.
2. Click the SurveyMonkey link.
3. Enter the participant ID associated with the lesson plan you intend to score.
4. Complete each of the items by selecting the appropriate choice or choices or by
entering text.
5. Click Submit.
6. Complete steps 2 through 5 until you have scored all the assigned lesson plans.
TPACK primer/definitions
Please read the TPACK explanation of the seven components of the TPACK framework
here.
TPACK Rubric (Akcaoglu, Kereluik, & Casperson, 2011)
Construct: Content (Scale: Poor to Excellent)
• Provides clear lesson objectives
• Evidence of content knowledge
• Assessments are aligned with the objectives and instructional strategies

Construct: Pedagogy (Scale: Poor to Excellent)
• Lesson organizes and manages student behavior – explains sequence of events and procedures for students
• Content is meaningful and relevant to students

Construct: Technology (Scale: Poor to Excellent)
• Lesson plan incorporates technology
• Provides rationale for technology choice
• Demonstrates understanding of technology

Construct: Pedagogical Content Knowledge (Scale: Poor to Excellent)
• Selects effective teaching strategies appropriate to subject domain to guide student thinking and learning
• Demonstrates awareness of possible student misconceptions
• Presents appropriate strategies for developing understanding of the subject content

Construct: Technological Pedagogical Knowledge (Scale: Poor to Excellent)
• Chooses technologies enhancing teaching approaches (teacher-centered approaches) – uses technology to present material
• Chooses technologies enhancing student learning (student-centered approaches) – students use technology to explore content and achieve learning goals
• Provides clear rationale for technology choice to deliver instruction

Construct: Technological Content Knowledge (Scale: Poor to Excellent)
• Chooses appropriate technologies for subject domain (mathematics, science)
• Link between technology and content is obvious or explicit

Construct: Technological Pedagogical Content Knowledge (Scale: Poor to Excellent)
• Appropriately uses content, pedagogy, and technology strategies in concert
• Technology enhances content objectives and instructional strategies
APPENDIX C
INTERVIEW SCRIPT
Introduction:
We’re talking to graduate students in Teachers College about their experiences with teaching
with technology and learning to teach with technology.
Just so you know, your answers and our conversation will be totally confidential. We won’t be
reporting names or any other identifiers with our findings.
Interview Questions:
1. What grade and subject do you teach?
2. How many years of teaching experience do you have?
3. Can you tell me what a typical lesson looks like in your classroom (a 50–75 minute lesson)?
a. What influenced you on how you set up or plan a lesson (class, book, article, PD,
observation, experience)?
4. What are some of the most effective teaching methods (general) that you use?
a. Where did you learn about those teaching methods?
b. How did you discover that they were effective?
5. What were some of the influences or resources that helped you gain your knowledge in
your specific content/subject area?
a. How were you drawn to this content/subject area?
6. Outside of teaching, do you use technology in your personal and work life?
a. How did you learn to use those technologies?
b. Do you enjoy using technology in your personal and work life?
7. What are your opinions about the use of technology in teaching?
a. Can you tell me a little bit more about that?
b. Can you tell me how you developed that thought/opinion?
8. Describe an early instance where you saw an effective use of technology in teaching.
a. What did you think after you saw that use of technology?
9. Describe the first time you taught with technology.
a. How did you decide to use technology in your teaching?
b. How did you decide on that specific technology to try out first?
c. How did you choose that specific lesson to integrate technology?
10. What has influenced the use of technology in your teaching?
a. Can you give me some examples?
APPENDIX D
IRB APPROVAL