Running head: MAJOR FIELD TEST SCORES AND PSYCHOLOGY COURSE WORK
Major Field Test Scores Related to Quantity, Quality and Recency of
Psychology Majors’ Course Work at Two Universities
Robert E. Till
Northern Arizona University
Abstract
Senior psychology majors at two universities took the Major Field Test in Psychology (MFT)
offered by Educational Testing Service. Testing was required as part of program assessment
efforts. Total MFT scores, as well as sub-area scores, were examined in relation to quantity,
quality, and recency of students’ course work in psychology. Quantity was defined as number of
courses taken in a psychology sub-area. Quality was defined as mean grade point average in
those courses. Recency was defined as time since the most recent course was taken in the sub-area. For all MFT sub-areas, test scores were correlated with grades in relevant courses.
Quantity and recency of course work were related to MFT performance for some sub-areas more
than others (e.g., strongest for Perception-Physiological sub-area).
Major Field Test Scores Related to Quantity, Quality and Recency of
Psychology Majors’ Course Work at Two Universities
The assessment of educational outcomes has become commonplace. The terms, and the task
itself, have become familiar, whether or not effective techniques of assessment have been put in
place. Psychology, over the past two decades, has joined in these efforts, although sometimes
only reluctantly.
Even by 1988, a majority of states had begun to require assessment (Blumenstyk, 1988;
Halpern, 1988) and institutions of higher education were coming under increasing pressure to
demonstrate their effectiveness in providing a quality education for their students. Given
psychology’s history and its fondness for measurement, one might have expected a sort of
vanguard effort by the field to address the assessment concern at multiple levels. But such
efforts were slow to develop, especially at the psychology department level. In a national survey
of undergraduate psychology programs, Jackson and Griggs (1995) reported that only about one-third of the departments were involved in assessment of the major, with little if any attention to
the students with a psychology minor or those passing briefly through our large introductory
courses in psychology.
Why should departments of psychology be concerned with assessment? Apart from
legislative mandates, which may ultimately filter down to the department level at a particular
institution, colleges and universities may have particular priorities that make assessment more or
less attractive. It is probably no surprise, for example, that Jackson and Griggs found regional
schools more likely to be involved in assessment than were nationally recognized schools. For
most, such assessment is probably more akin to measures of teaching effectiveness than to a
strong research-and-grant agenda.
Beyond this matter of institutional priorities, of course, is the question of what might be
gained by departments as a result of assessment efforts. Morgan and Johnson (1997) suggested
that assessment provides information for accountability, but it also guides the development of
program goals and the setting of priorities. In their view, the end goal of assessment should be
program improvement. The process should be ongoing, multidimensional, and connected to
departmental goals. No easy task! And within their own department, those authors struggled to
balance “the lofty but worthwhile goals of assessment and the practical constraints of achieving
meaningful measurement” (p. 156). Ultimately, their strategy combined elements of knowledge-based measures along with student experience measures.
From their survey results, Jackson and Griggs reported that exit exams were the most frequent
kind of assessment. Most of these used standardized tests of knowledge, with the most common
choice being the Major Field Test (MFT) designed by the Educational Testing Service (ETS). At
the same time, other departments used their own multiple-choice tests, perhaps to gain greater
flexibility in the scope of the assessment effort (or to save on costs associated with ETS testing).
Sheehan (1994), for example, reported on a local test of psychology used at the University of
Northern Colorado as part of a multi-method assessment effort. The local test was used to assess
non-majors as well as psychology majors at each level, from freshman through senior year. The
results suggested that psychology knowledge increased as majors progressed through the years.
Test accuracy for non-majors was 40%, for freshmen and sophomore majors 55%, for junior
majors 65%, and for senior majors 71%. Of course, numerous questions might be raised about
any “improvement” interpretation of these “cross-sectional” differences. And such questions
might range from selective attrition among majors over the college years to subtle changes in the
nature of course work taken over the college years. For Sheehan’s study, there were no test
scores for those psychology majors who dropped out, nor was there any analysis of performance
within particular sub-areas of psychology. Either kind of information might shed light on the
apparent improvement seen across the college years.
The present study began as an exploratory project to examine the test performance of senior
psychology majors in relation to previous course work taken in psychology. It was
acknowledged from the outset that attrition and self-selection might contribute to test
performance scores. That is, test performance might be related to particular aspects of the course
work that had been taken or not taken by seniors during their college years. For example,
students with strong interests in abnormal or clinical aspects of psychology might take more
courses, do well in them, and receive higher scores on the ETS sub-test in this area. Those with
little interest might avoid such courses and generally score lower on the ETS sub-test.
Using the ETS Major Field Test in Psychology, I examined senior psychology majors’
performance on sub-area test scores provided by ETS for individual test-takers. The scores for
two groups of seniors, all taking the test as a departmental requirement, were examined in
relation to readily available characteristics of their course work history, namely, courses taken in
a sub-area, grades earned in those courses, and recency of those courses. Scores were analyzed
separately for these two groups, one from a southwestern public university and the other from a
midwestern public university.
Method
Participants
Fifty-nine senior psychology majors at Northern Arizona University took the MFT, during the
fall or spring of their senior year, as a requirement for the major. About 81% were white (similar
to data reported by the American Psychological Association in 1995 suggesting that 87% of
psychology degrees were awarded to white students). About 73% were female (same as the
APA figure of 73% for women receiving baccalaureate degrees in psychology). This is the
southwestern sample. Additionally, there were 48 senior psychology majors at the University of
North Dakota who took the MFT under similar conditions (i.e., requirement for major). About
85% were white and 83% were female. This is the midwestern sample.
Procedure
The MFT was administered, according to standard ETS procedures, to groups of seniors at the
two universities. MFT scores were then calculated and provided by ETS. Transcripts of course
work were obtained for each participant and were labeled with a code number (rather than
student’s name). A database was created for each of the university samples.
For each participant, overall grade point average (GPA), at the time of taking the MFT, was
entered as well as the overall MFT score. Scores were also entered for the four MFT sub-areas:
Learning/Cognition (LC), Perception/Comparative/Ethological/Sensation/Physiological (PSP),
Clinical/Abnormal/Personality (CAP), and Developmental/Social (DS).
From the transcript, it was possible to identify for each participant those completed courses
relevant to the MFT sub-areas. The psychology curriculum was similar at the two universities,
but differed with regard to electives in some sub-areas. Course relevance to the various sub-areas was determined as follows.
For the LC sub-area, students could have taken course work in Learning, Cognitive
Psychology, Behavior Modification (midwestern sample only), and/or Animal Intelligence
(southwestern sample only). For the PSP sub-area, possible courses were Physiological
Psychology, Perception, Psychophysiology (midwestern sample only), Psychophysiology of
Drugs and Behavior (southwestern sample only), and Ecological Approaches to Perception and
Action (southwestern sample only). For the CAP sub-area, possible courses were Personality,
Abnormal Psychology, and Clinical Psychology. And for the DS sub-area, possible courses were
Social Psychology, Developmental Psychology, Adult Development and Aging, Child and
Adolescent Psychology (southwestern sample only), Advanced Developmental Psychology
(midwestern sample only), and Advanced Social Psychology (midwestern sample only).
Each participant’s transcript was examined for the number of completed courses relevant to a
MFT sub-area, the recency of the most recent completed course within a sub-area, and the mean
GPA of those completed courses in the sub-area. These three values were entered in the
database for each of the four MFT sub-areas.
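To make this coding concrete, the following minimal sketch illustrates one way the three values could be computed from transcript records. It is an illustration only; the record structure, field names, and course titles are hypothetical, not the actual database used in the study.

    # A minimal sketch of the per-participant coding described above.
    # The CourseRecord fields and course names are hypothetical
    # illustrations, not the actual study database.
    from dataclasses import dataclass

    @dataclass
    class CourseRecord:
        name: str           # course title as it appears on the transcript
        grade: float        # grade points earned, on a 4.0 scale
        semesters_ago: int  # semesters before the MFT semester (0 = concurrent)

    def code_subarea(transcript, relevant_courses):
        """Return (quantity, quality, recency) for one MFT sub-area.

        quantity: number of completed relevant courses
        quality:  mean grade points across those courses
        recency:  semesters since the most recent relevant course,
                  or None when no relevant course was completed
        """
        taken = [c for c in transcript if c.name in relevant_courses]
        if not taken:
            return 0, None, None
        quantity = len(taken)
        quality = sum(c.grade for c in taken) / quantity
        recency = min(c.semesters_ago for c in taken)
        return quantity, quality, recency

    # Example: two LC-relevant courses, the most recent taken one
    # semester before testing.
    lc_courses = {"Learning", "Cognitive Psychology",
                  "Behavior Modification", "Animal Intelligence"}
    transcript = [CourseRecord("Learning", 3.7, 3),
                  CourseRecord("Cognitive Psychology", 4.0, 1)]
    print(code_subarea(transcript, lc_courses))  # (2, 3.85, 1)

Applying such a function once per participant and sub-area yields the twelve course work values entered in the database, alongside overall GPA and the five MFT scores.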
Results
Table 1 shows overall MFT scores of the two samples in relation to ETS normative data as
well as overall GPA scores. The normative values differ for the two samples because ETS
released a new version of the MFT, with new normative scores, during the time between the
testing of the two participant samples.
____________________
Insert Table 1 about here
____________________
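The MFT-GPA correlations reported in the note to Table 1 are ordinary Pearson correlations with df = N - 2. As a hedged sketch of that computation (the score vectors below are placeholders, not the study data):

    # Pearson correlation between overall MFT score and overall GPA.
    # Placeholder values for illustration only, not the study data.
    from scipy.stats import pearsonr

    mft = [148, 155, 160, 151, 170, 144, 158, 163]
    gpa = [2.9, 3.1, 3.4, 3.0, 3.8, 2.7, 3.3, 3.5]

    r, p = pearsonr(mft, gpa)  # df for the test is len(mft) - 2
    print(f"r({len(mft) - 2}) = {r:.2f}, p = {p:.3f}")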
In separate analyses of variance for each MFT sub-area, the scores were examined as a
function of number of completed courses in the sub-area (quantity), grades earned in such
courses (quality), and time since taking any such course (recency).
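By way of illustration, each such analysis is a one-way analysis of variance with the MFT sub-area score as the dependent variable and the course work grouping as the factor. A minimal sketch using SciPy's f_oneway, with fabricated placeholder groups rather than the study data, follows.

    # One-way ANOVA comparing MFT sub-area scores across course-quantity
    # groups. Group values are fabricated placeholders, not study data.
    from scipy.stats import f_oneway

    groups = {
        "none":        [52, 48, 55, 50, 47],
        "one":         [54, 57, 51, 60, 58],
        "two or more": [61, 63, 58, 66, 59],
    }

    f_stat, p_value = f_oneway(*groups.values())
    k = len(groups)                                # number of groups
    n = sum(len(g) for g in groups.values())       # total observations
    print(f"F({k - 1}, {n - k}) = {f_stat:.2f}, p = {p_value:.3f}")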
In the analysis on quantity, Table 2 shows sub-area test scores in relation to the number of
courses completed in the sub-area. The number of possible courses varied, for both participant
samples, for reasons related to departmental resources and course frequencies, as well as
students’ interest in the courses. Thus, in the Learning-Cognition sub-area and in the Clinical-Abnormal-Personality sub-area, there were three possible courses each. In the Perception-Physiological sub-area, there were three or four. And, in the Developmental-Social sub-area, there were four or five. In the Learning-Cognition area, and in the Developmental-Social area, there was no clear relationship between the sub-area test score and the number of courses taken. But such a pattern did seem apparent in the Perception-Physiological area and in the Clinical-Abnormal-Personality area. In these latter cases, the higher MFT sub-area scores were obtained
by those with more course work in the area. Of course, this may reflect knowledge accumulation
or simply self-selection. Interestingly, the quantity of PSP courses was significantly related to
that MFT sub-area score for the southwestern sample whereas the quantity of CAP courses was
significantly related to that MFT sub-area score for the midwestern sample. These effects for
PSP and CAP are not completely absent when the participant samples under consideration are
switched, but they are only marginally significant at best.
____________________
Insert Table 2 about here
____________________
The analysis on quality, shown in Table 3, examined MFT scores for students in different
categories based on grades in relevant courses, regardless of number of such courses. Grade
groupings were chosen to facilitate meaningful comparisons between groups. For the
southwestern sample, students with higher grades in relevant courses had higher MFT scores for
all four sub-areas. For the midwestern sample, this trend was only significant for the CAP and
DS sub-areas. It is worth noting that for the other two sub-areas (LC and PSP), many of the
seniors had chosen not to take course work. Yet, as seen for the “None” category, they tended to
do as well on the sub-area test as many of the weaker students who had taken courses relevant to
the particular sub-area. This is most noticeable for the LC sub-area.
____________________
Insert Table 3 about here
____________________
The analysis on recency of sub-area course work, shown in Table 4, may have been affected
by the fact that, in some sub-areas, courses tended to be taught infrequently or were offered at a
“higher level” than in other sub-areas. For example, in the Learning-Cognition area, and in the
Clinical-Abnormal-Personality area, most of the relevant courses were typically taught at the
senior (400-) level. This was true for both universities. Thus, students may have taken a sub-area course early (e.g., sophomore year) and then, if still interested, taken something more during
the senior year. In contrast, most courses in the Developmental-Social area were offered at the
sophomore or junior level, and so students could have turned away from this area by the senior
year. The category groupings here represent a scoring compromise in which course work is
labeled with respect to the most recent work done in the sub-area. That is, it was labeled as
concurrent (with the semester of MFT administration), as taken in the last semester, or as from
some earlier semester. Of course, this approach ignores how early any course work began in the
sub-area. There is some indication that MFT scores in the PSP sub-area did vary significantly
with the recency of course work, whereas differences based on recency were not significant for
the LC and DS sub-areas. Results were mixed for the CAP sub-area with recency related to
MFT sub-area scores in the midwestern sample but not in the southwestern sample.
____________________
Insert Table 4 about here
____________________
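The recency labeling just described amounts to a small categorization rule applied to each participant's most recent sub-area course. A minimal sketch appears below; encoding elapsed semesters as an integer is an assumed convention, not taken from the study records.

    # A minimal sketch of the recency scoring compromise described above:
    # sub-area course work is labeled by the most recent relevant course
    # only. The integer semester encoding is an assumption.
    def recency_category(semesters_ago):
        """Map semesters elapsed before the MFT semester to a label."""
        if semesters_ago is None:
            return "None"           # no relevant course completed
        if semesters_ago == 0:
            return "Concurrent"     # taken in the same semester as the MFT
        if semesters_ago == 1:
            return "Last Semester"
        return "Earlier"

    print(recency_category(0))  # Concurrent
    print(recency_category(3))  # Earlier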
Discussion
Across all four sub-areas, MFT scores were generally found to be related to quality of course
work, i.e., grades in relevant courses. Those with higher grades generally had higher MFT sub-area scores. This finding was most evident for the slightly larger southwestern sample. It was
also evident for two sub-areas in the midwestern sample. For the latter group, failure to find a
significant relationship for the LC and PSP sub-areas may be due to the inclusion in the omnibus
test of large numbers of seniors with no course work in these sub-areas.
Quantity and recency of course work were related to MFT performance for some sub-areas
more than others. Where significant effects occurred, those with more courses in a sub-area
generally scored higher on the MFT sub-area test than those with fewer courses. Those with
more recent course work in the sub-area generally scored higher than those with more distant
course work in the sub-area.
Quantity of relevant courses was related to MFT scores for the PSP and CAP sub-areas, but the
participant samples differed as to which provided the stronger evidence for this effect on a
particular sub-area. Future work might examine the content overlap and spacing of courses in
sub-areas such as these. Certainly, psychology departments may differ in the uniqueness or
redundancy of courses within a sub-area, whether deliberate or unintended.
Recency of relevant course work was clearly related to MFT scores in the PSP sub-area for
both participant samples. A similar effect of recency occurred for the CAP sub-area in the
midwestern sample but not in the southwestern sample. However, in view of the restriction of
range for the midwestern sample (i.e., no participants in the Concurrent and None categories), it
may be best to reserve judgment about recency effects in the CAP sub-area. Informal analysis
suggests that the recency effect is most likely for sub-areas with extensive detail and terminology
that is unfamiliar, perhaps unpredictable. Such material (often found in the PSP courses) may be
less integrated and less well retained over time.
The patterns found here in the analyses of quality, quantity, and recency of course work were
generally similar for the two participant samples. For these two samples of senior psychology
majors, the requirements and electives for the major were comparable and the extent of course
offerings in the sub-areas was similar. Overall college GPA and overall MFT scores were
similar. As shown in Table 5, there were similar findings on the quantity, quality, and recency
analyses for both samples. First, it seems clear that MFT sub-area scores are related to GPA in
the sub-area course work. This is evident across all sub-areas, though not always for both
samples. It is possible that knowledge accumulation in these courses contributes to MFT
performance, but it could also be that student interest or general aptitude underlies the grades and
MFT scores. Second, it appears that the quantity and recency findings vary with the sub-area of
psychology. In the clearest case, it appears that MFT performance in the PSP area is strongly
related to both quantity and recency of course work. Perhaps because of the
biological/scientific/technical focus in this area, there is more of a cumulative effect of course
work and, in addition, more rapid forgetting of details. Finally, it is worth noting that most of the
significant effects in the present study focus on the PSP and CAP sub-areas. It is here that one
finds the key courses called Abnormal Psychology and Physiological Psychology (or something
similar). Stoloff and Feeney (2002) have suggested that these two (along with Social
Psychology and Counseling Psychology) are perhaps the only elective psychology courses that
help to improve MFT sub-area scores.
____________________
Insert Table 5 about here
____________________
The present study provides a demonstration of one kind of program assessment, namely the
measurement of the knowledge base of senior majors. The findings reported here may be
specific to the curriculum offerings, their frequencies, and the students’ involvement with them
in particular departmental programs. Nevertheless, the assortment of interpretational issues seen
here is likely to be encountered at other psychology departments as well. And, indeed, there is
considerable agreement between these results from one southwestern and one midwestern
university.
The progressive increase in psychology knowledge through the college years (described by
Sheehan) may be tied in specific ways to the quantity, quality, and recency of students’ course
work in psychology. These variables are interrelated and operate within a local context of
curriculum offerings, sequencing, prerequisites, grading standards, student interest, and perhaps
other factors less obvious. Even within such constraints, however, factors such as recency,
quantity, and quality of course work (whatever they may index) have been shown to relate to exit
exam performance in the sub-areas of psychology.
Departments interested in a knowledge-component of program assessment should consider
the impact of these factors on assessment outcomes. For example, if the sequencing and course
work in a particular area are causally related to standardized test performance, then a program
might well consider experimenting with the structure, the number, the overlap, and the
sequencing of courses for the major, especially if the goal is to achieve stronger standardized
scores. On the other hand, if student interest and self-selection are the primary factors
underlying the MFT findings here, then such engineering of the curriculum may have little
impact (on increases in standardized scores). Stoloff and Feeney (2002, p. 96) foresaw this when
commenting that students doing best on the MFT were those “with excellent overall academic
skills who completed many of the key courses” whereas those who did poorly on the MFT may
have been those who have “poorer overall academic achievement and probably did not select”
the courses with most relevance for the MFT. Perhaps, as in other areas, the “rich get richer and
the poor get poorer.” The challenge in using knowledge-based program assessment will be to
help the poor get richer.
References
Blumenstyk, G. (1988, July 20). Diversity is keynote of states’ efforts to assess students’
learning. The Chronicle of Higher Education, pp. A17, A25-A26.
Halpern, D. F. (1988). Assessing student outcomes for psychology majors. Teaching of
Psychology, 15, 181-186.
Jackson, S. L., & Griggs, R. A. (1995). Assessing the psychology major: A national survey of
undergraduate programs. Teaching of Psychology, 22, 241-243.
Morgan, B. L., & Johnson, E. J. (1997). Using a senior seminar for assessing the major.
Teaching of Psychology, 24, 156-159.
Sheehan, E. P. (1994). A multimethod assessment of the psychology major. Teaching of
Psychology, 21, 74-78.
Stoloff, M. L., & Feeney, K. J. (2002). The Major Field Test as an assessment tool for an
undergraduate psychology program. Teaching of Psychology, 29, 92-98.
Notes
1. Portions of this research were presented at the annual meetings of the Rocky Mountain
Psychological Association, in Reno, NV, April 2001, and Denver, CO, April 2003.
2. Send correspondence to Robert E. Till, Department of Psychology, P. O. Box 15106,
Northern Arizona University, Flagstaff, AZ 86011; e-mail: [email protected].
Table 1

Characteristics of the Two Participant Samples

                          Southwestern                 Midwestern
Measure              Seniors     ETS Norms        Seniors     ETS Norms

Overall MFT
  M                   153.8        157.7           156.6        156.6
  SD                   13.4         12.0            12.7         13.7
  N                    59          3,071            48          13,204
Overall GPA            3.21          -               3.26          -

Note. Overall MFT scores correlated with overall GPA: for the southwestern sample, r(57) = .56, p < .001; for the midwestern sample, r(46) = .58, p < .001.
Table 2

Number of Completed Sub-Area Courses in Relation to MFT Scores for Learning/Cognition (LC), Perception/Comparative/Ethological/Sensory/Physiological (PSP), Clinical/Abnormal/Personality (CAP), and Developmental/Social (DS) Sub-Areas

                                      MFT Sub-Area Scores
               _____________________________________________________________________
Number of             LC                 PSP                 CAP                DS
Courses           M    SD   N        M    SD   N        M    SD   N       M    SD   N

Southwestern Sample
3 or 4            -     -    0     71.0  13.1   3     69.8  24.6   4    57.2  15.8  13
2               51.7  13.7   7     61.1  15.3   8     57.6  13.2  29    52.9  12.5  20
1               57.8  12.4  31     57.0  13.5  24     52.0  11.9  22    55.7  14.6  22
None            52.8  15.9  21     51.9  10.1  24     48.8   9.6   4    42.5   7.0   4
Omnibus F       F(2,56) = 1.06     F(3,55) = 2.84**   F(3,55) = 2.55*   F(3,55) = 1.30

Midwestern Sample
3 or 4            -     -    0       -     -    0     65.5  15.2  11    61.5  15.5  11
2               57.7   7.8   7     62.2  12.7  20     58.8   9.5  26    57.5  14.9  18
1               53.3  13.3  27     90.0    -    1a    53.1  12.3  11    55.1  13.2  17
None            56.1  11.0  14     56.8  11.8  27       -     -    0    46.5  12.0   2
Omnibus F       F(2,45) = 0.51     F(1,45) = 2.27     F(2,45) = 3.18**  F(3,44) = 0.82

a Not included in the ANOVA on PSP means.
* p < .10. ** p < .05.
Table 3

GPA for Sub-Area Courses in Relation to MFT Scores for Learning/Cognition (LC), Perception/Comparative/Ethological/Sensory/Physiological (PSP), Clinical/Abnormal/Personality (CAP), and Developmental/Social (DS) Sub-Areas

                                      MFT Sub-Area Scores
               _____________________________________________________________________
                      LC                 PSP                 CAP                DS
GPA               M    SD   N        M    SD   N        M    SD   N       M    SD   N

Southwestern Sample
4.0             67.7   9.2  10     64.8  13.1  15     56.9  14.7  18    61.6  13.7  17
3.3-3.8         63.5  17.7   2     68.5  24.7   2     67.0  14.3  11    60.2   9.6   5
2.8-3.2         54.5  10.9  14     53.9  12.3  11     54.6   9.3  12    54.8  14.0  19
2.0-2.7         48.8  10.6  12     52.7  12.5   7     48.4  12.3  14    45.6  11.1  14
None            52.8  15.9  21     51.9  10.1  24     48.8   9.6   4    42.5   7.0   4
Omnibus F       F(4,54) = 3.51**   F(4,54) = 3.46**   F(4,54) = 3.60**  F(4,54) = 4.22***

Midwestern Sample
4.0             57.3  13.2  18     67.0  17.4  10     65.9  11.7   8    65.2  12.8  17
3.3-3.8         57.5   8.5   4     62.0  17.4   8     61.3  12.9  23    61.1  14.2   9
2.8-3.2         50.0  10.6  10     55.0  13.9   3     52.0   8.1  11    49.5  10.9  16
2.0-2.7         39.5   7.8   2       -     -    0     54.2   9.8   6    49.2  16.6   4
None            56.1  11.0  14     56.8  11.8  27       -     -    0    46.5  12.0   2
Omnibus F       F(4,43) = 1.59     F(3,44) = 1.91     F(3,44) = 3.01**  F(4,43) = 4.08***

* p < .10. ** p < .05. *** p < .01.
Table 4

Recency of Course Work in Relation to MFT Scores for Learning/Cognition (LC), Perception/Comparative/Ethological/Sensory/Physiological (PSP), Clinical/Abnormal/Personality (CAP), and Developmental/Social (DS) Sub-Areas

                                      MFT Sub-Area Scores
               _____________________________________________________________________
Most Recent           LC                 PSP                 CAP                DS
Course            M    SD   N        M    SD   N        M    SD   N       M    SD   N

Southwestern Sample
Concurrent      57.6  12.2  16     61.1  17.3  13     53.4  12.9  12    53.4  14.2  17
Last Semester   54.9  14.3   9     62.8  10.3  12     59.6  14.2  20    54.8  14.0  18
Earlier         56.7  13.1  13     52.4  11.9  10     54.8  14.7  23    56.8  14.5  20
None            52.8  15.9  21     51.9  10.1  24     48.8   9.6   4    42.5   7.0   4
Omnibus F       F(3,55) = 0.41     F(3,55) = 3.09**   F(3,55) = 1.00    F(3,55) = 1.19

Midwestern Sample
Concurrent      47.8  13.4   4     74.0  11.7   8       -     -    0    59.2  14.0   9
Last Semester   58.2  12.3  20       -     -    0     65.2  14.6  16    55.0  15.8  12
Earlier         48.5   9.7  10     57.1  10.9  13     55.9   9.5  32    58.2  14.2  25
None            56.1  11.0  14     56.8  11.8  27       -     -    0    46.5  12.0   2
Omnibus F       F(3,44) = 2.15     F(2,45) = 7.34***  F(1,46) = 7.10*** F(3,44) = 0.55

* p < .10. ** p < .05. *** p < .01.
Table 5

Significant Relationships between Majors’ Course Work and Major Field Test Scores for the Two Participant Samples

                                         MFT Sub-Area
                               ________________________________________
Course Work Measure             LC       PSP       CAP       DS

GPA in Relevant Courses
  Southwestern Sample           **       **        **        ***
  Midwestern Sample                                **        ***

Number of Relevant Courses
  Southwestern Sample                    **        *
  Midwestern Sample                                **

Recency of Relevant Courses
  Southwestern Sample                    **
  Midwestern Sample                      ***       ***

* p < .10. ** p < .05. *** p < .01.