Joumal of Social Work Education, 49: 408-419, 2013
Copyright © Council on Social Work Education
ISSN: 1043-7797 print/2163-5811 online
DOI: 10.1080/10437797.2013.796767
Direct and Indirect Measures of Learning Outcomes in
an MSW Program: What Do We Actually Measure?
Orly Calderón
This study offers a unique perspective on assessment of learning by comparing results from direct
and indirect measures in a social work graduate program across two campuses of a single university.
The findings suggest that students' perceptions of learning are not necessarily reflective of content
and applied skills mastery. Perception of learning appears to be a separate construct from actual
learning, and it may reflect the students' satisfaction with their experiences in the program, rather
than their attainment of content and skills. Thus, students' satisfaction with their educational experience deserves the attention of educators and administrators who are interested in improving program
quality.
Accepted: October 2011

Orly Calderón is associate professor at Long Island University and an NYS licensed psychologist. Address correspondence to Orly Calderón, Long Island University, 720 Northern Boulevard, Brookville, NY 11548, USA. E-mail: [email protected]
Postsecondary education programs engage in learning outcomes assessment to improve
curriculum content and delivery and to satisfy accreditation requirements set by accrediting agencies. Price and Randall (2008) note that learning outcomes can be measured directly (namely,
assessing students' mastery of content or skills) or indirectly (i.e., assessing opinions or attitudes toward learning). Similarly, Nichols and Nichols (2005) distinguish between indicators
of knowledge and skills attainment (direct learning outcome measures) and attitudinal indicators
of perception of knowledge and skills attainment (indirect learning outcomes measures).
Traditionally, social work education programs have used student-focused direct and indirect
measures (e.g., tests, papers, and students' course evaluations) as assessment strategies that exist
on a continuum and that assess the same construct: namely, students' learning (Holden, Barker,
Meenaghan, & Rosenberg, 1999). However, more recently, experts point out the differences in
the relative utility of direct and indirect measures of learning outcomes. For example, Suskie
(2009) stated that direct measures of students' learning are "visible . . . evidence of exactly what
students have . . . learned" (p. 20). In contrast, the very construct validity of indirect measures as
indicators of students' learning has been criticized as weak and inaccurate (Allen, 2004). Suskie
(2009) noted that such measures merely provide "proxy signs that students are probably learning"
(p. 20) and therefore are not convincing.
Allen (2004) distinguishes between direct measures that assess actual learning and indirect measures that assess perception of learning. The first category includes standardized and
locally developed embedded assignments and course activities, portfolios of students' work,
and competence interviews. The latter includes surveys, interviews, reflective essays, and focus
groups that target students' perceptions of their learning. Though both types of measurement
are limited, the consensus among experts in outcome assessment of learning in higher education is that despite the resource cost involved in developing direct measures (Allen, 2004), such
strategies feature high construct and content validity in terms of assessing students' learning.
The field of social work education has been considering issues of assessing learning outcomes,
leading to recent shifts in the policies of its accrediting agency, the Council on Social Work
Education (CSWE). Attainment of learning has been redefined in terms of actual practice behaviors (CSWE, 2008) that presumably require direct measures of actual learning while possibly
diminishing the value of indirect measures of perceived learning.
Within the field of social work, several direct assessment strategies have been used successfully. Adams (2004, p. 121) described the use of the Classroom Assessment Techniques (CAT)
in assessing ongoing learning in a social welfare policy class. These brief assessment tools,
which may include anonymous polls or brief responses to open-ended questions, help instructors identify the level of content mastery and comprehension that students attain as the course
progresses. Adams found that data gathered through CATs are useful for identifying and ultimately minimizing barriers to learning knowledge, values, and skills in a social welfare policy
class.
Regehr, Bogo, Regehr, and Power (2007) developed an evaluation system that helped field
instructors to better assess students' performances in the field practicum and to identify students
who were experiencing difficulties. Regehr et al. discovered that use of rating scales to assess
students' behaviors in the field was not useful in effectively evaluating the competency level
of students' performances. Instead, they developed an instrument composed of vignettes that
described contextual practice behaviors. The participating field instructors were asked to match
the practice behavior patterns of their second-year master of social work (MSW) students
with the scenarios described in the vignettes. These vignettes were assigned score values ranging
from exemplary to unsuitable for practice, but to avoid a response bias, the score assignment of
each vignette was not revealed to the field instructors. The findings indicated that field instructors
were able to reliably identify the competency level of their students by matching their students'
behaviors with several vignettes with the same score values. Regehr et al. point out that the
vignette matching method of assessment was effective in identifying students who had exhibited
potentially problematic performance levels.
Smith, Cohen-Callow, Hall, and Hayward (2007) assessed MSW students' development of
critical thinking skills in the context of critical appraisal of research. Using a pre- and posttest
design, Smith et al. administered the Critical Appraisal Skills Program, an aptitude test, to MSW
students enrolled in an introductory research class. The results suggested a small but statistically
significant increase in students' correct responses to several of the test's items on the posttest,
compared to the pretest. The authors described the utility of such assessment in helping to
shape social work research curriculum to focus on training students to be effective consumers
of research.
Alter and Adkins (2006) used direct measures of learning outcomes to assess writing skills of
applicants to an MSW program. Students were asked to produce a writing sample in response to a
prompt about a case study that they had previously discussed during their orientation. To measure
students' writing proficiency, the writing samples were assessed along the following dimensions:
(a) students' understanding of the prompt; (b) students' abilities to extract sufficient and relevant
information from the case study to support their written positions; (c) students' abilities to logically organize their written responses; (d) students' abilities to use a persuasive voice in their
writing; and (e) students' abilities to demonstrate use of appropriate writing mechanics (Alter
& Adkins, 2006, p. 344). The authors described how the results of this direct assessment of students' writing skills can advance a movement toward a social work curriculum that provides more
support for writing skills.
Concomitant with the focus on developing and using direct measures of learning outcomes,
much work has been done during the past decade to develop valid and reliable indirect measures
of learning in social work programs. For example, these measures assess students' perceptions
of self-efficacy (Holden, Anastas, & Meenaghan, 2003, 2005; Holden, Barker, Rosenberg, &
Onghena, 2008; Holden, Meenaghan, Anastas, & Metrey, 2002). Such assessment instruments
derive their construct and criterion validity from Bandura's definition of perceived self-efficacy
and its relationship to task performance and perseverance (as cited in Holden et al., 2002). Results
from studies that test the psychometric properties of self-efficacy scales indicate that they are
valid, reliable, and reasonably correlate with measures of actual performance (Holden et al., 2003;
Holden et al., 2008), especially in relationship to mastery of specific content areas (Holden et al.,
2005).
However, evidence also exists to suggest that indirect measures of learning are not good predictors of actual learning as measured by mastery of content and skills. For example, Fortune, Lee,
and Cavazos (2005) found that social work students' ratings on achievement motivation scales
(including self-rating of skills and self-efficacy) were not significantly correlated with the field
instructors' evaluations of the students' performance. Outside of the social work education field,
Price and Randall (2008) found that results from students' knowledge surveys did not correlate
with actual knowledge in a management program research class at Georgia Southern University.
The question arises whether direct and indirect measures of learning outcomes exist along a
continuum of evaluation strategies (a perspective that may indeed discourage the use of indirect
measures in the assessment of actual learning) or whether they represent a two-factor solution
that provides data on two separate constructs (a perspective that may validate indirect measures
as indicators of learning experiences other than actual content mastery and skills attainment).
The current study offers a unique opportunity to address this question by comparing results from
direct and indirect measures of learning outcomes in a social work graduate program across two
campuses of a single university in the United States. Though MSW students on either campus
followed the same curriculum in terms of content and skills acquisition, they represent two separate and independent cohorts. Thus, although the data were collected in a single university, they
reflect scores from two independent groups, thereby increasing the validity of this study.
This study is based on educational outcome data that were collected at the end of the
2007-2008 academic year (AY) from students in the foundation year of an MSW program at
a large private university in the New York metropolitan area. The study tests the hypothesis that
there will be significant differences between the two cohorts' scores on direct and indirect measures of learning outcomes. Although the two campuses are located in different geographical
settings (urban and suburban), the location setting per se is not hypothesized to be an important
factor. Therefore, and for the sake of convenience, the two cohorts are referred to as Cohorts A
and B, respectively, throughout this article. Scores on direct measures of learning outcomes are
operationally defined using two separate assessment mechanisms: (1) students' scores on the Area
Concentration Achievement Test, Version A (PACAT, n.d.), and (2) field instructors' ratings of
students' performances as reflected on the Field Instructors' Evaluation of Student Performance
(Barretti, 2005). Scores on indirect measures of learning outcomes are operationally defined as
students' ratings of course objectives achievement as reflected by scores on the Students' Course
Objective Achievement Survey (S-COAS), an evaluation instrument developed within the social
work department of the university. A detailed description of these instruments appears in the
Instruments subsection of the Method section.
METHOD
Participants and Sampling Procedures
This study uses an availability sample. Educational outcome assessment data were collected from
students and field instructors of the MSW program's foundation year. The participants included
64 students, and 80% of those attended Campus B. The field instructors were social workers from
various social service agencies, schools, and hospital agencies in close geographic proximity to
the two campuses.
Procedure
One faculty member was assigned to oversee the program's outcome assessment on both campuses. This person was responsible for overseeing data collection and analysis and then providing
the results to faculty, interested students, and other stakeholders in a timely manner for the purpose of
curriculum and program development. The data reported in this article were collected as part
of the program's required ongoing assessment procedure. The university's Office of Sponsored
Research has granted this author permission to share these data in the aggregate via this article.
Data regarding achievement of course objectives (indirect measures) were collected from students via surveys distributed in class on the last day of each course. The students completed these
surveys voluntarily and anonymously. Data regarding students' mastery of foundation curriculum
content (direct measures) were collected at the end of spring semester of the concentration year
via a standardized test that was administered to all second year students during class time in the
last week of the semester. Participation in this test was voluntary, and scores from this particular
measure were not factored into the students' grades or academic records. This article presents
data from the content mastery test, which was administered at the end of AY 2008-2009 to students who completed the indirect measures regarding achievement of the same content objectives
the year before. Hence, the results reflect data collected regarding foundation year curriculum
despite administration of the test during the concentration year (which is a result of strategic
constraints in the program's delivery of certain foundation year curriculum). Data regarding
achievement of practical skills that correspond to program objectives (direct measures) were collected at the end of each semester during the AY 2007-2008 from the field instructors of the same
students who completed the indirect measures. Although these surveys are not anonymous and
are used for grading students' performance, this author did not have access to students' identifying information when analyzing these data, thus maintaining confidentiality and avoiding
experimenter bias through a blind design.
Instruments: Indirect Measures of Learning Outcomes
Students' Course Objective Achievement Survey (S-COAS). This instrument was originally designed in compliance with the CSWE Educational Policy and Accreditation Standards
(EPAS; 2001) and was recently revised (after the completion of the current study) to comply with
the new CSWE EPAS (2008) and to reflect assessment of learning competencies rather than learning objectives. This instrument, which is a result of a collaborative effort of the social work faculty
members at the university, assesses the degree to which students believe they have achieved the
course objectives as outlined in the course syllabus. The instrument derives its content validity
from the CSWE EPAS (2001), which has served as the basis for the course objectives for each
course in the program. Students are asked to rate the degree to which course learning objectives
have been met using a 5-point Likert scale ranging from 1 = This objective was met to a very
small degree to 5 = This objective was met to a very great degree. Consistent with other self-report measures of indirect learning (Suskie, 2009), this instrument draws its construct validity
from the assumption that students' perceptions of course objective achievement are indicative of
students' perceptions of their actual learning of course content. The S-COAS features a good level
of reliability (a Cronbach's alpha reliability coefficient of .76). A sample S-COAS instrument
appears in Figure 1.
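As a point of reference for the reported alpha of .76, Cronbach's alpha is conventionally computed from the item variances and the variance of the summed scale. The following is a minimal sketch in Python; the ratings array is hypothetical illustration data, not the S-COAS data, and this is not the author's analysis code.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, n_items) array of Likert ratings."""
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1)      # variance of each item
    total_variance = items.sum(axis=1).var(ddof=1)  # variance of the summed scale
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

# Hypothetical 5-point Likert responses (rows = students, columns = objectives)
ratings = np.array([[4, 5, 4], [3, 4, 4], [5, 5, 5], [2, 3, 3], [4, 4, 5]])
print(round(cronbach_alpha(ratings), 2))
```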
Instruments: Direct Measures of Learning Outcomes
Area Concentration Achievement Test (ACAT) Version A. This standardized test published by PACAT (n.d.) assesses students' comprehension and mastery of CSWE curriculum
content areas. The ACAT has a mean of 500 and a standard deviation of 100. Our students took
Version A. This form is thought to have better content validity than other available forms because
it features eight content areas that reflect CSWE foundation curriculum, and it corresponds well
with our program curricular objectives. The eight content areas of Version A are (1) diversity,
(2) populations at risk, (3) social and economic justice, (4) values and ethics, (5) policy and services, (6) social work practice, (7) human behavior in the social environment, and (8) research
methods. The ACAT features moderate reliability, which is evident through its mean odd-even
reliability coefficient of .67.
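For context, an odd-even reliability coefficient is a split-half estimate: examinees' scores on the odd-numbered items are correlated with their scores on the even-numbered items, and the correlation is then stepped up to full test length with the Spearman-Brown formula. A minimal sketch under those standard definitions, using simulated right/wrong responses rather than actual ACAT data:

```python
import numpy as np

def odd_even_reliability(items: np.ndarray) -> float:
    """Split-half reliability: correlate odd/even half scores, apply Spearman-Brown."""
    odd_half = items[:, 0::2].sum(axis=1)   # items 1, 3, 5, ...
    even_half = items[:, 1::2].sum(axis=1)  # items 2, 4, 6, ...
    r = np.corrcoef(odd_half, even_half)[0, 1]
    return 2 * r / (1 + r)                  # Spearman-Brown step-up to full length

# Simulated right/wrong responses (rows = examinees, columns = items)
rng = np.random.default_rng(42)
ability = rng.normal(size=(200, 1))  # latent examinee ability
responses = (rng.random((200, 40)) < 1 / (1 + np.exp(-ability))).astype(int)
print(round(odd_even_reliability(responses), 2))
```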
Field Instructors' Evaluation of Student Performance (FIESP). This quantitative instrument was designed originally by Marietta Barretti (2005) of the Social Work Department at
Long Island University; it derives its content validity from the 2001 CSWE EPAS. Each program learning objective corresponds to one of the objectives stated in the 2001 EPAS and has
been operationally defined to reflect a corresponding practical skill. Using a 6-point Likert scale,
field instructors are asked to rate the students' performances on each of these skills. The scale
features the following response options: 1 = No evidence of behavior; 2 = Behavior present
in a minimum degree; 3 = Behavior present to some degree; 4 = Behavior present most of the
time and to the degree expected; 5 = Student surpassed expectations of a graduate social work
student; and N = No opportunity to observe (Long Island University, n.d.).
In 2009 the instrument was revised to ensure its compliance with the new CSWE EPAS (2008);
it is currently being used for ongoing evaluation of the program learning competencies.
MSW PROGRAM
STUDENTS' COURSE OBJECTIVE ACHIEVEMENT SURVEY
SWK 677 (Soc. & Psych. aspects of subs. abuse)
Spring 2008
Section
Semester/Year:
This survey is your opportunity to evaluate individual courses and the curriculum. Your
responses will be compiled and analyzed and will be used by faculty and committees to make
changes in the curriculum where needed. A student volunteer should collect and return the
surveys to the Social Work Department.
Please evaluate the degree to which each course objective has been met using the following scale:
1. Very small degree
2. Small degree
3. Moderate degree
4. Great degree
5. Very great degree
By the end of this course, students will:
1. Identify various governmental policies regarding alcohol and drug use and the practices
of the entities charged with implementing these policies.
2. List at least five (5) socio-ethno-cultural groups and be able to discuss at least three
(3) specific strategies for working with each group.
3. Develop a culturally competent style that serves as their foundation for their work with
persons who are substance abusers.
FIGURE 1 Sample of Students' Course Objective Achievement Survey
(S-COAS) items. Note. MSW = master of social work; Soc. =
sociology; Psych. = psychology; subs. = substance; SWK = social work
(course title).
Examples of FIESP items that correspond to the program's learning objectives appear in Table 1. Reliability
scores for the FIESP version used in this study are not available.

TABLE 1
Sample FIESP Items That Correspond to Program's Learning Objectives

Program objective: Apply critical thinking skills within the context of professional social work practice.
  FIESP indicator: Develops strategies for approaching differential tasks.
Program objective: Understand the value base of the profession and its ethical standards, principles, and practice.
  FIESP indicator: Maintains client confidentiality.
Program objective: Practice without discrimination as reflected in field practice.
  FIESP indicator: Communicates in a manner that is sensitive and respectful of clients' diversities.

Note. FIESP = Field Instructors' Evaluation of Student Performance.
Design
This educational outcome assessment study used a group survey design that compares scores on
direct and indirect measures of learning outcomes across the entire MSW student population from
both campuses (for a detailed discussion of survey design research models, see Engel & Schutt,
2008).
Data Analysis
Achievement of program learning objectives across the curriculum. Descriptive data
are calculated for the S-COAS for Cohorts A and B. The mean score of each course objective
achievement is entered as a data point for calculating achievement of the corresponding program
objective, using a curricular mapping technique. This method of data analysis treats N as the
number of course objectives that correspond to the program's 12 foundation learning objectives,
rather than as the number of students who have completed the S-COAS. These program objectives
are reflected in 70 course objectives across the foundation curriculum. Essentially, N reflects the
number of times that students have been exposed to opportunities to attain a specific program
learning objective. Measures of central tendency are then calculated for each program objective.
An independent t-test analysis is used to test for differences between the cohorts' ratings of
objective achievement.
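To make the comparison step concrete, the following is a minimal sketch of an independent-samples t test in Python using SciPy. The two arrays are hypothetical placeholder ratings (one value per course objective mapped to a single program objective), not the study's S-COAS data, and this is not the author's analysis code.

```python
from scipy import stats

# Hypothetical per-course-objective mean ratings for one program objective
cohort_a = [4.0, 3.8, 4.1, 3.9, 4.2, 3.7]
cohort_b = [4.4, 4.5, 4.3, 4.6, 4.4, 4.5]

# Independent-samples t test comparing the two cohorts' ratings
t_stat, p_value = stats.ttest_ind(cohort_a, cohort_b)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```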
FIESP. Measures of central tendency are calculated for each cohort (i.e., students from
Campus A and B) for each field course. Ratings of indicators of program objectives are averaged to yield a mean score that represents the student's degree of skill acquisition associated
with the respective program learning objectives.
Mastery of curriculum content areas. Results from the ACAT are analyzed by an independent agent from PACAT, Inc. to provide standardized ipsative and normative scores. The scores
reported here were achieved by the same students who had completed the S-COAS and the FIESP
during the AY 2007-2008 to yield single-group data.
RESULTS
Results were analyzed for direct and indirect indicators of content mastery and for direct
indicators of skills (practice behaviors) attainment.
Content Mastery
Indirect measure/curricular mapping of S-COAS. Results from the indirect measure
(curricular mapping of S-COAS) indicate that students on each campus rated all the learning
objectives as achieved to a great degree (M > 3.5). However, students from Campus B rated
achievement of six learning objectives as significantly higher than students from Campus A (see
Table 2).

TABLE 2
Achievement of Foundation Program Objectives Across the Curriculum:
Significant Differences Between Campuses

Learning Objective                                   Mean, Campus B   Mean, Campus A   Significant Difference
1. Critical thinking                                      4.28             4.01        t(157) = -2.18*
2. Practice without discrimination                        4.55             3.86        t(22) = -3.46*
3. Advocacy                                               4.60             3.83        t(21) = -5.33*
4. Connection between history and current practice        4.35             3.04        t(28) = -2.49*
5. Evaluate research                                      4.42             3.93        t(28) = -2.35*
6. Communication skills                                   4.36             3.96        t(37) = -2.46*

*p < .05.
Direct measure—ACAT. Results on the ACAT were computed separately for the two
cohorts. The results indicated that students from Campus A of the MSW program achieved an
overall standard score of 544, which is in the 67th percentile of the standardized comparison
group. This means that students from Campus A scored higher than 67% of other MSW students across the United States who took this test during the past 6 years. The results further
indicated that students from Campus B of the MSW program achieved an overall standard score
of 475, which is in the 40th percentile of the standardized comparison group. Table 3 presents the
respective cohorts' standard scores on each of the ACAT's content areas.

TABLE 3
ACAT Content Area Scores

                                   Standard Score            Percentile
Content Area                     Campus B   Campus A     Campus B   Campus A
Overall                             475        544           40         67
Diversity                           456        508           33         53
Populations at risk                 484        509           44         54
Social and economic justice         516        539           56         65
Values and ethics                   441        521           28         58
Policy and services                 428        500           24         50
Social work practice                471        553           39         70
HBSE                                507        538           53         65
Research methods                    548        585           68         80

Note. ACAT = Area Concentration Achievement Test; HBSE = human behavior in the
social environment.
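As a check on the reported percentiles, a standard score on a scale with a mean of 500 and a standard deviation of 100 converts to a percentile through the normal curve; assuming (as the reported figures suggest) that ACAT standard scores are approximately normally distributed, the two overall scores work out as follows:

$$z_{A} = \frac{544 - 500}{100} = 0.44, \qquad \Phi(0.44) \approx .67$$
$$z_{B} = \frac{475 - 500}{100} = -0.25, \qquad \Phi(-0.25) \approx .40$$

Both values match the 67th and 40th percentiles reported above.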
Skills Attainment

Direct measure—FIESP. Students on each campus demonstrated to their field instructors
that practical skills associated with each of the foundation program's objectives were present
most of the time and to the expected degree, as evidenced by a mean score of 4.0 or higher on each
of the indicators associated with specific program objectives (see Table 4). An independent t test yielded no significant differences between the performance levels of students from the two
campuses on any of the practical skills indicators.

TABLE 4
Mean Scores on FIESP

Course Objective                     Campus    Mean    Standard Deviation
Critical thinking                       A       4.26          .01
                                        B       4.22          .01
Ethical standards                       A       4.34          .00
                                        B       4.39          .14
Practice without discrimination         A       4.20          .02
                                        B       4.21          .12
Advocacy                                A       4.30          .26
                                        B       4.34          .19
History and current practice            A       4.22          .14
                                        B       4.21          .19
Generalist skills                       A       4.22          .07
                                        B       4.31          .07
HBSE                                    A       4.10          .04
                                        B       4.10          .33
Social policy                           A       4.55          .02
                                        B       4.66          .01
Evaluate research                       A       4.82          .01
                                        B       4.29          .01
Communication skills                    A       4.36          .06
                                        B       4.35          .13
Use of supervision                      A       4.32          .03
                                        B       4.31          .05
Organizational change                   A       4.43          .07
                                        B       4.36          .08

Note. FIESP = Field Instructors' Evaluation of Student Performance; HBSE = human
behavior in the social environment.
DISCUSSION
This outcome assessment study compares the results of direct and indirect assessment of learning
among MSW students. The results provide only partial support for the hypothesis that differences exist between students' perceptions of their learning and their actual learning. The findings
indicate a consistency between students' perceptions of their learning (indirect measure of learning) and their attainment of practice skills as rated by their field instructors (direct measures of
learning). The findings suggest that students believe they have achieved the program learning
objectives to a great degree, and they have also demonstrated satisfactory presence of practice
behaviors associated with those learning objectives. Further, the perceptions of Cohort A students regarding achievement of program learning objectives are consistent with their demonstrated
mastery of learning as evidenced by their scores on an objective standardized test. (The reported
perceived achievement of learning objectives to a great degree is conceptually consistent with
the above average achievement on a standardized content mastery test.) However, students from
Cohort B demonstrated a difference between their reported perceived learning and their actual
learning. By their reports, they believed they had achieved the program's learning objectives to
a great degree but demonstrated a below average performance on a standardized content mastery
test. Also, it is interesting that students from Cohort B, although demonstrating a lower level of
content mastery compared to students from Cohort A (40th percentile vs. 67th percentile), perceived that they had achieved the program's learning objectives to a greater degree than students
in Cohort A along several dimensions: critical thinking, practice without discrimination, advocacy, connection between history and current practice, evaluating research, and communication
skills.
Essentially, the findings suggest a pattern that mimics an interaction effect for the measure
(direct vs. indirect) × cohort (A and B, respectively) variables, because the cohort that reported
a lower perception of achievement (indirect measure) in the curricular areas of nondiscriminatory practice, advocacy, and research evaluation, actually scored higher on direct measures of
learning within the corresponding content areas of diversity, policy and services, and research
methods. Though it is possible that the ACAT scores (direct measures) could have been affected
by the advanced year curriculum (the test was administered at the end of the second year),
such an effect does not negate the discrepancy between groups on this measure, nor the within-group pattern of scores when this measure is compared with the indirect measures. These findings
indicate that students' perceptions of their educational achievement are not necessarily reflective of their abilities to demonstrate mastery of content and applied skills. Thus, it appears
that students' perceptions of their educational success may be a separate construct from actual
educational success. Perception of learning may reflect the students' satisfaction with their experiences in the program, which is not necessarily related to their actual learning. Satisfaction
may be influenced by factors unrelated to actual learning, such as students' perceptions of
the social and emotional components of their learning environment. In the absence of more
information about students' characteristics (e.g., prior educational experiences), it is difficult to
speculate on factors that may affect perception of learning. Nevertheless, and regardless of factors that affect perception of learning, the data indicate that indirect measures address an aspect
of the learning experience that may be independent of actual attainment of content and skills.
Therefore, indirect measures may be useful for evaluating a construct that, though part of
the educational experience, is entirely different from the learning outcomes they were originally
intended to indicate.
The findings indicate that a two-factor solution may be a more appropriate approach to assessment of learning outcomes than either a continuum approach or a direct measure approach alone.
Direct measures can effectively assess knowledge and skills attainment, whereas indirect measures can effectively measure students' learning experiences. Additional research is required to
identify factors that play a role in students' ratings of the learning experience and to further study
the relationship among those factors, perception of learning, and actual learning.
These findings are particularly timely for the field of social work education in view of the
current EPAS requirement (CSWE, 2008) that social work programs focus on measurement of
actual competencies, thus challenging the indirect approach to assessment and steering social
work programs toward an exclusively direct assessment of learning outcomes. A two-factor solution, supported by the current findings, suggests that as social work education programs move
toward assessment of competencies (i.e., actual practice behaviors), faculty must not neglect
assessment of other factors that play a role in educational outcomes. In this way, they can capture
the full educational experiences of students and develop evidence-based strategies for quality
improvement of those experiences.
This study is limited because it uses a small sample. In addition, this study uses nonstandardized instruments that have been designed with various scales of measurement, thus making it
difficult to conduct an accurate comparison of the scores. To protect students' anonymity and
confidentiality, access to data from one instrument in particular (the FIESP) was limited, which
further interfered with establishing the psychometric properties of the instrument. Procedures
for future data collection in the department are being modified to address this problem for the
purpose of future research. Finally, this study does not provide insight into factors that affect
students' degree of perceived learning. Currently, this author is completing a study that incorporates a qualitative component into an educational outcomes assessment to better understand how
students think about their learning experiences.
Nevertheless, the strength of this study lies in the unique opportunity it affords to compare
educational outcomes of two cohorts within the same program, with the same curriculum content.
The findings, particularly those that reflect differences between cohorts, have implications for
future studies, which can look at factors that may be associated with actual and perceived learning
outcomes, such as differences in students' academic preparedness and admission criteria and in
the way that content, albeit identical, is infused into the curriculum.
REFERENCES
Adams, P. (2004). Classroom assessment and social welfare policy: Addressing challenges to teaching and learning.
Journal of Social Work Education, 40, 121-142.
Allen, M. J. (2004). Assessing academic programs in higher education. Bolton, MA: Anker Publishing.
Alter, C., & Adkins, C. (2006). Assessing students' writing proficiency in graduate schools of social work. Journal of
Social Work Education, 42, 337-353.
Barretti, M. (2005). Field instructors' evaluation of student performance (Unpublished instrument). Department of Social
Work, Long Island University, NY.
Council on Social Work Education. (2001). Educational policy and accreditation standards. Alexandria, VA: Author.
Council on Social Work Education. (2008). Educational policy and accreditation standards. Retrieved from http://www.
cswe.org/Accreditation/2008EPASDescription.aspx
Engel, R. J., & Schutt, R. K. (2008). Survey research. In R. M. Grinnell & Y. A. Unrau (Eds.), Social work research and
evaluation: Foundations of evidence-based practice (pp. 265-304). New York, NY: Oxford University Press.
Fortune, A. E., Lee, M., & Cavazos, A. (2005). Achievement motivation and outcome in social work field education.
Journal of Social Work Education, 41, 115-129.
Holden, G., Anastas, J., & Meenaghan, T. (2003). Determining attainment of the EPAS foundation program objectives:
Evidence for the use of self-efficacy as an outcome. Journal of Social Work Education, 39, 425-440.
Holden, G., Anastas, J., & Meenaghan, T. (2005). EPAS objectives and foundation practice self-efficacy: A replication.
Journal of Social Work Education, 41, 559-570.
Holden, G., Barker, K., Meenaghan, T., & Rosenberg, G. (1999). Research self-efficacy: A new possibility for educational
outcomes assessment. Journal of Social Work Education, 35, 463-476.
Holden, G., Barker, K., Rosenberg, G., & Onghena, P. (2008). The Evaluation Self-Efficacy Scale for assessing progress
toward CSWE accreditation related objectives: A replication. Research on Social Work Practice, 18, 42-46.
Holden, G., Meenaghan, T., Anastas, J., & Metrey, G. (2002). Outcomes of social work education: The case for social
work self-efficacy. Journal of Social Work Education, 38, 115-133.
Long Island University. (n.d.). LIU MSW combined program summary of learning outcome assessment AY
11-12. Retrieved from http://www.liu.edu/~/media/Files/CWPost/Academics/SHPN/LIU_Post_SummaryLearning.
ashx-2012-08-08
Nichols, J. O., & Nichols, K. W. (2005). A road map for improvement of student learning and support services through
assessment. New York, NY: Agathon Press.
PACAT, Inc. (n.d.). Area Concentration Achievement Test. Retrieved from http://www.collegeoutcomes.com
Price, B. A., & Randall, C. H. (2008). Assessing learning outcomes in quantitative courses: Using embedded questions
for direct assessment. Journal of Education for Business, 83(5), 288-294.
Regehr, G., Bogo, M., Regehr, C., & Power, R. (2007). Can we build a better mousetrap? Improving the measurements
of practice performance in the field practicum. Journal of Social Work Education, 43, 327-344.
Smith, C. A., Cohen-Callow, A., Hall, D. M. H., & Hayward, R. A. (2007). Impact of a foundation-level MSW research
course on students' critical appraisal skills. Journal of Social Work Education, 43, 481-495.
Suskie, L. (2009). Assessing student learning: A common sense guide. San Francisco, CA: Jossey-Bass.