
CHANGING SUBJECTIVE RATINGS
TO OBJECTIVE SCORES FOR
RANKING GRADUATE APPLICANTS
Stephen D. Oller, Ph.D.
Associate Professor
&
Lydia “Odette” Gonzalez, M.S., CCC-SLP
Clinic Director
Objectives
 Participants will be able
 to describe multiple alternatives to the reference letter when making admission decisions.
 to list pros and cons for each of the alternative activities discussed.
 to rank the alternative activities discussed according to their own program's individual needs and abilities.
Motivations for Change
1. Select graduate students who will be successful.
2. Minimize cost and time for both faculty and
students in reviewing applications.
3. Determine measures that are objectively based.
4. Provide opportunities for “second-chance”
students.
The Old Way at TAMUK
 The program used a composite score based on:
 GPA
 Major GPA
 GRE scores
 Reference letters
 More or less a standard approach. It favors outstanding four-year academic performers but puts “second-chance” students at a disadvantage.
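A composite score of this kind can be sketched as a weighted sum of normalized components. This is only an illustration: the weights, scales, and the `composite_score` name below are hypothetical, not the program's actual formula.

```python
# Hypothetical weights for the four components; the source does not
# state how the program actually weighted them.
WEIGHTS = {"gpa": 0.3, "major_gpa": 0.3, "gre": 0.3, "letters": 0.1}

def composite_score(gpa, major_gpa, gre_percentile, letter_rating):
    """Combine components, each normalized to a 0-1 range, into one score."""
    parts = {
        "gpa": gpa / 4.0,               # overall GPA on a 4.0 scale
        "major_gpa": major_gpa / 4.0,   # major GPA on a 4.0 scale
        "gre": gre_percentile / 100.0,  # GRE as a percentile rank
        "letters": letter_rating / 5.0, # e.g., a 1-5 reviewer rating
    }
    return sum(WEIGHTS[k] * parts[k] for k in WEIGHTS)
```

Because every component is scaled to the same 0-1 range before weighting, a perfect applicant scores 1.0 regardless of how the weights are split.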
“Second-Chance” Students
 This may not be the best name; however, these are students who may have:
 had very successful junior and senior years after questionable freshman and sophomore performances.
 changed majors.
 graduated, lived life, and come back for a second career, or a more dedicated career.
Characteristics of “Second-Chance” Students
 These characteristics reflect faculty observations at TAMUK:
 Overall GPAs less than 3.0
 CSD GPAs of 3.2 or better
 Five or more years of college work
 More interaction and participation than typical four-year students
 Demonstrated maturity
How to Fairly Account for Changes in Academic Performance Over Time
 We have found that these “second-chance” students make good graduate students and clinicians. But how do we make objective determinations of performance capabilities?
 GPA scoring modifications:
 Include the CSD GPA, though it accounts for only 36-45 hours of an often 150+ hour transcript
 Calculate a GPA over the last 60 hours
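The last-60-hours idea can be sketched as a small routine. The `last_60_hours_gpa` name and the (credit hours, grade points) transcript format are assumptions for illustration, not the program's actual implementation.

```python
def last_60_hours_gpa(transcript):
    """Compute a GPA over the most recent 60 credit hours.

    `transcript` is a chronological list of (credit_hours, grade_points)
    tuples, with grade_points on a 4.0 scale (A=4.0, B=3.0, ...).
    The oldest entry that crosses the 60-hour boundary is pro-rated.
    """
    remaining = 60.0
    quality_points = 0.0
    hours_counted = 0.0
    for hours, points in reversed(transcript):
        take = min(hours, remaining)       # count only hours within the window
        quality_points += take * points
        hours_counted += take
        remaining -= take
        if remaining <= 0:
            break
    return quality_points / hours_counted if hours_counted else 0.0
```

For a student with a weak start and a strong finish, say 30 hours at 2.0 followed by 30 at 3.5 and 30 at 4.0, the overall GPA is about 3.17 but the last-60-hours GPA is 3.75, which is the change in trajectory this measure is meant to capture.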
The Unscripted Reference Letter
 How to score an unscripted reference letter?
 Existence…
 Length…
 Content
 But what rubric should be used?
 Do the qualifications of the writer provide additional merit?
The Scripted Reference Letter
 Use of Likert scales to evaluate dimensions such as:
 Trustworthiness
 Clinical aptitude
 Academic aptitude
 Etc.
 Place for comments
 Overall rating: Highest to Lowest Recommendation
Interviewing
 We experimented with including interviews.
 What we found:
 Very similar to reference letters, except that our own faculty, rather than outside writers, generated the subjective ratings.
 High resource cost.
 Additionally, interviews lacked writing samples.
Use of Resources
 Faculty/staff time to schedule interviews
 Time to conduct interviews
 Time to evaluate/discuss interviews
 Students' time, and possibly travel, to be interviewed
Writing Samples
 What should candidates write about?
 Clinical
 Critical evaluation
 Academic assessment
 The list goes on…
 How do we score samples?
 Subjectivity in setting up the scoring system
 Subjectivity in scoring the samples
Where We Are Now
 In Spring 2014 we did away with interviews and letters.
 Instead we opted for a short questionnaire asking for evidence of participation in the following:
 1. Have you held any organizational leadership roles, e.g., officer in a campus organization, officer in an off-campus organization, etc.?
 2. Did you participate in any academic programs outside your normal curriculum, e.g., Honors, McNair?
 3. Did you participate in any collegiate activities or programs, e.g., Forensics, Athletics, Theater, etc.?
 4. Have you presented research at a national (e.g., ASHA), state (e.g., a state speech and hearing association), or regional conference? Attach a copy of the program(s) where your name appears.
 5. Were you an author on a published or accepted academic manuscript? List the reference.
Justification and Scoring
 There is obviously still some subjectivity involved in choosing and ranking the activities. However, we have moved to a binary participated/no-evidence decision when awarding points.
 These are activities that we, as a faculty, deemed valuable experiences for our graduate students.
 We continue to refine the scoring.
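The binary participated/no-evidence rule can be sketched as follows. The activity keys and point values here are hypothetical placeholders; the actual values are chosen, and still being refined, by the faculty.

```python
# Hypothetical point values for the five questionnaire activities.
ACTIVITY_POINTS = {
    "leadership_role": 1,
    "academic_program": 1,
    "collegiate_activity": 1,
    "research_presentation": 2,
    "publication": 2,
}

def questionnaire_score(responses):
    """Binary scoring: an activity earns its full points only when the
    applicant supplies evidence of participation; otherwise it earns zero."""
    return sum(points for activity, points in ACTIVITY_POINTS.items()
               if responses.get(activity, False))
```

The point of the all-or-nothing rule is that reviewers never grade the quality of an activity, only whether evidence of participation exists, which is what removes the subjectivity from the scoring step itself.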
Issues
 The list of activities is not comprehensive.
 Value of activities is still subjectively determined.
 Confusion on the part of respondents regarding how to
answer some questions.
 Most importantly: over the last five years our PRAXIS pass rate and student completion rate have not changed significantly, i.e., even before the changes in the admission process our students were succeeding and meeting our goals and the CAA goals for MS students.
Is It Worth It?
Pros
 While subjectively chosen, all admission criteria are objectively scored.
 Minimized processing time for candidates: the questionnaire is faster to score than reference letters, and much faster than interviews.
 Second-chance students have additional measures of evaluation to be more competitive.
Cons
 There has not been a drastic improvement in overall performance in the graduate program according to the measures in place (time to completion, PRAXIS pass rate, job placement).
 Finer-grained academic tracking measures will be needed, e.g., productivity of student publishing/presenting, PRAXIS scores, increased extra-curricular activity at the graduate level, etc.
Conclusion
 What we have seen is that our current method of applicant evaluation gives second-chance students a slight advantage over previous scoring methods.
 There may be some clinical utility, in that the diversified student is arguably more prone to benefit from instruction, and more adaptable to changing therapeutic needs.
 The time burden of applicant selection for faculty has been significantly reduced.
 Comments? Suggestions?