Predella Research Group

Recommendations for Program Improvement Early Alert Metrics
for Shasta College

Version 2.0
Background
The purpose of this report is to recommend quantitative metrics (measures, thresholds, and weights) to be used to set early alert criteria for instructional programs¹ for the Program Improvement Committee (PIC) at Shasta College. These metrics were developed in consultation with the Academic Senate for California Community Colleges' position paper Program Discontinuance: A Faculty Perspective; Shasta College's mission statement and institutional Student Learning Outcomes; and other California Community College resources for program review, improvement, and discontinuance.
All of the metrics outlined below should be readily available through the College’s student information
system, MIS data archives, and/or other locally maintained data systems. Per the recommendations of
the Statewide Academic Senate, these metrics:
• are clearly stated,
• contain uniform measures applied to all programs,
• are based on trends over time, when applicable, and
• relate to program goals and the mission of the college.
These recommendations were developed to be straightforward yet meaningful measures of instructional programs for faculty and administrators. The key performance indicators outlined in the subsequent section are not intended to replace the comprehensive instructional program review process; instead, they speak directly to criteria to consider for "early alert" about a program's access, productivity, and outcomes. These quantitative metrics should be contextualized by qualitative narratives, where appropriate.
¹ Instructional programs are locally defined.
Early Alert Scoring Process
Scoring programs for early alert of "at risk" status is operationalized into eleven² metrics that measure enrollment/access/demand, productivity/efficacy, or student outcomes. Table 1 summarizes the categories of risk factors, the metrics to consider, the threshold measures to use, and the importance ratings. Program-level data for each metric will be used to determine if the program is above or below the recommended threshold measure. Points are accumulated toward "at risk" status when a program does not meet the recommended threshold measure for a metric.
Table 1. Early Alert Scoring Summary

| Category                 | Metric                 | Threshold Measure         | Importance Rating for Risk Factor |
| Enrollment/Access/Demand | WSCH                   | +/- 10% comparison        | 3 |
| Enrollment/Access/Demand | Enrollment Head Count  | +/- 10% comparison        | 2 |
| Enrollment/Access/Demand | Enrollment Seat Count  | +/- 10% comparison        | 2 |
| Enrollment/Access/Demand | Section Count          | +/- 10% comparison        | 1 |
| Productivity/Efficacy    | Cost per FTES          | cost > FTES revenue       | 3 |
| Productivity/Efficacy    | WSCH per FTEF          | < 500                     | 3 |
| Productivity/Efficacy    | Capacity Rate          | < 80%                     | 1 |
| Student Outcomes         | SLO Assessment Results | ----- (placeholder)       | ----- |
| Student Outcomes         | Success Rate           | - 10% comparison          | 2 |
| Student Outcomes         | Retention Rate         | - 10% comparison          | 1 |
| Student Outcomes         | Awards Count           | - 10% comparison          | 1 |
|                          |                        | "Early Alert" Score Total | 19 |
If a program is below a threshold measure, it will be assigned points from the "Importance Rating for Risk Factor" column. The ideal situation for a program is to earn zero points on every metric, and the scale below can be used to summarize the risk factor and next steps for each program.

• Scores less than 3 = no "early alert" review necessary
• Scores from 3 to 5 = possible "early alert" program; PIC protocol to determine next step
• Scores from 6 to 9 = probable "early alert" program; PIC protocol to determine next step
• Scores above 9 = definite "early alert" program; PIC protocol to determine next step
The quantitative nature of this system will also allow programs to see whether their overall score is greater or less than scores from previous years. This secondary measure can help programs determine if, how, and in which categories they are improving.
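To make the scoring arithmetic concrete, the minimal Python sketch below implements the Table 1 logic. The weights and score bands are taken from this report; the pass/fail flags for the sample program are hypothetical.

```python
# Sketch of the Table 1 scoring logic. Metric weights and score bands come
# from this report; the pass/fail flags for the sample program are hypothetical.

# Importance rating (weight) for each scored metric, from Table 1.
WEIGHTS = {
    "WSCH": 3,
    "Enrollment Head Count": 2,
    "Enrollment Seat Count": 2,
    "Section Count": 1,
    "Cost per FTES": 3,
    "WSCH per FTEF": 3,
    "Capacity Rate": 1,
    "Success Rate": 2,
    "Retention Rate": 1,
    "Awards Count": 1,
}  # weights sum to the 19-point "Early Alert" score total

def early_alert_score(meets_threshold: dict[str, bool]) -> int:
    """Sum the weights of every metric whose threshold was not met."""
    return sum(w for m, w in WEIGHTS.items() if not meets_threshold[m])

def risk_band(score: int) -> str:
    """Map a total score to the review bands defined above."""
    if score < 3:
        return "no early alert review necessary"
    if score <= 5:
        return "possible early alert program"
    if score <= 9:
        return "probable early alert program"
    return "definite early alert program"

# Hypothetical program missing the WSCH and Capacity Rate thresholds:
flags = {m: True for m in WEIGHTS}
flags["WSCH"] = False
flags["Capacity Rate"] = False
score = early_alert_score(flags)
print(score, "->", risk_band(score))  # 4 -> possible early alert program
```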
² Ten metrics, plus a placeholder for Student Learning Outcomes assessment results.
Recommended Metric Overview
The metrics for early alert in the program improvement process fall into three categories (Figure 1). Adequate access to, effectiveness of, and outcomes for instructional programs are all necessary components to consider when measuring service to students. In what follows, each of the individual metrics identified in Figure 1 is defined, the rationale for its inclusion is explained, a threshold measure is recommended, and a weight is assigned to account for relative importance.
Figure 1. Program Improvement Metrics
*SLO Assessment Results are included as a placeholder at this time. As SLO assessment outcomes tracking becomes institutionalized, Shasta College can revisit the inclusion of this metric in measuring student outcomes/results.
Some of the recommended measures are fixed and some are comparative. When a measure is fixed, a specific threshold is stated with accompanying rationale for the number provided. When a measure is
comparative, a specific methodology for comparison is provided. The reason for including both fixed and
comparative measures is to account for the diverse and comprehensive nature of metrics used in this
process; some metrics are best considered in light of past performance (e.g., enrollment counts) and
some metrics are basic standards of service that should be at a certain level regardless of past
performance (e.g., capacity rates).
Weight/Relative Importance ranks are based on a three-point scale, where 1 = somewhat important, 2 = important, and 3 = very important. These rank numbers are the basis for the scoring system (see Table 1) that has been developed for the early alert identification process. When a program does not meet the threshold set for a particular metric, it is assigned the weight/relative importance rank of that metric as its score (e.g., a program not meeting the WSCH threshold is assigned a score of 3).
ENROLLMENT/ACCESS/DEMAND
WSCH
Operational Definition
WSCH: Weekly Student Contact Hours (WSCH) is a measure of apportionment, whereby student
enrollment and weekly instructional contact hours are multiplied. This is an intermediary step in
calculating FTES, as apportionment is allocated based on student contact. Any course accounting
method (daily census, positive attendance, etc.) can be converted to WSCH to use as the
standardized metric.
Rationale for Inclusion
WSCH: This measure takes into account enrollment as well as classroom hours and is a more
comprehensive measure than just looking at enrollment counts, section counts, or other
individual metrics. WSCH is widely used³ as a standard measure of program demand.
Threshold
WSCH: The threshold for this metric is a 10% increase or decrease⁴ in WSCH when comparing the WSCH of one term to the average aggregated WSCH of the previous three terms.
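A minimal sketch of this comparative check, assuming term-level WSCH totals are already at hand; the figures are hypothetical:

```python
# Comparative threshold used for WSCH (and the other +/- 10% metrics):
# compare one term to the average of the previous three terms.

def outside_ten_percent(current: float, previous_three: list[float]) -> bool:
    """True if the current term is more than 10% above or below the
    three-term average, i.e., the program does not meet the threshold."""
    baseline = sum(previous_three) / len(previous_three)
    return abs(current - baseline) > 0.10 * baseline

# Hypothetical WSCH totals: the previous three terms average 1,000 hours.
print(outside_ten_percent(870, [980, 1000, 1020]))  # True: a 13% decline
```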
Weight/Relative Importance
WSCH: Importance rating for risk factor = 3; very important.
Enrollment Counts
Operational Definition
Enrollment Head Count: Unduplicated count of students enrolled in a program at first census⁵.
Enrollment Seat Count: Duplicated count of student enrollments in a program at first census.
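The difference between the two counts is easiest to see in code. A minimal sketch, assuming first-census enrollment records are available as (student ID, section) pairs; the records shown are hypothetical:

```python
# Head count vs. seat count from first-census enrollment records.
# Each record is a (student_id, section) pair; the data are hypothetical.
enrollments = [
    ("S1", "MATH-1A"), ("S1", "ENGL-1A"),  # one student occupying two seats
    ("S2", "MATH-1A"),
    ("S3", "ENGL-1A"),
]

seat_count = len(enrollments)                      # duplicated count: 4
head_count = len({sid for sid, _ in enrollments})  # unduplicated count: 3
print(head_count, seat_count)
```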
Rationale for Inclusion
Enrollment Head Count: Head count enrollment is a fundamental measure of access and demand for a program. Head count is an indicator of waxing or waning student interest in the program.
Enrollment Seat Count: Enrollment seat count is also a fundamental measure of access and demand for a program. Reviewing seat count along with head count is useful because it speaks to student interest in the program, as well as the overall enrollment contribution (via apportionment) to the College.
³ The RP Group for CCCs includes WSCH as a key indicator for program review and evaluation.
⁴ Some thresholds are set using +/- change to account for possible review of programs that are either growing or shrinking substantially in a short period of time. Program growth can be as concerning as decline if the growth is not planned or managed well.
⁵ Census date is a traditional date used to capture enrollments because it is the date on which apportionment figures are computed for most courses; however, some programs may have core courses that are not traditionally scheduled and use alternative accounting methods. The College will need to determine whether the first census date or the end-of-term enrollments are most appropriate to use. If end-of-term data are used, all students receiving a valid grade that will be submitted with MIS data should be included in the count.
Threshold
Enrollment Head Count: The threshold for this metric is a 10% increase or decrease in head
count when comparing the head count enrollment of one term to the average aggregated head
count enrollment of the previous three terms.
Enrollment Seat Count: The threshold for this metric is a 10% increase or decrease in seat count
when comparing the seat count enrollment of one term to the average aggregated seat count
enrollment of the previous three terms.
Weight/Relative Importance
Enrollment Head Count: Importance rating for risk factor = 2; important.
Enrollment Seat Count: Importance rating for risk factor = 2; important.
Section Counts
Operational Definition
Section Count: Unduplicated count of course sections offered within a program.
Rationale for Inclusion
Section Count: A secondary measure of program demand; a change in the number of sections offered is an indicator of student demand for courses.
Threshold
Section Count: The threshold for this metric is a 10% increase or decrease in sections when
comparing the number of sections offered in one term to the average aggregated number of
sections offered in the previous three terms.
Weight/Relative Importance
Section Count: Importance rating for risk factor = 1; somewhat important.
PRODUCTIVITY/EFFICACY
Cost per FTES
Operational Definition
Cost per FTES: The overall operating cost⁶ of a program divided by the FTES generated by the
program will yield how much funding it takes to generate one FTES. Since courses defined within
a program may not be mutually exclusive of courses required in other programs, two
methodology options for calculating program costs are offered: 1) if accurate and available, use
the budget string program codes to identify the direct costs of a program, or 2) calculate and
use estimates of personnel costs based on respective averages for full-time and part-time
instructor costs, plus other direct expenses based on the estimated proportion of a shared budget.
⁶ Direct costs are typically available through the Business Office's standard accounting methodology, but indirect costs are somewhat of a challenge to capture. Indirect costs can be calculated using a multiplier supplied by the Business Office to estimate overhead expenses. This figure is different for each institution, but is usually available in the form of direct cost + x% overhead = total operating costs.
Rationale for Inclusion
Cost per FTES: Cost-effectiveness should not be the only consideration in determining program improvement status, but it is an important metric to consider when using state or federal funds to offer a program. Multiplying a program's FTES by the College's average reimbursement rate (around $4,500) yields its FTES revenue, which can then be compared with its cost as a cost-to-revenue ratio.
Threshold
Cost per FTES: Any program that costs more than what it generates in revenue from FTES (or other revenues) does not meet the threshold for this metric⁷.
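A minimal sketch of this cost-to-revenue check; the $4,500 average reimbursement rate comes from the rationale above, while the program's cost and FTES figures are hypothetical:

```python
# Cost-to-revenue check for the Cost per FTES metric. The ~$4,500 average
# reimbursement rate is the figure cited above; the program's operating
# cost and FTES are hypothetical.

REIMBURSEMENT_PER_FTES = 4500  # College's average apportionment per FTES

def meets_cost_threshold(operating_cost: float, ftes: float) -> bool:
    """True if FTES revenue covers the program's operating cost."""
    revenue = ftes * REIMBURSEMENT_PER_FTES
    return operating_cost <= revenue

# Hypothetical program: $310,000 operating cost, 64 FTES generated.
print(meets_cost_threshold(310_000, 64))  # False: revenue is only $288,000
```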
Weight/Relative Importance
Cost per FTES: Importance rating for risk factor = 3; very important.
WSCH per FTEF
Operational Definition
WSCH per FTEF: Weekly Student Contact Hours [for courses with any accounting method, WSCH can be derived by multiplying the semester FTES by 525 and dividing by the term-length multiplier (i.e., 17.5 weeks)] divided by the Full Time Equivalent Faculty number (the portion of a full-time load that each class represents) yields how many contact hours each unit of faculty load generates. Divide WSCH by FTEF to compute this productivity measure. For example, a 3-unit lecture class that meets for 3 hours a week with 34 students enrolled will yield 102 weekly contact hours. If the faculty load for this class is 20% (a full load being five 3-unit lecture classes), the WSCH per FTEF calculation is 102/0.2 = 510.
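The worked example above, expressed as a short sketch:

```python
# WSCH per FTEF, reproducing the worked example above: a 3-unit lecture
# meeting 3 hours/week with 34 students, taught as 20% of a full load.

def wsch_per_ftef(enrollment: int, weekly_hours: float, load_fraction: float) -> float:
    """Weekly student contact hours generated per full-time-equivalent faculty."""
    wsch = enrollment * weekly_hours  # 34 students x 3 hours = 102 WSCH
    return wsch / load_fraction       # 102 / 0.2 = 510

print(wsch_per_ftef(34, 3, 0.20))  # 510.0 -- meets the 500 threshold
```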
Rationale for Inclusion
WSCH per FTEF: This measure is commonly used by Chief Instructional Officers and is acknowledged by the RP Group as a measure of productivity.
Threshold
WSCH per FTEF: The threshold for this metric is 500⁸; if WSCH/FTEF is less than 500, a program
does not meet the threshold.
Weight/Relative Importance
WSCH per FTEF: Importance rating for risk factor = 3; very important.
⁷ By design and nature, some programs will not meet this threshold; however, if the program serves a community need (e.g., a clinical nursing program), a supplemental narrative can articulate the particular circumstances of the program.
⁸ This figure roughly translates into around 33 students enrolled at census in a 3-unit lecture class with a weekly census accounting method. Note that some programs may have courses, like labs or clinicals, that restrict enrollment below this threshold. If these programs are flagged in this process, a supplemental narrative can address this issue.
Capacity Rate
Operational Definition
Capacity Rate: Also known as "fill rate," which is calculated by taking the number of students
enrolled at first census and dividing by the maximum course enrollment. This metric is an
indicator of how full a class is, and the efficacy of this measure is contingent on having
accurately managed max course enrollments. Compute fill rates based on first census
enrollment for weekly census classes.
Rationale for Inclusion
Capacity Rate: Fill rates can indicate that a course is operating at, above, or below capacity.
This is an important metric to consider when evaluating a program.
Threshold
Capacity Rate: The threshold for this metric is 80%⁹ of max capacity per the term schedule documentation; programs with less than an 80% fill rate do not meet the threshold.
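A minimal sketch of the fill-rate check; the enrollment figures are hypothetical:

```python
# Fill (capacity) rate against the 80% threshold; figures are hypothetical.
def meets_capacity_threshold(census_enrollment: int, max_enrollment: int) -> bool:
    """True if the section is at least 80% full at first census."""
    return census_enrollment / max_enrollment >= 0.80

print(meets_capacity_threshold(27, 35))  # False: about a 77% fill rate
```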
Weight/Relative Importance
Capacity Rate: Importance rating for risk factor = 1; somewhat important.
STUDENT OUTCOMES
Success Rate
Operational Definition
Success Rate: Percent of students successfully completing courses within a program, yielding a measure indicative of student learning. Use grades of A, B, C, and CR in the numerator and all valid grades, including these, in the denominator.
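A minimal sketch of the success-rate calculation under this definition; the grade list is hypothetical:

```python
# Success rate from end-of-term grades; the grade list is hypothetical.
SUCCESS_GRADES = {"A", "B", "C", "CR"}  # numerator grades per this report

def success_rate(grades: list[str]) -> float:
    """Share of all valid grades that are successful completions."""
    return sum(g in SUCCESS_GRADES for g in grades) / len(grades)

grades = ["A", "A", "B", "B", "C", "C", "CR", "D", "F", "W"]
print(f"{success_rate(grades):.0%}")  # 70%
```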
Rationale for Inclusion
Success Rate: The success of students is a value stated in the mission of the College, and the
aggregate success rate of a program is an essential metric to consider the effectiveness of a
program.
Threshold
Success Rate: There is no published statewide threshold for a minimal or optimal student success rate. However, by comparing a program's success rate from one term to the average aggregated success rate of the previous three terms, the College will become aware of any major changes in this metric. The threshold for success rate is set at a 10% decrease in success for the compared terms.¹⁰
Weight/Relative Importance
Success Rate: Importance rating for risk factor = 2; important.
⁹ The RP Group for CCCs acknowledges that there is an expected attrition rate (about 10%) by the first census date. The 80% capacity rate recommended here is based on a figure that is twice the expected 10% attrition rate.
¹⁰ Note: since success rates are based on grades, and since a letter grade of 'C' is the culturally accepted minimum passing grade, the College could set a static threshold for success rate of at least 75% (the 'C' grade midpoint) in lieu of the comparative threshold depicted in this recommendation.
Retention Rate
Operational Definition
Retention Rate: Percent of students retained in program courses to the end of the term out of the total enrolled in program courses. Use grades of A, B, C, D, CR, and I in the numerator¹¹ and all valid grades, including these, in the denominator.
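A minimal sketch of the retention-rate calculation under this definition; the grade list is hypothetical:

```python
# Retention rate from end-of-term grades; the grade list is hypothetical.
RETAINED_GRADES = {"A", "B", "C", "D", "CR", "I"}  # numerator per the RP Group

def retention_rate(grades: list[str]) -> float:
    """Share of all valid grades indicating retention to end of term."""
    return sum(g in RETAINED_GRADES for g in grades) / len(grades)

print(f"{retention_rate(['A', 'B', 'B', 'C', 'D', 'CR', 'I', 'F', 'W', 'W']):.0%}")  # 70%
```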
Rationale for Inclusion
Retention Rate: Retention of students in a program is a prerequisite to student success.
However, since students drop courses for a variety of reasons, some of which the College has no
control over, this measure is included as a metric only in a comparative context.
Threshold
Retention Rate: The threshold for this metric is a 10% decrease in retention when comparing the
retention rate of one term to the average aggregated retention rates of the previous three
terms.
Weight/Relative Importance
Retention Rate: Importance rating for risk factor = 1; somewhat important.
Degree/Certificate Count
Operational Definition
Degree/Certificate Count: Unduplicated count of degrees and certificates awarded from a
program.
Rationale for Inclusion
Degree/Certificate Count: A secondary measure of program outcomes; a decline in the number of awards conferred is an indicator of waning student success.
Threshold
Degree/Certificate Count: The threshold for this metric is a 10% decrease in awards when
comparing one academic year to the average aggregated award counts of the previous three
years.
Weight/Relative Importance
Degree/Certificate Count: Importance rating for risk factor = 1; somewhat important.
Recommended Placeholder for Future Metric: Student Learning Outcome
Assessment Results
Operational Definition
SLO Assessment Results: Aggregate results of course-level SLOs from faculty for courses within a
program, or, if available, summative program SLO results to yield a direct measure of student
learning.
¹¹ The RP Group for CCCs uses A, B, C, D, CR, and I for the numerator in this metric, which is recommended for comparing retention rates between departments within a college. The CCCCO has a broader definition for the numerator in calculating student retention in the Data Mart: A, B, C, D, F*, CR, NC, I*, P, NP. This methodology was developed for the Partnership for Excellence accountability measures and is recommended for use in comparing colleges to colleges and colleges to statewide retention figures.
Rationale for Inclusion
SLO Assessment Results: SLO Assessments are a direct measure of student learning, and are
explicitly tied to accrediting language.
Threshold
SLO Assessment Results: There is no published statewide threshold for minimal or optimal SLO results. However, since SLO assessment results are numeric, they can be quantified as percentages. These percentages can be interpreted as "grades," so the recommended threshold for SLO assessment results is at least 75% (the 'C' grade midpoint). Programs with SLO assessment results of less than 75% do not meet the threshold.
Weight/Relative Importance
SLO Assessment Results: Importance rating for risk factor = 3; very important.
Summary of Weighted Metrics
Each metric is described above with regard to its relative importance in calculating a program's "early alert" status. These metrics are graphically depicted in Figure 2.
Figure 2. Weighted Risk Factor Summary
[Figure omitted: metrics grouped by weighted risk factor of 3x, 2x, and 1x.]
*SLO Assessment Results are included as a placeholder at this time. As SLO assessment outcomes tracking becomes institutionalized, Shasta College can revisit the inclusion of this metric in measuring student outcomes/results.
Resources Consulted
The Academic Senate for California Community Colleges. Program Discontinuance: A Faculty Perspective. Spring 1998.
The Academic Senate for California Community Colleges. Program Review: Setting the Standard. Spring 2009.
CCCCIO and CCCAOE. Program Discontinuance: Sample Procedures. www.asccc.org/Events/VocEd/2007/Program_Discontinuance_Models.doc. Accessed 7/01/2010.
The Center for Student Success. Inquiry Guide: Maximizing the Program Review Process. http://www.rpgroup.org/sites/default/files/BRIC%20Inquiry%20Guide%20-%20Program%20Review.pdf. Accessed 7/20/2010.
The Research and Planning Group for California Community Colleges. Institutional Research Operational Definitions document. http://ftp.rpgroup.org/documents/OperationalDefs-RPGroup_Approved.pdf. Accessed 7/15/2010.