Chapter 9:
Demystifying the Program Review Process:
Ensuring Quality Control, Fostering Faculty Development, and Motivating Faculty
Dacia Charlesworth, Ph.D.
Purpose and Preview
For most faculty, the task of developing and implementing an assessment program, either
at the institutional or departmental level, seems daunting. This is especially true when
faculty are asked to develop and implement assessment plans in addition to their regular
duties of teaching, scholarship, and service. While some institutions may count assessment work
as university/college or departmental service, the category of service often is relegated to a much
lower status compared to teaching and scholarship; thus, even when faculty are rewarded for
participating in assessment, the reward often carries less weight than work completed within the
categories of teaching and scholarship. This lack of reward fosters some faculty members’ sense
of frustration with the assessment process and may hinder faculty buy-in. Despite the lack of
a strong reward structure, faculty recognize that assessment has become the mandated norm in
higher education and must be performed.
As an integral part of the assessment process, program reviews should be of utmost
importance to faculty members since, as Angelo (1995) predicts, assessment will help us to
create a shared academic culture committed to improving the quality of higher education. A
program review is defined as the “… assessment of a program using specific and systematic
procedures which result in findings that are useful to decision makers for the purpose of helping
them better shape and achieve their goals” (Hendricks, 1992-93, p. 66). Program reviews can be
useful in allowing departments to demonstrate their strengths and discuss their needs; however,
they also can be intimidating because they expose departmental weaknesses.
Program reviews serve an important purpose in the overall assessment process and are
required by institutions and accrediting associations alike. Some of the most challenging aspects
of programmatic reviews are determining which assessment measures to use and formulating
program review templates. In determining templates and measures, it is vital to note that a
program review must consist of more than a collection of data; a program review allows the
department to “gain added insight into such matters as the nature of its faculty’s workload, its
scholarly productivity and the nature of and basis for its program demand” (Cole, 1996, p. 7).
The purpose of this chapter is to provide meaningful information about the process of program
review so that communication departments can avoid replicating efforts and will be able to
conduct successful program reviews. First, the value and role of program reviews within higher
education are discussed. Second, recommendations for and limitations of conducting program
reviews are presented. Next, the types of data that can be obtained from program reviews are
examined. Finally, some challenges and benefits of the program review process are explored.
Value and Role of Program Reviews
As noted above, program reviews play an integral role in the assessment process, with
data and results derived from program reviews often forming the basis of institution-wide
assessment reports. At its best, a successful program review “assesses learning and academic
achievement of students, identifies methods of instruction to enhance student learning, and
provides a comprehensive tool for review of appropriate program information from alumni,
employers, graduate schools, and students” (Hugenberg, 1997, pp. 3-4). In discussing the value
and role of program reviews, three areas of interest emerge: the primary functions of program
reviews, the fundamental tenets of assessment, and the benefits of program reviews.
Primary Functions of Program Reviews
Sergiovanni (1987) identifies three primary roles and subsequent values of conducting
program reviews. The first role is ensuring quality control. Program reviews ensure that the
goals of the program are consistent with the program’s mission and values. The second
important function of program reviews is aiding in professional development. Program reviews
allow individuals involved in developing and implementing assessment programs to grow
personally and professionally by continually expanding and enhancing their own knowledge,
especially within their own discipline. The last role program reviews perform is motivating
individuals involved in developing and implementing the assessment plan. In addition to
contributing to faculty professional development, conducting a program review builds and
nurtures faculty motivation and commitment to the departmental mission and goals, as well as to the
program itself. Moreover, by addressing the success of a program, program reviews inform
future planning decisions. Essentially, “A well constructed and conducted evaluation is a service
to the organization, its stakeholders and its clients. Evaluation serves the needs of a program by
providing information that is useful for making practical choices regarding quality and
effectiveness” (Hendricks, 1992-1993, p. 65).
Fundamental Tenets of Assessment
For any aspect of assessment to be effective, it must be faculty-owned, be viewed as an
opportunity, make use of existing data, occur in a climate of trust, and have institutional
utility (Higgerson, 1993). To help guide departmental faculty in developing an assessment
program, which will comprise a significant portion of the program review, it is useful to review
the fundamental tenets of assessment put forward by the American Association of Higher
Education (AAHE) and the National Communication Association (NCA)—especially those
tenets relevant to those of us in the communication discipline. The following list combines the
nine principles offered by AAHE (Astin, n.d.) with the 10 hallmarks of a successful oral
communication assessment program espoused by NCA (n.d., Hallmarks). The following 11
tenets should be considered by faculty developing a departmental assessment plan.
Successful communication assessment practices work best when they:
1. Seek to improve programs via clear, explicitly stated purposes stemming from
the institution’s mission and departmental goals.
2. Attend to outcomes but also, and equally, to the experiences that lead to those
outcomes.
3. Make a difference by focusing on issues of use and illuminating questions
people really care about.
4. Are ongoing, not episodic.
5. Are marked by faculty ownership and responsibility.
6. Reflect an understanding of learning as multidimensional, integrated, and
revealed in performance over time.
7. Are part of a larger set of conditions that promote change (i.e., institutional
support for assessment exists).
8. Foster wider improvement by involving representatives from across the
educational community.
9. Rely on multiple measures and are cost-effective.
10. Provide feedback to the students and the instructors.
11. Include a process for evaluating the assessment program.
Benefits of Conducting Program Reviews
Understandably, the process of conducting a program review might seem intimidating:
The department is opening itself up to criticism and must face inadequacies that may appear
within the curriculum; some faculty might even fear that the assessment process will single out
their performance in the classroom and as scholars. It is important to note, however, that the
many benefits accompanying a program review outweigh such concerns. As Hugenberg notes:
“A comprehensive assessment program will lead to the ongoing development of excellence in
communication departments in pursuit of the college or university’s mission” (1997, p. 3).
Moreover, clearly outlined assessment reports with comprehensive program reviews demonstrate
that departments know the parameters within which to work and, subsequently, enable
departments to reap rewards; assist departments in securing more institutional resources; and,
when they involve external reviewers who include input from faculty and administrators beyond
the departments under review, inform departments how they are perceived on campus (Haley &
Jackson, 1994, p. 5).
The National Communication Association (n.d., Guidelines) also explores how
assessment may benefit students, institutions, and faculty. When assessment is conducted
properly, students receive a more dynamic and enhanced education. In addition, receiving
feedback concerning their performance on measured outcomes allows students to monitor their
own achievements. Institutions obviously benefit by ensuring a unified mission and all members
of the institution gain a better understanding of what students are expected to learn. Finally,
assessment can lead to positive reform that can lead to a more committed and enthusiastic
faculty. Combining these benefits with the possibilities for enhancing faculty professional
development and increasing faculty motivation, faculty should view program reviews as an
opportunity to highlight the strengths of their programs and strongly influence the future of their
programs, while simultaneously improving student learning and their institutions in general.
Application
If conducted properly, program reviews have the potential to benefit departments and
their faculty. This section offers guidelines for conducting program reviews, suggestions for
administering program reviews, limitations of program reviews, and templates for conducting
program reviews.
Guidelines for Conducting Program Reviews
As communication scholars, we realize how vital it is that members of an organization
understand the purpose of a task before it is undertaken; the same is true for program reviews.
As Hendricks notes, “Because of widely disparate and sometimes conflicting intentions within
an organization[,] it is imperative for all stakeholders concerned to be clear regarding the actual
purpose [of] the program review process” (1992-93, p. 67). Questions that departmental members
should answer before conducting a program review include the following:
• What is the purpose of the program review?
o What do we want to accomplish or find out?
o How will the program review results help us in this regard?
• Who will conduct the program review?
o Who can accomplish the stated purpose in an efficient and persuasive manner?
• What form will the finished report take and who will see it?
o What resulting action will be taken and who will be responsible for implementation and follow-up?
In some cases, the institution will provide answers to these questions; however, it is imperative
that the department head or the individual overseeing the assessment program review develop
answers to these questions and communicate those answers to the entire departmental faculty.
Once the questions above have been answered, and before a program review occurs, the
department needs to focus on developing or revisiting the departmental and/or programmatic
assessment plan. The National Communication Association (n.d., Departmental) offers the
following guidelines for conducting program reviews:
• Assessment programs must focus on all academic programs, both undergraduate and graduate.
• Assessment program goals should be reviewed/developed by all faculty members.
• Assessment plans should use a conceptual framework for assessment: cognitive, affective, and behavioral. Student knowledge, attitudes, and skills should be assessed.
• Assessment goals must be evaluated by an appropriate assessment technique (although one technique may address several goals). Every assessment technique must generate information and findings. Findings must have an interpretation and be of use in some relevant way.
• Assessment plans should include multiple measures to address the three domains of the conceptual framework (cognitive, affective, and behavioral), as illustrated in the sketch following this list.
• Assessment plans should be reevaluated after each cycle and, based upon assessment findings, units should determine changes to be made to teaching, learning, and the curriculum.
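To make the multiple-measures guideline concrete, the following minimal Python sketch checks whether a draft plan leaves any domain uncovered. The measure names and domain tags are hypothetical illustrations, not instruments prescribed by NCA or AAHE.

    REQUIRED_DOMAINS = {"cognitive", "affective", "behavioral"}

    # Each hypothetical measure is tagged with the domain(s) it addresses.
    plan_measures = {
        "content exam (pre/post)": {"cognitive"},
        "videotaped speech evaluation": {"behavioral"},
        "communication apprehension survey": {"affective"},
        "senior portfolio": {"cognitive", "behavioral"},
    }

    def uncovered_domains(measures):
        """Return any required domain that no measure in the plan addresses."""
        covered = set().union(*measures.values())
        return REQUIRED_DOMAINS - covered

    missing = uncovered_domains(plan_measures)
    if missing:
        print("Plan gap; add measures for:", ", ".join(sorted(missing)))
    else:
        print("All three domains are addressed by at least one measure.")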
In addition to these guidelines, another consideration must be who will administer the
program review:
The key to the implementation of the program assessment process is to have all data
collection clearly assigned to responsible faculty members, with one person in charge of
assimilating all of the data. This allows everyone to be involved, responsible, and
accountable for program assessment while having it organized so that everything is done
in a timely, systematic manner. (Diers & Vendrely, 2002, p. 258)
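The assignment principle in the quotation above can be made concrete with a short Python sketch: every data-collection task has a named owner, and a single coordinator assimilates the results. All names, measures, and dates here are invented for illustration.

    from datetime import date

    coordinator = "Prof. Rivera"  # the one person in charge of assimilating all data

    collection_tasks = [
        {"measure": "senior survey", "owner": "Prof. Chen", "due": date(2025, 4, 15)},
        {"measure": "internship evaluations", "owner": "Prof. Okafor", "due": date(2025, 5, 1)},
        {"measure": "capstone rubric scores", "owner": "Prof. Davis", "due": date(2025, 5, 10)},
    ]

    def outstanding(tasks, today):
        """List tasks not yet collected so the coordinator can follow up with each owner."""
        return [t for t in tasks if t["due"] >= today]

    for task in outstanding(collection_tasks, date(2025, 4, 1)):
        print(f"{task['measure']}: {task['owner']} (due {task['due']:%B %d})")

Keeping the roster in one shared structure mirrors the quotation's point: everyone is responsible for a piece, but one person sees the whole.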
Most institutions delineate the roles individuals will play in program reviews. Usually, the dean,
department chairperson, department faculty, external reviewers, and faculty from other
disciplines participate in programmatic reviews, making it imperative that each person
understand his or her role in order for the process to operate smoothly. Using these guidelines,
departments should be able to conduct successful program reviews yielding useful information
for continuous quality improvement.
Suggestions for Conducting Program Reviews
Though the outcomes of program reviews vary institution by institution, the ways in
which program reviews are conducted are fairly standardized. The following steps provide
an outline for conducting a comprehensive program review.
Define/review/revise the departmental mission statement. Be sure that the
departmental or programmatic mission is interconnected with the institutional mission.
Develop departmental learning outcomes. When developing departmental learning
outcomes, both the National Communication Association and the North Central Association
support using outcomes that measure student learning within three domains: cognitive
(knowledge acquisition), behavioral (skills acquisition), and affective (attitudinal development).
The cognitive domain of learning is concerned with knowledge and understanding. At
the lowest level, this domain focuses on specific facts. At the middle level the cognitive domain
focuses on principles and generalizations. At the highest level of cognitive learning, the focus is
on synthesis and evaluation of what has taken place at the lower levels. This domain
encompasses the content of a field. Examples of measures that assess the cognitive domain
include pre- and post-tests of critical thinking and paper-and-pencil tests of cognitive content
essential to the discipline.
The behavioral domain of learning is concerned with psychomotor skills. Skills are
viewed as the ability of an individual to perform certain behaviors. Skills can be learned and
possessed by the learner, and then can be demonstrated through performance as observable
behaviors. This domain encompasses the ability to perform a specific task that demonstrates
cognitive learning has occurred. As such, inferences about cognition can be made through
observable behaviors. Examples of measures focusing on the behavioral domain include
evaluation of observed presentation skills, writing skills, and interpersonal skills, and often
involve pre- and post-program measures of skill acquisition or improvement.
The affective domain of learning is concerned with the attitudes and feelings of the
learner regarding knowledge and behaviors acquired in the other two domains. This domain
encompasses attitudes toward what has been learned cognitively and motivation to perform
learned behaviors. Examples of measuring the affective domain include student satisfaction
surveys, student reflections or journals, alumni surveys, and employer surveys. In many
academic environments, affective learning is incidental to both cognitive and behavioral
learning. Because communication departments are interested in their students’ apprehension
levels, willingness to communicate, and confidence in the use of their communication skills, the
affective domain should play a prominent role in assessment.
Develop (or review) the long-range plan based on learning outcomes. In order for the
departmental assessment plan to be successful, it must be systematic (carried on using step-by-step procedures and occurring at regular intervals), ongoing (occurring on a regular basis in
stages), and dynamic (marked by continuous discussion and change).
Determine the outcome(s) to assess, define performance criteria or standards, and
identify appropriate assessment methods. Referring to the fundamental tenets of assessment
discussed above, recall that multiple measures should be used to assess student outcomes.
Several options exist for assessing student learning; two of the most important include
formative/summative assessment and direct/indirect evidence. Suskie (2004) distinguishes
between formative and summative assessment by noting that formative assessment occurs during
the semester to improve teaching and learning (e.g., student reflections about the course, student
performances on exams), so students receive immediate feedback from the instructor.
Summative assessment occurs at the end of the semester or program to document student
learning (e.g., student portfolio, alumni surveys), so students may or may not receive feedback
regarding their performance. Faculty also need to determine which outcomes to assess by direct
and indirect evidence. Direct evidence includes student displays of knowledge learned (e.g., oral
presentations, writing samples), whereas indirect evidence indicates students are learning, but
evidence of what they are learning is less clear (e.g., placement ratings, student honors/awards/
scholarships) (Suskie, 2004, p. 95).
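Suskie's two distinctions can be combined into a simple classification, sketched below in Python. Each hypothetical measure is tagged by timing (formative or summative) and by evidence type (direct or indirect), and the tally shows whether the plan mixes all four combinations rather than relying on, say, only summative indirect evidence.

    from collections import Counter
    from dataclasses import dataclass

    @dataclass
    class Measure:
        name: str
        timing: str    # "formative" (during the term) or "summative" (at the end)
        evidence: str  # "direct" (displays of learning) or "indirect"

    measures = [
        Measure("in-class speech with instructor feedback", "formative", "direct"),
        Measure("midterm reflection journal", "formative", "indirect"),
        Measure("senior portfolio", "summative", "direct"),
        Measure("alumni survey", "summative", "indirect"),
    ]

    # Tally the timing/evidence mix across the plan.
    mix = Counter((m.timing, m.evidence) for m in measures)
    for (timing, evidence), count in sorted(mix.items()):
        print(f"{timing}/{evidence}: {count} measure(s)")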
Develop a plan and timeline for carrying out the assessment. During this portion of
the review, faculty need to consider how many courses will be assessed, how many students will
be assessed, when the assessment will occur, and who will conduct the assessment. For
example, faculty at the University of Tennessee, Knoxville, conduct an internal program review
once every seven years using a panel of internal and external reviewers. Three members of a
review team are selected with input from other colleges and departments within the university.
Additionally, one or two members are selected from other universities with similar programs
(Haley & Jackson, 1994).
Implement the assessment plan. Be sure that all departmental members are
instrumental in determining the timeline of the assessment program so that everyone feels a part
of the process. Of course, institutional guidelines, if they exist, will help frame the timeline.
Evaluate the results. Compare results to departmental objectives, noting which
objectives met or failed to meet departmental expectations. This is also the time to evaluate the
assessment plan as a whole and adapt the plan accordingly: Do departmental objectives need to
be revised? Do the measures used to assess the outcomes need to be changed?
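The comparison of results to objectives can be as simple as the following Python sketch, in which each objective carries a target (the performance criterion) and an observed result; objectives falling short are flagged for discussion. The objectives and numbers are invented examples.

    objectives = {
        # objective: (target, observed result)
        "80% of seniors score 3+ on the speech rubric": (0.80, 0.74),
        "75% of majors pass the content exam": (0.75, 0.82),
        "70% report reduced communication apprehension": (0.70, 0.71),
    }

    for objective, (target, observed) in objectives.items():
        verdict = "met" if observed >= target else "NOT met; revisit"
        print(f"{objective}: observed {observed:.0%} vs. target {target:.0%} ({verdict})")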
Determine an action plan based on the results. After reviewing the results, decide
whether the assessment plan and selected measures should continue on the same path or whether
corrective action is needed. As with all steps in the process, be sure to
communicate the results to all members of the departmental faculty so that everyone has the
opportunity to not only review the results but also to suggest additional changes to the
assessment plan, additional resources that may be needed to continue conducting assessments,
and additional curricular or course changes that may need to be made.
Document and communicate results with appropriate stakeholders. Before
assessment reports are finalized, departmental members should remind themselves of the
purposes of the plan and of the audiences who will receive the results. Suskie (2004) also
suggests that faculty consider how much information the audience needs (e.g., Is the audience
already familiar with assessment? Is the audience able to interpret empirical research, or is an
explanation needed?), how the audience prefers to receive assessment information (e.g., Is the
audience more likely to process information in the form of text, numbers, or graphs? Is the
audience more likely to favor a report that is detailed or brief?), how the audience will interpret
the report (e.g., Is the audience likely to feel threatened by the report and criticize it?), and how
the audience is expected to respond once they have read the report (e.g., Is this audience
responsible for reporting to another audience? If so, does your primary audience have enough
information to answer others’ questions?) (pp. 281-282). Based on the answers to these
questions, more than one report may need to be created for various audiences (e.g., college
administrators, accreditation review teams, advisory boards, alumni, and parents).
After finalizing the departmental assessment plan, departments are ready to conduct a
program review. Most institutions will provide faculty with the necessary steps and a timeline
for conducting a program review; however, if this is not the case, one institution’s procedure that
is representative of many program review processes is presented. Minnesota State University,
Mankato uses a four-step program review process: a Self-Study, a Campus Review involving
faculty from within a division/school, an External Review involving experts in the discipline
from other institutions, and an Action Plan identifying steps the department will take based on
the self-study and on recommendations made by the reviewers. Other resources that provide
such templates include Nichols and Nichols’s books The Departmental Guide for Student
Outcomes Assessment and Institutional Effectiveness and A Road Map for Improvement of
Student Learning and Support Services Through Assessment.
Limitations of Conducting Program Reviews
Although program reviews serve as a mechanism to improve the program and to secure
more program resources and recognition, limitations do exist. One issue that may arise concerns
how faculty are compensated for conducting the extra work associated with assessment. In some
cases, faculty members involved in substantial assessment work receive release time; in others,
assessment is counted as a significant contribution in the category of service and faculty are
rewarded through merit pay.
Faculty resistance tends to stem from various concerns. Departmental faculty members
may be unaccustomed to the amount of teamwork and time necessary to design and implement a
successful assessment program (Pietus & Smith, 1991). Faculty and staff may feel that they do
not have time to conduct assessment, may resent the disruption of programming, may fear the
assessment process, may not understand the payoff, and may not understand who will benefit and
how (Hendricks, 1992-93, p. 69). Fortunately, using embedded or formative assessment helps to
alleviate some of these issues as assessment occurs within the context of the classroom.
Another limitation associated with assessment is cost. Hugenberg (1997) suggests that
supplemental funds be available for assessment, rather than having departments “make do” with
current budget structures. Assessment costs are usually offset through departmental budget lines
and some schools have college or university assessment offices that distribute funds to faculty
members conducting assessment. In addition, university and college Grants and Sponsored
Research Offices may also assist departments in securing funds to conduct assessment from
external agencies.
Another limitation of program reviews centers on students’ willingness to participate in
assessment measures. Faculty must take care when creating assessment plans to guard against
student burnout from repeatedly completing measures. Thus, programs that stagger their
measures, for instance by administering a comprehensive test every fifth year, surveying students
every fourth year, compiling portfolios every third year, and assessing oral presentations and
written work every second year, are sound and more likely to result in greater compliance.
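The staggered schedule described above is easy to operationalize; the following Python sketch lists which measures fall due in each year of a ten-year window, so faculty can confirm that no single cohort faces every instrument at once. The intervals follow the text; the measure labels are generic.

    # Interval (in years) between administrations of each measure.
    intervals = {
        "comprehensive test": 5,
        "student survey": 4,
        "portfolio compilation": 3,
        "oral/written assessment": 2,
    }

    def measures_due(year, intervals):
        """Return the measures scheduled to run in a given year of the cycle."""
        return [m for m, every in intervals.items() if year % every == 0]

    for year in range(1, 11):
        due = measures_due(year, intervals)
        print(f"Year {year:2d}: {', '.join(due) if due else 'no scheduled measures'}")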
Program Review Templates
While each institution is likely to have its preferred presentation of program review
materials, the following three tables illustrate various approaches to program review. Table 1
features Appalachian State University’s program review template. An interesting feature of
Appalachian State’s review is the question involving critical mass and the impact on the
university if the program were eliminated. While this question may frighten program faculty and
staff, it allows departments to establish the absolute importance of a program within a
particular institution.
(Insert Table 1 here)
Table 2 features the University of Tennessee, Knoxville’s template for conducting a self-study.
The focus on teaching, including whether mentoring is provided for ineffective teachers, is
especially telling, as it signals the emphasis the institution places on instructional quality.
(Insert Table 2 here)
Illinois State University’s self-study review guidelines are listed in Table 3. A salient
feature of these guidelines is the inclusion of, and focus on, the “Student Learning Outcomes
Assessment Plan.” This university correctly highlights the importance of the departmental
assessment plan by having it included in its entirety as an appendix.
(Insert Table 3 here)
Data Used in Program Reviews
Data collected for program reviews are entirely dependent on the department’s
assessment plan. The following measures typically are included in program reviews: a
departmental comprehensive test administered to incoming students and/or graduating seniors,
internship evaluations completed by sponsoring organizations, senior surveys, alumni surveys,
employer surveys, capstone course projects, portfolio assessment, student focus groups, and
student preparation for and success in graduate school. It is important to note that indirect
measures such as alumni surveys cannot be used to assess student learning but provide valuable
feedback about the program nonetheless. Whatever data are used for the program review, the
needs of the audiences must be considered and information should be presented in a manner that
is most appropriate for them.
Conclusion
As a form of assessment, program reviews allow faculty the opportunity to continually
improve their curricula, teaching, and professional development. Program reviews are also
excellent modes for departments to receive institutional rewards and recognition. The keys to a
successful program review are having a strong departmental assessment plan and selecting
highly organized individuals to administer the program review. As communication scholars, we
are especially well-prepared to conduct successful program reviews, given our knowledge of
audience analysis. Considering the constraints placed on faculty members’ time, it is essential to
understand and communicate that program reviews are one valid way for us to create a shared
academic culture committed to improving the quality of higher education.
References
Angelo, T. A. (1995). Reassessing (and redefining) assessment. AAHE Bulletin, 48, 7-9.
Astin, A. W. (n.d.). AAHE nine principles of good practice for assessing student learning.
Retrieved April 13, 2006 from http://129.219.216.161/assess/9principles.html
Cole, T. W. (1996, March). Evaluating effectiveness through program assessment. Paper
presented at the annual meeting of the Southern States Communication Association,
Memphis.
Haley, E., & Jackson, D. (1994). Responding to the crisis of accountability: A review of
program assessment methods. Paper presented at the annual meeting of the Association
for Education in Journalism and Mass Communication, Atlanta.
Hendricks, B. (1992-1993). Moving ahead: Program review and evaluation as tools for growth.
Proceedings of the 1992 and 1993 Conferences on Outdoor Recreation, 65-72.
Higgerson, M. L. (1993). Important components of an effective assessment program. Journal
of the Association for Communication Administration, 2, 1-9.
Hugenberg, L. W. (1997). Assessment of student learning and program review: Data for
continuous improvement. Paper presented at the annual meeting of the National
Communication Association, Chicago.
Lopez, C. (1999). A decade of assessing student learning: What have we learned; what’s next?
Chicago: North Central Association of Colleges and Schools.
National Communication Association. (n.d.). Departmental guidelines. Retrieved March 17,
2006 from http://www.natcom.org/Instruction/assessment/Assessment/guidelines.htm
National Communication Association. (n.d.). Hallmarks of successful oral communication
assessment programs. Retrieved March 17, 2006 from
http://www.natcom.org/Instruction/assessment/Assessment/hallmarks.htm
Nichols, J. O., & Nichols K. W. (2005). A road map for improvement of student learning and
support services through assessment. New York: Agathon Press.
Nichols, J. O., & Nichols, K. W. (1995). The departmental guide for student outcomes
assessment and institutional effectiveness. New York: Agathon Press.
Pietus, A. M., & Smith, W. D. (1991). Program assessment for a teacher education program.
Education, 112, 288-295.
Sergiovanni, T. J. (1987). The principalship: A reflective practice perspective. Toronto: Allyn
and Bacon, Inc.
Suskie, L. (2004). Assessing student learning: A common sense guide. Bolton, MA: Anker
Publishing Company, Inc.
Author Bio
Dacia Charlesworth is Director of the University Honors Programs and an Associate Professor in
the Department of Communication at Robert Morris University. Charlesworth also served as
founding Director of the Oral Communication Across the Curriculum Program at Southeast
Missouri State University; it was through this experience that she came to recognize the value
and importance of assessment.
Tables
Table 1: Appalachian State University’s Program Review Template
Centrality to Institutional Mission: Considers the department’s importance to
the institution as well as the possibilities of program consolidation or elimination.
Developing this section can be challenging since all responses to this seem self-serving; however, a claim of centrality based upon student load, class sizes, or
courses required across the curriculum might be helpful (but also readily
verifiable).
Department Overview: Describes department’s degree offerings, courses
required for other degree programs across the institution; co-curricular, honor
society, and/or professional involvement; and special faculty or student
accomplishments. This section is important since it sets the tone for how the
report should be read (it gives the necessary information about the program).
Faculty Workload/Reassigned Time and Scholarly Activity: Seeks to discover
how faculty are being utilized and to provide some basis for recommendations
regarding the need for additional faculty resources. Teaching load shows a
precise headcount in each course offered by the department each semester; faculty
full-time equivalent (FTE) and student credit hour statistics might also be included here.
Library Holdings: Highlights the degree to which the library is able to support
the program. Investigates library current and retrospective resources.
Facilities/Equipment: Evaluates available classroom space and equipment as
well as its appropriateness for the program. Lists necessary facilities changes/
improvements that need to be made and equipment purchases or upgrades.
Program Demand: Focuses upon a program’s viability largely from the
perspective of its student cohort. Offers demographic information such as number
of majors, number of graduates, over- or under-enrolled courses, information about
job prospects for graduates, and how courses are essential for other programs.
This section helps explain why service or scholarly productivity might be low;
that is, if a department faces such high demand, its faculty cannot manage all three
areas of teaching, scholarship, and service. Overcrowded or over-demanded courses
and high scholarly reassignment time might also indicate a need for change, whereas low productivity
and low demand may put a program at risk.
Costs: Examines the number of credit hours graduates have (is the number of credit
hours substantially higher than the requirement?). Also focuses on the program’s
number of under-enrolled courses and the department’s cost-saving measures, if
any.
Duplication: Considers a program’s utilization of resources but from an
efficiency perspective: How does a program complement or duplicate other
departments in the institution? Course duplication is the largest offender here.
Critical Mass: Examines the impact on the primary department, secondary
departments, and the institution if the program were eliminated.
Table 2: University of Tennessee, Knoxville Program Review Template
Goals: Are the department’s goals clearly stated, followed, measured, and in
compliance with the goals of the university?
Curriculum: Is the curriculum well planned? Does it complement general
education courses? Is it balanced? Does it expose students to contested issues as
well as develop critical thinking and research skills?
Connections: Does faculty research reflect a broad range of scholarly inquiry and
encourage interdisciplinary activity with the larger university community? Do the
faculty participate in university service and contribute to community service? Do
students have professional opportunities to apply knowledge beyond the
classroom?
Teaching: Is teaching quality rigorously evaluated? Is mentoring provided to new
faculty? Is good teaching valued and rewarded? Is an ineffective teacher given
assistance? Is faculty development assisted by the department?
Connecting with Students: Is effective curricular and career advising provided?
Do students have the opportunity for interaction with one another, with faculty,
with professionals?
Inclusiveness: Are faculty diverse with respect to gender, ethnicity and academic
background? Does the department provide opportunities for students to be
exposed to diversity across the discipline and seek to include the perspectives and
experiences of underrepresented groups through curricular and extra-curricular
activities?
Support: Does the department regularly evaluate its equipment, facilities and
library holdings and encourage necessary improvements within the context of
overall university resources?
Table 3: Illinois State University’s Program Review Self-Study Guidelines
Description of Self-Study Process: Provide a description of the process used to
conduct the self-study including faculty and student involvement and timeframe
for the self-analysis and review.
Description and Analyses of Program: Offer an overview of the academic unit,
an overview of the degree program being reviewed, the curriculum of the degree
program, the faculty of the degree program or unit, and the goals and quality measures for
the program.
Response to Previous Program Review Recommendations: Provide a narrative
summary addressing the previous program review recommendations.
Program Goals and Planning Processes: Provide a summary of initiatives and
plans for the program for the next three years, how these goals integrate with the
university’s strategic plan, and provide the unit plan as Appendix 2.
Executive Summary: Include an introduction summarizing the distinctive
features of the program, a summary of each component reviewed in the program
review document, a description and assessment of any major changes in the
program since the last program review, a summary of the department’s Student
Learning Outcomes Assessment Plan, a description of major findings and
recommendations as a result of the program review, and a description of actions
taken as a result of the previous program review.
Appendices: 1) Student Learning Outcomes Assessment Plan, 2) Strategic Plan
for unit and/or program, 3) List of national programs or national standards used
for goal setting and quality comparisons, and 4) Current faculty vitae.