VANCOUVER ISLAND UNIVERSITY
FRAMEWORK FOR THE SUMMATIVE ASSESSMENT OF PROGRAMS

INTRODUCTION

Vancouver Island University's dedication to instilling and maintaining the highest standards in instruction is made explicit in Policy 31.15 (Assessment and Review of Instructional Programs and Departments), which stipulates that the university "is committed to offering programs of high quality and standards, to the continuous improvement of its programs, and to transparency and accountability in these activities." This framework supports quality, transparency, and accountability in the routine summative assessment of programs.

The framework will be applied in the event of instructional program adjustments growing out of academic planning processes, changing educational environments, opportunities for growth, and financial exigency. It will guide the assessment and ranking of programs on the basis of statistical and qualitative information across a range of measures, along with information about institutional, departmental, and programmatic contexts. The objective of this framework is to ensure that decisions regarding the introduction of new programs, as well as the change, expansion, reduction, or discontinuation of existing programs, are based on rigorous, fair, and consistent assessment practices. The framework does not apply to changes initiated by faculties and their members in the routine academic management of their programs.

As used in this framework, a program is an institutionally approved matrix of courses leading to a provincially recognized credential issued by Vancouver Island University. The term applies to approved certificates, diplomas, bachelor's degrees (majors and minors), and master's degrees.
For the purposes of the summative assessment of programs, the following unweighted criteria apply:

- Program Context
- Relevance
- Quality
- Financial Performance
- Access
- Strategic Priorities

The relevant assessment data for effectively addressing these criteria represent a mixture of information currently collected by the university, information the university would like to collect but does not presently collect (the framework draws attention to gaps in this respect), and narrative context intended to inform the fullest possible qualitative judgments. It is understood that the university will improve its collection of meaningful and reliable data to support the assessment of programs against each of the above criteria. Both quantitative and qualitative information are recognized to be of value in the process.

Each criterion is a complex concept, ultimately requiring assessment through multiple measures; this is why a number of more specific questions are attached to each criterion. When evaluating a program against a particular criterion, the appropriate administrative bodies (e.g., dean; Provost Council; President's Council) will use the available relevant assessment data and should address as many of the specific questions as possible, including any additional evidence they consider relevant. Finally, qualitative judgments by the administrative bodies, based on the interpretation of these assessment data, should be reported and explained.

(SPA Framework revised and approved by Senate Feb. 12, 2014.)

PROGRAM CONTEXT

The framework provides for the identification of important and distinctive aspects of a program that may not otherwise be considered in the assessment process. This information may come from departmental Signposts documentation, program instructors, or the relevant dean. The criterion is meant to bring additional nuance to judgments formed under other criteria.
In addressing this criterion, the dean and/or department should include the following information:

- Relevant aspects of the program's character (e.g., the mix of part-time and full-time faculty, or of temporary and continuing faculty; the gender make-up or average age of instructors; campus delivery; course delivery [face-to-face, online, team-taught]; facilities and infrastructure; support staff);
- The program's mission, vision, or philosophy;
- The function and purposes of the program, or its raison d'être;
- The stated program objectives (if different from the function and purposes);
- Some description of the program's value to the community or region, or to the university as a whole;
- An analysis of the program's resources: if the program is not adequately resourced, why not, and what would be required to address this?
- Some discussion of the implications for students, the community or region, or the university of leaving the program and its funding as they stand;
- Some discussion of the implications for students, the community or region, or the university if the program were not to exist.

It may not be possible or reasonable to obtain all of the above information in any given assessment process. The overall purpose of this criterion, however, is to give an additional measure of the impact of possible change, growth, reduction, or discontinuation. It may influence judgments made under other criteria by virtue of the additional context it provides. For proposed new programs, it is an opportunity to explore and understand distinctive aspects of the program that may complement existing programs or add to the university culture, or to assert in more general terms the program's "essentiality."

Suggested Performance Measures: None.

RELEVANCE

Relevance is the student demand and community need for a program.
In presenting evidence about program relevance, the dean and/or program should explicitly address each of the following questions:

- What is the level of student demand (past, present, and future) for this program?
- Has the program delivered the expected (or acceptable) number of student FTEs through consistent enrolment and retention? What are the future prospects for student FTE delivery?
- What are the ongoing employment and education opportunities for graduates of this program?
- Does the program meet identified community or regional needs?
- How does this program contribute to the general social and cultural development of the community?

Evidence on these indicators may not be in harmony. For example, there may be considerable student demand for a program while labour market studies suggest limited job prospects for graduates. The ideal is a program with large demand, many ongoing employment opportunities, the support of other programs, identified community or regional need, and desirable cultural and social spin-offs.

Suggested Performance Measures:

- Net and Percentage Change in FTE Delivery Over the Last Three Fiscal Years [data available]
- Net and Percentage Change in Student Headcount Over the Last Three Fiscal Years [data available]
- Community and Regional Needs Identification Documentation [to be developed]
- Urban/Rural Geography Cultural Needs Measurements [to be developed]

QUALITY

Quality is the capacity of a program to "add value" to students' lives in exchange for their time and effort. Overall quality is a function of quality in curriculum, faculty, support services, equipment, and fellow students, as well as the perception of the program among employers and professional peers.
The appropriate administrative bodies should address questions like the following in making their case for the overall level of quality of a program:

- How is the curriculum designed to meet or exceed the standards of similar programs, including matters like current technology and experiential learning?
- What is the professional standing of the faculty among their peers? (Or, for completely new programs: what is the ability of the program to attract the quality of faculty it wants?)
- What is the quality of the professional qualifications of faculty? (This question may measure the proportion of terminal degrees or the extent of related professional experience.)
- What quality of student is the program likely to attract? (This question recognizes that much of what students learn, they learn from other students.)
- How will the program provide the support that students need to succeed?
- Are there facilities, learning resources, and technology of sufficient quality for the program?
- (For existing programs:) How successful is the current program in terms of outcome measures like student satisfaction, employment, and further education?

Judgments about quality are contextual in that they depend on the kind of program being evaluated. For example, an advanced trades certificate in a trades program and a Ph.D. in a university degree program are likely to represent equivalent levels of qualification.
Suggested Performance Measures:

- Net and Percentage Change in the Number of Graduates Over the Last Three Fiscal Years [data available]
- Student Satisfaction With Education (Surveys) [data available]
- Student Satisfaction With Quality of Instruction (Surveys) [data available]
- Student Satisfaction With Skills Development (Surveys) [data available]
- Student Satisfaction With Preparation for Further Studies (Surveys) [data available]
- Student Satisfaction With Preparation for Employment (Surveys) [data available]
- Program Quality Performance Measures [to be developed]
- Facilities Performance Measures [to be developed]
- Faculty Qualification Performance Measures [to be developed]

FINANCIAL PERFORMANCE

The principal consideration is whether the program's costs and revenues are reasonable and sustainable (or achievable). Related factors may include historical performance, program-specific financial characteristics, and a range of institutional mechanical factors (e.g., blockages that may have impeded student entry into or progress through a program). There may be historical funding issues related to programs for which specific funded FTEs were at one time identified, and there are accountability issues related to programs currently linked to targeted government funding. For proposed new programs, projected costs and revenues should be accurate and appropriate to the anticipated outcomes. Programs should be evaluated on their direct and indirect operating costs and, where appropriate, capital costs.

The main questions to be addressed are:

- What is the present financial performance record (or, for proposed programs, what is the projected financial performance and what evidence supports the projection)?
- What is the historical financial performance record?
- What is the anticipated financial performance, and what evidence is there to support the projection?
- Are the costs justified by the outcomes of the program?
- Are the costs justified by some other standard?

There are two relative aspects of program operating costs. One is the cost of the program compared to the costs of similar programs of comparable quality here or elsewhere, the preference being for programs that are less expensive in per-student terms than similar programs elsewhere. The second is the difference between the direct operating costs and the revenue and institutional (or government) funds supporting the program. In some cases it may be possible to identify a level of government funding attached to a program (typically stated as a per-student FTE value); such information may or may not be relevant to the assessment of the program's performance. In short, programs that operate relatively economically are preferred because they leave more money for general services and other programs.

Both aspects of cost are important. The ideal program has a relatively low cost in relation to its delivery of student FTEs and operates more economically than similar programs elsewhere. A program that is more expensive than similar programs and has a relatively high cost in relation to its delivery of student FTEs would not earn a favourable ranking under this cost criterion.

Actual net costs for each program are difficult to determine. One reason is that the funding for most student and academic support services is not tied to specific programs. Another is that funding for direct instruction, along with support of that instruction, is assigned to departments rather than to programs. Academic departments may have primary responsibility for one or more programs or none at all; they may provide a few or many service courses in support of programs largely operated out of other departments. To some extent, departmental costs can be used as a proxy for program costs, but care must be taken in interpreting these numbers.
While the above questions are central to any assessment of financial performance, less central but relevant evidence on costs may also come from answers to questions about peripheral matters, such as:

- What impact, if any, does the program have on the cost-efficiency of other programs?
- Is the ongoing demand for general support services higher or lower than the typical pattern for VIU programs?

Suggested Performance Measures:

- Cost Per FTE Domestic Student [data available]
- Delivery Against Capacity [data available]

ACCESS

A program provides "access" to the extent that it serves those who otherwise have the fewest opportunities for post-secondary education. A program emphasizing access is designed to accommodate those who are less mobile and who may find it financially or socially impossible to study full time. Measures of access, in this sense, are:

- How much of the program will be rotated among campuses or scheduled in ways that make it easy for people to attend?
- Is the program offered in a way that allows part-time study?
- Are there alternate methods of fulfilling certain program requirements (e.g., PLA or transfer credit)?
- Are agreements in place to allow students to ladder out of or into related programs?
- How will the program recruit and support students from groups that historically have had lower participation rates in post-secondary education?

In practice, many or most programs will answer the above questions in the affirmative. Indeed, for the most part Access reflects institutional processes for which few or no programs are in a position to claim distinction. Claims made in this respect may even be offset by those made in respect of Quality (for example, when open access to programs and courses puts a strain on the quality of instruction).
Thus the criterion is the most difficult of the six to apply in a meaningful way, and this may be reflected in the poor availability of statistical information. This does not mean that the criterion has no value. Indeed, the criterion may serve to attach importance to programs focused on access, such as developmental education programs. Other aspects of this criterion may be better reflected under other criteria (such as Quality [e.g., student success] or Strategic Priorities [e.g., the Ministry's requirement for measuring Aboriginal student participation]).

Suggested Performance Measures:

- Net and Percentage Change in FTE Delivery of Aboriginal Students Over the Last Three Fiscal Years [data available]
- Net and Percentage Change in the Number of Aboriginal Students Over the Last Three Fiscal Years [data available]
- Performance Measures for Students With Disabilities or From Otherwise Disadvantaged Backgrounds [some data available; to be further developed]
- Number of Students Receiving Financial Assistance Over the Last Three Fiscal Years [data available]
- Performance Measures identifying any of the following as relevant:
  o Online
  o Part-time
  o Regional Delivery
  o PLA
  o Transfer/ladder/articulation agreements

STRATEGIC PRIORITIES

This criterion provides the opportunity to judge a program in accordance with institutional goals and objectives. From time to time these "strategic priorities" may be given more specific definition by Senate and the Board. Questions relevant for judgments about this criterion may be taken from institutional and Ministry planning documents. Some questions that might be considered are:

- How does this program affect the balance of programming in the institution?
- How does the program contribute to identified departmental, faculty, institutional, and Ministry goals and priorities?
- Are there one-time-only opportunities that could be lost if the program is not maintained (or not implemented) (e.g., special funding, partnerships, facilities)?
- How does this program contribute to the success or viability of other VIU programs?
- How else does the program advance the goals and objectives of the institution in ways not assessed by the previous criteria?

A number of other factors touch on this criterion. Tacit institutional priorities may not be reflected (or may be only partially reflected) in the university's planning documents. For example, the matter of the balance of programming is more complex than the above question implies. One aspect of the subject is the balance of kinds of programs (degree, diploma, certificate, graduate degree) in contexts in which the university may have unstated reasons for preferring one kind of program over another (degree programs, for example, commit students to four years of study, and their relative benefits are thus greater from a student retention point of view).

Suggested Performance Measures:

- Net and Percentage Change in FTE Delivery of International Students Over the Last Three Fiscal Years [data available]
- Net and Percentage Change in the Number of International Students Over the Last Three Fiscal Years [data available]
- Other Measures Growing Out of the Integrated Planning Process [to be developed]