A paper on the SES development process, describing its theoretical and conceptual underpinnings.

Charles Sturt University
OES Review Working Party Report

Working Party Members
• Som Naidu (DLTS/QEES, Chair)
• Derek Sequeira (DLTS, Evaluation Services)
• Mike Keppell (FLI Representative)
• Edwina Adams (EFPI Representative)
• Alan Bain (Sub-Dean L&T Representative, Faculty of Education)
• Andrea Crampton (Sub-Dean L&T, Faculty of Science)
• Kay Plummer (Sub-Dean L&T, Faculty of Business)
• Jenny Kent (Former Sub-Dean L&T, Faculty of Business)
• Joy Wallace (Associate Dean, Faculty of Arts)
• Dianne McGrath (Former Sub-Dean L&T, Faculty of Business)
• Jenni Munday (Representative of Sub-Dean L&T, Faculty of Education)
Table of Contents
Executive summary
Evaluation of tertiary teaching
Reconceptualizing the evaluation of tertiary teaching
Scope and terms of reference
Development of the revised Survey
SES implementation
Working Party Members
Project Team
Key Stakeholders
Steering Committee
Implementation timeline
References
Table 1. Construct, dimensions and principles
Subject Experience Survey
Table 2. General Core
Table 3. Workplace Learning Core
Executive summary
In July 2010, Academic Senate initiated a review of the Online Evaluation of Subjects
(OES) and its administration. The DVC (Academic) set up a Working Party to prepare a
report with recommendations for Academic Senate and the Curriculum, Learning and
Teaching Committee to consider.
The DVC (Academic) and the Deans asked that this review also consider introducing core
items in the revised survey that are linked to the expectations of academic staff
teaching at CSU, which are as follows: 1) currency; 2) responsiveness to students; 3)
compliance with timelines; 4) assessment; and 5) subject-course linkages.
This report of the Working Party proposes the adoption of a new Subject Experience
Survey (SES) to replace the core items of the Online Evaluation of Subjects (see Tables
2/3 attached). The proposed SES comprises two sets of core items – one for Workplace
Learning Subjects and another for all other subjects. A Workplace Learning subject
comprises 75% or greater placement activity (Academic Senate 2010).
The questions in the proposed Subject Experience Survey represent a fundamental shift
in focus from a general measure of student satisfaction with teaching in a subject, to an
evaluation of the student learning experience consistent with the expectations of
academic staff in relation to teaching at CSU.
The two sets of core items and attendant recommendations around the implementation
of the surveys have also sought to address the problems that the Sub-Deans (L&T) and
their Schools have identified with gathering feedback from students on their subject
learning experience. These comprise:
1. Low response rates to subject evaluation surveys;
2. Limited access to reports by key stakeholders in the Schools and Faculties;
3. Variable practices in the use of subject evaluations in performance management;
4. Poor design of some of the items in the OES, including its inadequacy for seeking
feedback on important issues such as workplace-based learning.
Recommendations
The Working Party seeks your endorsement of these two surveys for a pilot test and
subsequent adoption. For effective implementation of these surveys the Working Party
also seeks your support of the following attendant activities:
1. The term “Learning Outcomes” replaces “Objectives” in the Subject Outline and in
CASIMS (following discussion within the Faculty Learning and Teaching Committees).
2. Documentation of the development process of the Subject Experience Survey,
including its validation and pilot testing.
3. Development of user guidelines for the surveys, including methods for gathering
additional and complementary data on the student learning experience, and
strategies for improving response rates.
4. Design and development of School level professional development activities around
the purpose and use of the Subject Experience Survey.
5. Development of a delivery system (and implementation strategy) with the capability
to generate reports from the surveys that include a wider range of data including
frequencies, means, standard deviations, correlations and longitudinal analyses.
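
By way of illustration, the sketch below shows the kind of report generation envisaged in recommendation 5. It is a minimal sketch only, assuming responses are exported with one row per respondent and one column per core item coded 1 to 5; the "item_" column prefix and the "session" field are hypothetical, not part of any existing CSU system.

import pandas as pd

# A minimal sketch (not the actual delivery system) of the reporting
# envisaged in recommendation 5. Assumes survey responses are exported
# with one row per respondent and one column per core item, coded 1-5;
# the "item_" column prefix and "session" field are hypothetical.

def summarise_items(responses: pd.DataFrame) -> dict:
    """Frequencies, means, standard deviations and inter-item correlations."""
    items = responses[[c for c in responses.columns if c.startswith("item_")]]
    return {
        "frequencies": {c: items[c].value_counts().sort_index() for c in items},
        "means": items.mean(),
        "standard_deviations": items.std(),
        "correlations": items.corr(),  # pairwise Pearson correlations
    }

def longitudinal_means(responses: pd.DataFrame) -> pd.DataFrame:
    """Mean item scores per teaching session, for trend analysis over time."""
    item_cols = [c for c in responses.columns if c.startswith("item_")]
    return responses.groupby("session")[item_cols].mean()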
Evaluation of tertiary teaching
Our perceptions of tertiary teaching are the result of a number of factors. These comprise
our teaching actions, our intentions and beliefs about learning and teaching, the
nature of the subject matter, and the discipline culture (see Pratt, 1992). Teaching
actions are commonly known instructional activities such as lecturing, facilitating
discussions, demonstrating and tutoring. In distance education contexts, these activities
comprise the design and development of learning resources, the grading of assignments
and the provision of feedback to students.
Teaching actions, on their own, are a superficial representation of one’s conception of
teaching, yet it is these teaching actions, rather than one's intentions and beliefs about
teaching, that attract the most attention and are often the target of evaluations of the
effectiveness of teaching (see Pratt, 1997). The result is a very superficial evaluation of
teaching, with a predominant focus on rudimentary teaching tasks and duties such
as the effectiveness of lecturing and the grading of student work.
In a consideration of evaluations of teaching in higher education, Dan Pratt and his
collaborators have argued that if teaching evaluations are to be rigorous and credible,
they must focus their attention on the essential and substantive aspects of teaching,
rather than basic teaching tasks and actions (see Pratt, 1997). The work of this group of
researchers suggests that unless we aim to understand what a tertiary teacher is trying
to accomplish (i.e., their teaching intentions), and why they think it is justified (i.e., their
beliefs about teaching), we will be likely to misunderstand their teaching actions.
In our own continuous quality improvement initiatives at CSU (such as in our FULT
program), we spend a great deal of time talking up the need for our tertiary teachers to
think about their teaching conceptions or philosophies. There is not much point in
pursuing this if we are not going to direct our evaluations of teaching at teachers
and their intentions and beliefs about teaching.
Moreover, until such time as the evaluation of teaching is focused on the substantive
aspects of teaching, teaching will not be considered a legitimate scholarly activity. To be
rigorous in the evaluation of teaching requires a fundamental change in our approach to
one that shifts the focus from surface features to deeper structures of teaching.
Reconceptualizing the evaluation of tertiary teaching
Most tertiary teachers, unlike elementary or secondary school teachers, do not consider
themselves teachers, and many will not have had any training in teaching. They will
have been hired for their professional expertise and knowledge of their subject
matter rather than for their teaching. Many would rather be seen as professionals or
researchers than as teachers.
If teaching evaluations are to be credible, they must also acknowledge the professional
knowledge of teachers and its discipline-base (Pratt, 1997). Without a focus on these
substantive aspects of teaching, evaluations are likely to miss the very aspect of teaching
upon which professionals in particular have built their identity. Moreover, when the
target of teaching evaluations moves away from its discipline-base, it decreases the
likelihood of teaching being recognized as a scholarly activity within the reward structure
of the University System (see Pratt, 1997).
A more rigorous and useful approach to the evaluation of teaching must include a focus
on more than one’s teaching actions. In order to achieve this focus, evaluations of
teaching need to focus on its design, efficiency, effectiveness and its capacity to engage.
1. Design comprises a consideration of the underlying intentions and beliefs about
teaching (Pratt, 1997). It is possible that this kind of close scrutiny of teaching may
challenge assumptions about academic freedom as most academics tend to close the
door on their teaching while being quite ready to submit their research to public
scrutiny. Teaching is in fact a very public act and especially so in the case of distance
and online learning with the development of courseware and study materials that are
widely available. If teaching is to count in academe then it must be put to the same
level of rigorous scrutiny to which other forms of scholarship are subjected.
2. Efficiency is about the execution of the designed learning and teaching experiences
and this can be adequately assessed through surveys and direct observation by peers
with the use of private or public criteria.
3. Effectiveness is concerned with the impacts and results of teaching on students and
their learning experiences. These can be ascertained through surveys and
examination of the products of learning such as assignments, project reports and
portfolios etc. If student evaluations are to be credible, the data must reflect the
impacts of teaching on learning outcomes (Pratt, 1997).
4. Capacity to engage is determined by the teacher’s ability to articulate clear learning
outcomes, design challenging learning experiences for their achievement, and identify
and collect suitable learning resources which altogether have the potential to motivate
learners and promote engagement with the learning experience.
Scope and terms of reference
1. Why seek feedback on teaching?
There are several reasons for obtaining feedback on teaching. These include its use in
assessing and improving teaching quality, providing evidence on teaching to support
probation, promotions etc., and in some cases, helping students and other stakeholders
in selecting courses and subjects for study.
Activity 1: Review the reasons for seeking feedback on teaching and ensure that
appropriate and adequate processes are in place to make these clear to all staff.
Implications
a. The reasons for seeking feedback on teaching are not sufficiently clear to all.
b. All stakeholders need to get the same message about why we seek this feedback.
c. These reasons should be consistent with University agenda and communicated to
staff and students clearly and through easily accessible processes.
2. What should be the target of this feedback?
The target of feedback on teaching should be driven by the purpose of the feedback
(i.e., why are we seeking the feedback?). If the purpose of the feedback is to improve
the quality of teaching and learning, then the teaching and learning process and the
underpinnings of that process should be the focus of the feedback. If the course
curriculum is the focus then that should be the subject of the review. If the courseware
materials and resources are the focus, then that should be the subject of the evaluation.
Activity 2: Current strategies and instruments for gathering feedback on teaching should
be re-examined to ensure that their coverage is comprehensive and aligned with the
purpose of feedback.
Implications
a. Are the 11 core items adequately focused on this target?
b. Subjects for which the current 11 core items are inappropriate need to be
identified and other core items drawn up to meet their needs.
c. The categorized item bank of optional items needs to be examined to ensure that
it is comprehensive.
d. Should there be a separate set of core items for Distance Education and Campus-Based teaching?
3. What should be the measure of this feedback?
Most often, feedback on teaching, especially that from students, is focused on their
satisfaction with teaching and their perceptions of its efficiency and effectiveness, even
though there are obvious problems with the use of satisfaction alone as a measure of
teaching quality (see Richardson, 2005).
The focus of feedback on teaching should be clearly aligned with the purpose of the
feedback. Moreover, it must cover not only teaching actions and processes but also
the more substantive aspects of teaching, such as teachers' intentions, their beliefs
about student learning, and learning achievement.
Activity 3: Current strategies and instruments for gathering feedback on teaching should
be re-examined to ensure that their coverage is comprehensive and focused on teaching
actions as well as the more substantive aspects of teaching (i.e., its design (teachers'
intentions and their beliefs about learning and teaching), and its efficiency and
effectiveness).
Implications
a. The OES should comprise items that seek student feedback on the more
substantive aspects of teaching, and if found inadequate, other strategies ought
to be explored that can gather this kind of feedback.
b. What kind of statistic on the OES should be reported?
4. How to collect feedback on teaching?
Feedback on teaching can be obtained from a variety of sources, and the choice of
source should be tied closely to the purpose of the feedback. Sources include both
students and one's peers. Feedback can be
collected with a wide range of instruments, and there are many good reasons for using a
combination of summative and formative strategies in order to obtain both quantitative
and qualitative data.
a) Summative strategies, such as surveys using Likert scales, are useful for
gathering feedback from large groups of respondents easily, quickly and
systematically. Quantitative information gathered through such processes can be
easily analyzed, reported and examined for trends over time, subjects and courses.
Such surveys can also provide opportunities for seeking open-ended responses,
although these are harder to analyze and report.
b) Formative strategies may include the use of such tools as the Harvard one-minute
surveys and focus groups (with on-campus students), critical reflections (in student
portfolios), direct observations and peer reviews. While data from these instruments
are harder to collate and report, they enable the gathering of information that may
reveal deeper insights into the impacts of teaching.
Activity 4: Current strategies for obtaining feedback should be re-examined to ensure
that a range of strategies and instruments is being used to obtain both quantitative and
qualitative data, formatively and summatively.
Implications
a. Is the OES adequate for obtaining feedback on teaching?
b. What other strategies can be deployed widely across the University?
c. Should all subjects be evaluated using the OES every Semester?
5. When should we collect feedback on teaching?
Feedback on teaching from students should be collected at various points and not only at
the end of the teaching session. These data gathering points should include the
beginning, during, at the end and sometime after the end of the learning and teaching
session. The idea of obtaining this kind of data even before any teaching may have
begun is to ascertain and benchmark the expectations of students in relation to learning
and teaching.
Data gathered during the session will help to ascertain how both learners and
teachers are meeting those expectations, so that something can be done about any
shortfalls or mismatches. Data gathered at the end of the session would of course help in
the next iteration of the subject. Finally, data gathered sometime after the session
(assuming that those students can be located) will be useful for insights on the long
term impacts (if any) of their learning and teaching experiences, as it may take some
time for such experiences to reveal any impacts.
Activity 5: Current processes and strategies for obtaining feedback on teaching should
be re-examined to ensure that appropriate and adequate processes are in place for
gathering feedback strategically (i.e., before, during, at the end of, and sometime after
the end of the teaching session).
Implications
a. Is the OES adequate for gathering this kind of data from students?
b. If feedback from students is to be gathered at various points in the session, how
will this work? What would the implications of this be for teachers and students?
6. Why is it crucial to have high response rates?
Voluntary participation in the provision of feedback from students suffers from a number
of problems. Foremost among these is that response rates are often low. Furthermore,
those who choose to provide feedback are likely to be quite different from those who
elect not to do so in their disposition to learning and teaching as well as the provision of
feedback. It is arguable that students who abstain from giving feedback may in fact be
suggesting that they do not care very much about their learning and teaching experience
(see also Watkins & Hattie, 1985; Neilsen, Moos, & Lee, 1978).
Therefore it would be reasonable to assume that those who respond to surveys differ,
for example in their attitudes and behavior, from those who choose not to participate
in such activities (see Goyder, 1987). In the absence of random sampling, these kinds
of problems can be mitigated by increasing response rates to at least 70 to 80%
(AVCC, 2001), even though a 30 to 50% response rate from voluntary participation in
postal or online surveys is regarded as the norm.
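
As a worked illustration of these benchmarks, the following sketch computes a response rate and flags it against the figures cited above; the function names are ours, for illustration only.

def response_rate(received: int, sent: int) -> float:
    """Proportion of surveys returned."""
    return received / sent

def classify_rate(rate: float) -> str:
    """Flag a rate against the benchmarks cited above (AVCC, 2001)."""
    if rate >= 0.70:
        return "meets the 70-80% benchmark"
    elif rate >= 0.30:
        return "at or above the 30-50% voluntary-survey norm"
    else:
        return "below the voluntary-survey norm"

# Example with the pilot figures reported later in this document:
print(f"{response_rate(1180, 8034):.0%}")  # General Core: ~15%
print(f"{response_rate(113, 707):.0%}")    # WPL Core: ~16%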
Response rates can be improved by gathering feedback immediately after the end of the
teaching session and before the grading of the final assessment activity. However, this
may amount to a degree of coercion and can therefore be somewhat unethical. This can
be mitigated by announcing that participation is voluntary, anonymous and confidential,
and that participants can withdraw from the process at any time without penalty.
A variety of data gathering tools should also be used to obtain such data, not only to
meet the needs of a range of learning and teaching contexts (online and face-to-face),
but also to accommodate any disabilities that students might have.
Activity 6: Current processes and strategies for obtaining student feedback on teaching
should be re-examined to ensure that appropriate and adequate processes are in place
for a high response rate approaching 70-80%, at least in on-campus settings.
Implications
a. What can be done to increase student participation in the process?
b. Are alternative methods of administration (e.g., paper-based surveys) worth
considering?
c. What are the implications of using incentives for providing feedback?
7. How to make use of feedback from students?
Feedback from students is very important for assuring a high quality educational
experience. However, how seriously students, teachers and the organization regard this
kind of feedback depends on a number of factors (see Richardson, 2005).
a) Need for clarity in the interpretation of feedback. Stakeholders need to recognize
that student feedback on teaching merely reflects students' perceptions, and that no
causal inferences about its impacts on teaching can be drawn from it. Therefore care
should be exercised in how these data are interpreted and used for resource
allocation, for improving teaching, and for making judgments about teaching
effectiveness (see Abrami, 2001; Richardson, 2005).
b) Incentives and requirements to act on feedback. Without clear guidelines on the
implications of good or poor feedback and on what use should be made of student
feedback, the imperatives for teachers to collect feedback from students are lacking
(Kember, Leung, & Kwan, 2002). Students, too, are not inclined to pay attention to
providing feedback if they cannot see that their feedback is leading to explicit and
concrete changes to teaching and their courses (Spencer & Schmelkin, 2002).
c) Ownership of student feedback. Teachers and students are likely to pay attention to
teaching evaluations if these are seen as an integral part of the teaching and
learning process, rather than an optional activity that is detached from the
immediate context of teaching (Richardson, 2005). The systems and processes for
gathering student feedback can be administered centrally, but the responsibility for
obtaining and utilizing all feedback on teaching should rest squarely with the
academic units where the teaching and learning take place, and they must own
this feedback as well.
Activity 7: Current policies and practices on the use of student feedback on teaching
should be re-examined to ensure that there is clarity and consensus around:
a) The interpretation and use of student feedback on teaching;
b) The imperatives for teachers to obtain student feedback on teaching;
c) Students' understanding of the process and the role of their feedback; and
d) The ownership of feedback by the Schools and its teaching staff.
Implications
a. Who owns the feedback?
b. Why is it important for teachers to obtain feedback on their teaching?
c. Do students understand why they are asked to provide feedback on teaching?
d. Do Schools (especially their teaching staff) feel they own feedback on their
teaching?
e. What are the limitations of survey data? Does everybody know how feedback on
teaching is used? How can survey data be supplemented with other sources of
feedback on teaching?
f. Reports from Heads of Schools on subject evaluations should be more rigorously
enforced to focus on problems and their prevalence, possible causes, solution
actions, and resource implications.
g. Consideration should be given to making Heads of School reports more widely
accessible (such as on the Learning and Teaching website as well as the
Evaluation website), so that students are able to see that Schools are acting on
their feedback.
8. Do workplace learning subjects need a different set of core items?
There are fundamental differences between the learning and teaching environments
in the workplace and those organized at the university. Work placement requires the
student to apply theory to practice in a real-world setting and work within the
workplace culture/norms. The learning and teaching opportunities in the workplace
are dependent on the context and therefore variable. Furthermore, the design of
workplace learning subjects varies at CSU. For example, in Subject 1 students
spend six weeks in the workplace and attend half-day briefing and de-briefing
on-campus classes, whereas in Subject 2 (a hybrid) students spend one week in the
workplace and attend eight weeks of on-campus classes. Clearly, Subject 2 could be
adequately evaluated by the general core items in the OES, whereas Subject 1 would
require a different set of items targeted at the workplace learning experience.
A 2009 Working Party of the University (Curriculum) Learning and Teaching
Committee, set up to investigate practices in the collection and use of data on
students' experience of their fieldwork, identified that the mandatory 11 core items in
the current OES are "inadequate for evaluating workplace learning (previously
fieldwork) subjects with substantive amounts of work placement". As students
attending work placements encounter a very different learning and teaching
environment from that of conventional university settings, this Working Party
recommended that a set of core items be developed to adequately evaluate
workplace learning subjects.
In 2010 the Education for Practice Institute (EFPI) commenced work on developing
this set of core workplace learning items. Academic Senate (1 December 2010,
Minutes point 10.1) approved the following definition of a WPL subject for coding in
CASIMS and the OES:
• The subject must be completed for academic credit.
• The subject must have a work placement making up greater than 75% of the
learning activities.
• Work placement is where students engage in real-world, not simulated, work
activities.
• The work placement must occur in a real-world workplace or a CSU-operated
clinic, farm, radio station, winery, etc. That is, the work placement can be owned
or operated by industry/professional partners or by the University.
This CASIMS definition does not include:
• Workplace experience that is not for credit towards the completion of the course.
• Compulsory work experience that is outside of a formal subject in which students
enrol.
• Hybrid subjects where there is workplace experience AND distance or on-campus
components of the subject.
Activity 8: Develop a set of core workplace learning items to adequately evaluate
this unique teaching and learning experience. For the purpose of subject evaluation,
workplace learning subjects should be classified according to the criteria approved by
Academic Senate in the 1 December 2010 Minutes (point 10.1).
Implications
a) Workplace-based learning subjects should be clearly identified.
b) A separate set of mandatory items should be developed (with equal rigor) for
gathering feedback from WPL subjects.
Development of the revised Survey
Consideration of the foregoing issues led to the development of two sets of core items
for the revised survey: a set of General Core items and another for Workplace-based
subjects. The development comprised the following phases:
• Phase 1: A review of the literature; a survey of practices at other institutions; and
the gathering of feedback on the OES from all of the Schools at CSU.
• Phase 2: Development of key constructs and drivers; identification of key CSU L&T
values, principles and guidelines; and drafting of survey items based on the literature
on best practices and on the CSU L&T values, principles and guidelines (see Table 1).
• Phase 3: Assessing construct validity (Trochim, 2006), including translation validity
(review by the Working Group of the key constructs, CSU L&T values, principles and
guidelines, and survey items for congruency, face and content validity, and
consideration of the response format) and criterion-referenced validity (pilot testing
and interviews to ascertain how the sample group answered the items and their
perceptions of the clarity of each item; McDowell, 2006).
Values, constructs and factors
The Working Party identified a list of key values that would shape the interpretation of
the constructs and reflect the review of the literature and the recommendations. The
SES should make a meaningful, yet not singular, contribution to:
• Student reflection;
• Teacher feedback;
• Improvement of the learning experience;
• Informing University processes;
• Measuring progress and change*;
• Meeting standards;
• Determining progress toward faculty member goals.
*Subject to a determination of the instrument's sensitivity to measure change in practice.
The key construct to be measured by the revised surveys was identified as “the student
perceptions of their learning experience in a CSU subject”. This construct reflects the
needs and drivers identified in the recommendations and the direction provided by the
extant literature and its focus on modifiable processes in the design and implementation
of the learning experience. The key emphases underpinning the construct are as follows:
— The CSU subject is the context and object of the survey.
— The focus of the evaluation is the features of a CSU subject that contribute to the
learning experience.
— Student perceptions of the experience constitute the target responses.
Questions on the surveys seek information about those things to which CSU has
assigned value in relation to the students' learning experience (hence the suggested
name, the Subject Experience Survey, or SES). They do not preclude asking questions
that pertain to the way a subject fits into a course.
The Working Party employed the literature and recommendations to identify five key
factors that comprise the dimensions of the key construct. These are as follows:
• Design: pertains to student perceptions of the design underpinnings of the learning
experience. This comprises alignment of learning outcomes, design of the learning
experiences, assessment and feedback.
• Context: comprises alignment of the designed learning experiences with a relevant
and meaningful context (the profession and the workplace within which learners will
be working upon graduation).
• Content: includes its currency, scope and depth as perceived by students and peers.
• Implementation: refers to the way the learning experience is enacted as perceived
by students. This includes the execution of the design of the curriculum: the way
instruction, assessment and feedback are implemented, the teaching approach, and
the responsiveness of the instructor.
• Students: pertains to student assumptions, perceptions and expectations about the
learning experience. This includes prior learning, reason for enrolment, and level of
interest and engagement with the learning experience.
These factors provided the scope to generate student responses pertaining to the design,
efficiency, effectiveness and capacity dimensions described by Pratt (1997).
Response format
The Working Party identified several issues described in the assessment literature
pertaining to Likert-type devices that are relevant to the SES. They are:
• The meaning assigned to, and the scoring of, the midpoint in Likert-type scales (an
issue that is especially problematic in the current 7-point scale).
• The type of scoring: which format(s) provide the most reliable responses?
• Method variance: the relative impacts of using the same or different response
formats in an evaluation device.
In building a response to these issues, the Working Party decided to use a five-point
scale with a non-neutral midpoint (i.e., the midpoint on the scale should not confuse or
conflate uncertainty about a response with the assignment of a score value on a
continuous scale).
Moreover, the Working Party agreed that the use of multiple 5-point formats was
appropriate to address issues of method variance, and agreed to employ formats
focused on the extent/presence and frequency of occurrence of construct factors rather
than on global judgments or opinions of satisfaction.
Construct validation (translation and criterion-referenced)
A draft set of items following these guidelines was developed for consideration by the
Working Party members (the Expert Group) at a face-to-face meeting spanning two
half-days. This work comprised the Expert Group examining the completeness of the
matrix of key constructs and draft survey items (see Table 1 below) to consider its
accuracy and adequacy. The Expert Group also examined the draft survey items for
translation validity, which comprised reviewing the draft question items against the key
constructs, the CSU L&T values and their dimensions, and the learning and teaching
principles and guidelines for congruency, face and content validity.
The next step involved pilot testing the two sets of core items. The General Core was
sent to 8,034 students (1,180 responses received [15%]). The WPL set was sent to
707 students (113 responses received [16%]). Students were asked to respond to the
surveys in relation to an identified subject they had studied in the preceding semester.
Participants were invited to comment on the clarity of the survey items. The idea
was to ascertain how the items were understood and interpreted by respondents.
They were also invited to participate in a phone interview about their perceptions of the
items in the new instrument. 361 students responding to the General Core and 36
students responding to the WPL Core agreed to be interviewed by phone about their
interpretation of the question items and their responses to them.
The following research questions guided the pilot test of the surveys.
• What did students think each item was asking about?
• Is there a difference in responses on compulsory and elective subjects?
• Is there a difference in responses of internal and distance education students?
• Is there any variation in responses based on students' grade expectations?
• Is there a difference in responses based on the time students spent studying?
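
Several of these questions amount to subgroup comparisons. A minimal sketch of how one such comparison might be run on the pilot data follows; the data frame layout, the "mode" column and the function name are hypothetical, and this is not a description of the analysis Evaluation Services actually performed.

import pandas as pd
from scipy import stats

def compare_modes(responses: pd.DataFrame, item: str):
    """Welch's t-test on one item's scores: internal vs distance students.
    Assumes a hypothetical "mode" column and items coded 1-5."""
    internal = responses.loc[responses["mode"] == "internal", item].dropna()
    distance = responses.loc[responses["mode"] == "distance", item].dropna()
    t, p = stats.ttest_ind(internal, distance, equal_var=False)
    return t, p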
The pilot study sample was drawn from all faculties and comprised on and off-campus
students from across the University. Evaluation Services administered this survey using
SurveyMonkey, separately from the regular OES, in July 2011 at the start of Term 2,
well after the window for students to complete the current OES had closed.
Participants in the study sample were sent a link to the online survey via a personal
email to their email ID and asked to complete the survey. They were assured of the
anonymity of their responses. Phone interviews were carried out with a selection
of volunteering students.
Review and finalization
Data derived from the pilot test of the draft surveys were reviewed, and improvements
made to the items in the instruments, at a second face-to-face meeting of the Working
Party in October 2011. The revised instruments are enclosed.
SES implementation
Membership of teams engaged in this work is as follows:
Working Party Members
• Som Naidu (DLTS/QEES, Chair)
• Derek Sequeira (DLTS, Evaluation Services)
• Mike Keppell (FLI Representative)
• Edwina Adams (EFPI Representative)
• Alan Bain (Sub-Dean L&T Representative, Faculty of Education)
• Andrea Crampton (Sub-Dean L&T, Faculty of Science)
• Kay Plummer (Sub-Dean L&T, Faculty of Business)
• Jenny Kent (Former Sub-Dean L&T, Faculty of Business)
• Joy Wallace (Associate Dean, Faculty of Arts)
• Dianne McGrath (Former Sub-Dean L&T, Faculty of Business)
• Jenni Munday (Representative of Sub-Dean L&T, Faculty of Education)
Project Team
• Project Sponsor, Marian Tulloch (Chair)
• Project Leader, Som Naidu (Alternative chair)
• Project Manager, TBD
• Business Analyst, TBD
• Enterprise Architect, Paul Bristow
• Business Expert, Derek Sequeira/Caroline Rose
Key Stakeholders
• SEC, Deans, Heads of Schools, Course Directors, Course Coordinators, Sub-Deans, CLT, Subject Coordinators
Steering Committee
• Marian Tulloch (LTS)
• Som Naidu (OES Working Party, LTS)
• Derek Sequeira (OES Working Party, LTS)
• Alan Bain (OES Working Party)
• Edwina Adams (OES Working Party and EFPI)
• Mike Keppell (OES Working Party, FLI)
• Di Ireland (DIT Representative)
• Nominee from Division of Student Administration
Implementation timeline
Activity | Start date | Delivery date
1. Literature Review | July, 2010 | September, 2010
2. Survey of Practices outside CSU | July, 2010 | September, 2010
3. Feedback from CSU Schools | July, 2010 | August, 2010
4. Progress report to the L&T Committee | July, 2010 | September, 2010
5. Progress report to Academic Senate | July, 2010 | September, 2010
6. DIT to schedule implementation | February, 2011 | December, 2011
7. Development of draft core items | January, 2011 | February/March, 2011
8. Mapping of key theoretical constructs | February, 2011 | April/May, 2011
9. Seek ethics approval from the University ethics committee | March, 2011 | April, 2011
10. Pilot test: Reliability and validity testing | July, 2011 | October, 2011
11. Progress report to the L&T Committee | October, 2011 | October, 2011
12. Progress report to Academic Senate | November, 2011 | Session 3, 2011
13. Pilot implementation of revised OES Questionnaire | November, 2011 | Session 1, 2012
14. Review of pilot implementation of the SES | End of Session 3, 2012 (February '12) | Session 1, 2012
15. Professional development around the revised Subject Experience Survey | Session 1, 2012 | Session 2, 2012
References
Abrami, P. C. (2001). Improving judgements about teaching effectiveness using teacher
rating forms. New Directions for Institutional Research, 109, Spring 2001. San
Francisco: Jossey-Bass.
Australian Vice-Chancellors' Committee & Graduate Careers Council of Australia (2001).
Code of practice on the public disclosure of data from the Graduate Careers Council
of Australia's Graduate Destination Survey, Course Experience Questionnaire and
Postgraduate Research Experience Questionnaire. Canberra: Australian
Vice-Chancellors' Committee. Available online at: www.graduatecareers.com.au/
(accessed August 5, 2010).
Goyder, J. (1987). The silent minority: Non-respondents on sample surveys. Cambridge:
Polity Press.
Kember, D., Leung, D. Y. P., & Kwan, K. P. (2002). Does the use of student feedback
questionnaires improve the overall quality of teaching? Assessment and Evaluation
in Higher Education, 27, 411-425.
McDowell, I. (2006). Measuring health: A guide to rating scales and questionnaires.
Oxford: Oxford University Press.
Neilsen, H. D., Moos, R. H., & Lee, E. A. (1978). Response bias in follow-up studies of
college students. Research in Higher Education, 9, 97-113.
Pratt, D. D. (1992). Conceptions of teaching. Adult Education Quarterly, 42(4), 203-220.
Pratt, D. D. (1997). Reconceptualizing the evaluation of teaching in higher education.
Higher Education, 34, 23-44.
Richardson, J. T. E. (2005). Instruments for obtaining student feedback: A review of the
literature. Assessment and Evaluation in Higher Education, 30(4), 387-415.
Spencer, K. J., & Schmelkin, L. P. (2002). Student perspectives on teaching and its
evaluation. Assessment and Evaluation in Higher Education, 27, 397-409.
Trochim, W. M. (2006). Research Methods Knowledge Base (2nd ed.). Retrieved from
http://www.socialresearchmethods.net/kb/
Watkins, D., & Hattie, J. (1985). A longitudinal study of the approaches to learning of
Australian tertiary students. Human Learning, 4, 127-141.