
Assessment
Palestine Polytechnic University
TAM visit - First day
Chris Rust
Oxford Centre for Staff and Learning Development
Oxford Brookes University, UK
Student learning and
assessment
“Assessment is at the heart of the student
experience”
(Brown, S & Knight, P., 1994)
“From our students’ point of view, assessment
always defines the actual curriculum”
(Ramsden, P.,1992)
“Assessment defines what students regard as
important, how they spend their time and how
they come to see themselves as students and
then as graduates.... If you want to change
student learning then change the methods of
assessment”
(Brown, G et al, 1997)
[Cartoon by Bob Pomfret, copyright Oxford Brookes University]
Influence on student
learning
Assessment influences both:
 Cognitive aspects - what and how
 Operant aspects - when and how much
(Cohen-Schotanus, 1999)
Issues in assessment
Reliability
Laming (1990)
Very poor agreement between blind double marking
of exam scripts at ‘a certain university’. The only
encouraging thing that can be said about these
correlations is that they are all positive.
Newstead & Dennis (1994)
Huge differences between both internal and external
examiners’ marks, with externals more random than
internals
           Student E              Student F
         Externals  Internals   Externals  Internals
Lowest      52         47          50         55
Highest     80         68          85         75
Reliability contd
Hartog & Rhodes (1935)
Experienced examiners, 45% marked differently
to the original.
When remarked, 43% gave a different mark
Hanlon et al (2004)
Careful and reasonable markers given the same
guidance and the same script could produce
differing grades;
The same examiners also gave different marks
when remarking after a gap of time
Community of assessment
“Consistent assessment decisions among
assessors are the product of interactions
over time, the internalisation of exemplars,
and of inclusive networks. Written
instructions, mark schemes and criteria,
even when used with scrupulous care,
cannot substitute for these”
(HEFCE, 1997)
Task
In 3s:
Consider the example assessment grid.
a) What do you think of the idea in principle? Could you
make use of the idea?
b) What do you think of this specific example? How
would you need to adapt it?
c) Do you use anything like this already? If so, what is it
like and how well does it work?
Validity & Authenticity
Key features of effective assessment tasks:
 Valid – truly assess what they claim to assess
 Authentic – a ‘real’ world task, even better in a ‘real’
world setting (placements, live projects, etc.)
 Relevant – something which the student is personally
interested in, and wants to know more about or be
able to do
Both authenticity and relevance should make the
activity meaningful to the student, and therefore be
motivating (the antithesis of an ‘academic exercise’!)
Constructive alignment
- what is it?
“The fundamental principle of constructive
alignment is that a good teaching system
aligns teaching method and assessment to
the learning activities stated in the objectives
so that all aspects of this system are in accord
in supporting appropriate student learning”
(Biggs, 1999)
Constructive alignment:
3-stage course design
 What are “desired” outcomes?
 What teaching methods require
students to behave in ways that
are likely to achieve those
outcomes?
 What assessment tasks will tell
us if the actual outcomes match
those that are intended or
desired?
This is the essence of
‘constructive alignment’
(Biggs, 1999)
[Diagram: the Learner at the centre, linked to Learning outcomes, Learning activity and Assessment]
Task
Individually:
1. Consider one of your courses and its learning outcomes.
2. Then consider whether the current teaching and
assessment methods are consistent with these outcomes.
In particular, ask yourself: is the achievement of each
learning outcome really being assessed?
In pairs:
Take it in turns to tell your partner your conclusions.
Where you have found discrepancies, the job of the
listener is to help their partner find ways to change the
assessment methods so that they are more valid and
better aligned with the learning outcomes
Refer to handout
Purposes of assessment
Purposes of assessment
Why do we assess students?
How many different reasons can you
identify?
Purposes of assessment
(adapted from Brown G et al 1997)
 motivate students
 diagnose a student's strengths and weaknesses
 help students judge their own abilities
 provide a profile of what each student has learnt
 provide a profile of what the whole class has learnt
 grade or rank a student
 permit a student to proceed
 select for future courses
 license for practice
 select, or predict success, in future employment
 provide feedback on the effectiveness of the teaching
 evaluate the strengths and weaknesses of the course
 achieve/guarantee respectability and gain credit with
other institutions and employers
Purposes of assessment 2
1. Motivation
2. Creating learning activities
3. Providing feedback
4. Judging performance (to produce marks, grades,
degree classifications; to differentiate; gatekeeping;
qualification)
5. Quality assurance
1, 2 & 3 concern learning and perform a largely
formative function; should be fulfilled frequently
4 & 5 are largely summative functions;
need to be fulfilled infrequently but well
Summative and formative
assessment
Formative vs Summative
assessment
Formative: focus is to help the student learn
Summative: focus is to measure how much has been
learnt.
not necessarily mutually exclusive, but….
Summative assessment tends to:
 come at the end of a period or unit of learning
 focus on judging performance, grading, differentiating
between students, gatekeeping
 be of limited or even no use for feedback
Problems of summative
assessment
Can:
 encourage surface/strategic approaches
 not value/build on/make use of prior learning and
experience and student ability
 encourage playing safe/avoid risk-taking
 not provide feedback (e.g. exams)
 be time consuming for staff/reduce overall amount of
assessment
Potential of formative
assessment
Feedback is the most powerful single influence that
makes a difference to student achievement
Hattie (1987) - in a comprehensive review of 87 meta-analyses of studies
Feedback has extraordinarily large and consistently
positive effects on learning compared with other
aspects of teaching or other interventions designed to
improve learning
Black and Wiliam (1998) - in a comprehensive review
of formative assessment
Formative assessment – where
& when?
(Chickering and Gamson, 1987)
Knowing what you know and don’t know focuses
learning. Students need appropriate feedback on
performance to benefit from courses.
 In getting started, students need help in assessing
existing knowledge and competence.
 In classes, students need frequent opportunities to
perform and receive suggestions for improvement.
 At various points during college, and at the end,
students need chances to reflect on what they have
learnt, what they still have to learn, and how to assess
themselves.
11 conditions under which
assessment supports learning 1
(Gibbs and Simpson, 2002)
1. Sufficient assessed tasks are provided for students
to capture sufficient study time (motivation)
2. These tasks are engaged with by students, orienting
them to allocate appropriate amounts of time and
effort to the most important aspects of the course
(motivation)
3. Tackling the assessed task engages students in
productive learning activity of an appropriate kind
(learning activity)
4. Assessment communicates clear and high
expectations (motivation)
11 conditions 2
5. Sufficient feedback is provided, both often enough and in
enough detail
6. The feedback focuses on students’ performance, on
their learning and on actions under the students’ control,
rather than on the students themselves and on their
characteristics
7. The feedback is timely in that it is received by students
while it still matters to them and in time for them to pay
attention to further learning or receive further assistance
8. Feedback is appropriate to the purpose of the
assignment and to its criteria for success
9. Feedback is appropriate, in relation to students’
understanding of what they are supposed to be doing
10. Feedback is received and attended to
11. Feedback is acted upon by the student
Good feedback practice:
(Nicol and Macfarlane-Dick 2006)
 helps clarify what good performance is (goals, criteria,
standards)
 facilitates the development of self-assessment and
reflection in learning
 delivers high quality information to students about their
learning
 encourages teacher and peer dialogue around learning
 encourages positive motivational beliefs and self esteem
 provides opportunities to close the gap between current
and desired performance
 provides information to teachers that can be used to help
shape teaching
Self and Peer Assessment
Involve the students – 1
Self assessment
Simple:
 Strengths of this piece of work
 Weaknesses in this piece of work
 How this work could be improved
 The grade it deserves is…
 What I would like your comments on
“It is the interaction between both believing in
self-responsibility and using assessment formatively
that leads to greater educational achievements”
(Brown & Hirschfeld, 2008)
More complex: see handout
Peer marking –
using model answers
(Forbes & Spence, 1991)
Scenario:
 Engineering students had weekly maths problem
sheets marked and problem classes
 Increased student numbers meant marking impossible
and problem classes big enough to hide in
 Students stopped doing problems
 Exam marks declined (average fell from 55% to 45%)
Solution:
 Course requirement to complete 50 problem sheets
 Peer assessed at six lecture sessions but marks do
not count
 Exams and teaching unchanged
Outcome: Exam marks increased (average rose from 45% to 80%)
Peer feedback - Geography (Rust,
2001)
Scenario
 Geography students did two essays but no apparent
improvement from one to the other despite lots of tutor time
writing feedback
 Increased student numbers made tutor workload impossible
Solution:
 Only one essay but first draft required part way through course
 Students read and give each other feedback on their draft essays
 Students rewrite the essay in the light of the feedback
 In addition to the final draft, students also submit a summary of
how the 2nd draft has been altered from the 1st in the light of the
feedback
Outcome: Much better essays
Peer feedback - Computing (Zeller,
2000*)
The Praktomat system allows students to read, review, and
assess each other’s programs in order to improve quality
and style. After a successful submission, the student can
retrieve and review a program of some fellow student
selected by Praktomat. After the review is complete, the
student may obtain reviews and re-submit improved
versions of his program. The reviewing process is
independent of grading; the risk of plagiarism is narrowed
by personalized assignments and automatic testing of
submitted programs
[*Available at:
http://www.infosun.fim.uni-passau.de/st/papers/iticse2000/iticse2000.pdf]
Peer feedback – Computing
cont’d (Zeller, 2000*)
In a survey, more than two thirds of the students
affirmed that reading each other’s programs improved
their program quality; this is also confirmed by
statistical data. An evaluation shows that program
readability improved significantly for students that had
written or received reviews.
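Praktomat's own code is not reproduced here, but its core allocation step - giving each student a fellow student's program to review, never their own - can be sketched in a few lines. The student names and file names below are purely illustrative, not taken from the system.

```python
import random

def assign_reviews(submissions):
    """Randomly give each student a fellow student's submission to review,
    never their own (i.e. pick a derangement of the class list)."""
    students = list(submissions)
    while True:
        reviewers = students[:]
        random.shuffle(reviewers)
        # retry until nobody is paired with their own submission
        if all(r != s for r, s in zip(reviewers, students)):
            return {r: submissions[s] for r, s in zip(reviewers, students)}

# illustrative class of three students and their submitted programs
allocation = assign_reviews(
    {"amal": "prog_a.py", "badr": "prog_b.py", "dana": "prog_d.py"}
)
```

Rejection sampling is the simplest correct approach here: for any class of two or more students a derangement always exists, so the loop terminates quickly in practice.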
Mechanise assessment
1. Statement banks
2. Assignment attachment sheets
3. Computer-aided assessment
Statement Banks
Write out frequently used feedback comments,
for example:
1. I like this sentence/section because it is clear and
concise
2. I found this paragraph/section/essay well organised
and easy to follow
3. I am afraid I am lost. This paragraph/section is
unclear and leaves me confused as to what you mean
4. I would understand and be more convinced if you gave
an example/quote/statistic to support this
5. It would really help if you presented this data in a table
6. This is an important point and you make it well
etc…….
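A statement bank like the one above is trivial to mechanise. The sketch below is illustrative Python, not taken from any particular marking system: the comments are stored under numbers, so a marker can jot codes such as "2, 4" on a script and expand them into full written feedback afterwards.

```python
# Illustrative statement bank: frequently used comments keyed by number.
STATEMENT_BANK = {
    1: "I like this sentence/section because it is clear and concise",
    2: "I found this paragraph/section/essay well organised and easy to follow",
    3: "I am afraid I am lost. This paragraph/section is unclear and leaves "
       "me confused as to what you mean",
    4: "I would understand and be more convinced if you gave an "
       "example/quote/statistic to support this",
    5: "It would really help if you presented this data in a table",
    6: "This is an important point and you make it well",
}

def expand_feedback(codes):
    """Turn a marker's shorthand list of code numbers into full feedback."""
    return "\n".join(f"- {STATEMENT_BANK[c]}" for c in codes)

feedback = expand_feedback([2, 4])
```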
Weekly CAA testing –
case study data
Student  Week 1  Week 2  Week 3  Week 4  Week 5  Week 6  Week 7
A          57      63      21      35      40      27      20
B          68      71      45      79      83      80      77
C          23      21      11       -       -       -       -
D          45      51      45      79      83      80      77
E           -       -       -       -       -       -       -
F          63       -      51       -      47       -      35
G          54      58      35      50      58      60      62
(Brown, Rust & Gibbs, 1994)
CAA quizzes (Catley, 2004)
Scenario
 First term, first year compulsory law module
 A new subject for most (75%) students
 High failure rate (25%), poor general results (28% 3rd
class, 7% 1st)
Solution:
 Weekly optional VLE quizzes (50% take-up)
Outcome:
Quiz takers: 4% fail, 14% 3rd class, 24% 1st
Non-quiz takers: same pattern as before
Overall:
14% fail (approx half previous figure)
21% 3rd class
14% 1st (double previous figure)
Assessing a selection (Rust, 2001)
Scenario:
 Weekly lab reports submitted for marking
 Increased student numbers meant heavy staff
workload and increasingly lengthy gap before
returned so feedback of limited/no use
Solution:
 Weekly lab reports still submitted
 A sample of reports looked at, and generic feedback emailed to all students within 48 hours
 At end of semester, only three weeks’ lab reports
selected for summative marking
Outcome:
 Better lab reports and significantly less marking
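The sampling scheme in this case study can be sketched as below. The function name, parameters and default values are illustrative assumptions, not details from the original case study.

```python
import random

def select_for_marking(weekly_reports, feedback_sample_size=5, summative_weeks=3):
    """Each week, read only a sample of the submitted reports to produce
    generic feedback for everyone; at the end of the semester, pick a few
    whole weeks at random for summative marking."""
    feedback_samples = {
        week: random.sample(reports, min(feedback_sample_size, len(reports)))
        for week, reports in weekly_reports.items()
    }
    marked_weeks = random.sample(sorted(weekly_reports), summative_weeks)
    return feedback_samples, marked_weeks

# illustrative data: 10 weeks of lab reports from a class of 8 students
reports = {f"week{w}": [f"student{s}" for s in range(8)] for w in range(1, 11)}
samples, marked = select_for_marking(reports)
```

Because students cannot predict which weeks will count, every report still has to be done well, while staff marking load drops to three weeks' worth per student.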
Assessing groups
Benefits of cooperative
learning
Cooperation, compared with competitive and individualistic
efforts, tends to result in
 higher achievement
 greater long-term retention
 more frequent use of higher-level reasoning
 more accurate and creative problem-solving
 more willingness to take on and persist
with difficult tasks
 more intrinsic motivation
 transfer of learning from one situation to another
 greater time on task
(Johnson, Johnson and Smith 2007, p 19)
Problems with cooperative
learning
 many students don’t like it
 students may find group work assessment unfair
 social loafing
 free riding
 lack of teamwork skills
 group think, or avoiding conflict
 lack of time to gel into an effective group
 inappropriate group size and/or lack of sufficient
heterogeneity in the group
(Johnson & Johnson 1999)
Addressing the issues of
group assessment
Task
Skim read the paper by Gibbs and
discuss in groups of 3
Students must be brought
into the community of
assessment practice
 To improve student learning it is necessary that “the student comes to hold a concept
of quality roughly similar to that held by the teacher” (Sadler, 1989)
 Student beliefs about knowledge & knowing affect views on assessment &
feedback (O’Donovan, 2016)
 Passive receipt of feedback has little effect on future performance
(Fritz et al, 2000)
 Dialogue and participatory relationships are key elements of
engaging students with assessment feedback (ESwAF FDTL, 2007)
 It is not enough to make it a better monologue; feedback must be seen as a
dialogue (Nicol, 2009)
 “participation, as a way of learning, enables the student to both absorb, and be
absorbed in the culture of practice” (Elwood & Klenowski, 2002, p. 246)
 The most significant factor in student academic success is student involvement
fostered by student/staff interactions and student/student interactions (Astin,
1997)
 The only common factor in study of departments deemed excellent in both
research & learning & teaching is high levels of student involvement (Gibbs,
2007)
Rust, C., O’Donovan, B. & Price, M. (2005)
Social-constructivist assessment process model
Assessment literacy
For students to reach their potential in terms of their
assessed performance they need to become assessment literate
Assessment literacy encompasses:
 an appreciation of assessment’s relationship to learning;
 a conceptual understanding of assessment (i.e. understanding of the
basic principles of valid assessment and feedback practice, including the
terminology used);
 understanding of the nature, meaning and level of assessment criteria
and standards;
 skills in self- and peer assessment;
 familiarity with technical approaches to assessment (i.e. familiarity with
pertinent assessment and feedback skills, techniques, and methods,
including their purpose and efficacy); and
 possession of the intellectual ability to select and apply appropriate
approaches and techniques to assessed tasks (not only does one have
the requisite skills, but one is also able to judge which skill to use when,
for which task).
(Price et al, 2012, pp. 10-11)
Key aspect to developing
students’ assessment literacy
Self and peer assessment need to be seen
as essential graduate attributes
(i.e. learning outcomes themselves, rather
than processes)
Feedback needs to be seen as a dialogue
(rather than a monologue)
… with an explicit intention to bring students
into the community of assessment practice
References
Angelo, T. (1996) Transforming assessment: high standards for higher learning, AAHE
Bulletin, April, 3–4.
Black, P. & Wiliam, D. (1998) Assessment and classroom learning. Assessment in
Education, 5(1), 7–74.
Biggs, J. (1999) Teaching for quality learning at university. Buckingham: SRHE & Open
University Press
Brown, S., Rust, C. and Gibbs, G. (1994). Involving students in the assessment process, in
Strategies for Diversifying Assessments in Higher Education, Oxford: Oxford Centre for
Staff Development, and at DeLiberations http://www.lgu.ac.uk/deliberations/ocsdpubs/div-ass5.html
Brown G, Bull J & Pendlebury M. (1997). Assessing student learning in higher education.
London: Routledge
Brown, S. and Knight, P. T. (1994). Assessing Learners in Higher Education. London: Kogan
Page.
Catley, P. (2004). "One lecturer's experience of blending e-learning with traditional teaching
or how to improve retention and progression by engaging students." Brookes eJournal of
Learning and Teaching 1(2).
Chickering, A. W. and Gamson, Z. F. (1987). "Seven Principles for Good Practice in
Undergraduate Education." AAHE Bulletin, March 1987: 3-7.
Forbes, D. A. & Spence, J. (1991). An experiment in assessment for a large class, in: R.
Smith (Ed.) Innovations in engineering education. London: Ellis Horwood.
Gibbs, G. & Simpson, C. (2002). Does your assessment support your students’ learning?
Available online at: www.brookes.ac.uk/services/ocsd/1_ocsld/lunchtime_gibbs.html
(accessed 30 November 2002).
References cont’d
Gibbs, G (1992). Improving the quality of student learning, Bristol: TE
Hattie, J. A. (1987) Identifying the salient facets of a model of student learning: a synthesis
of meta-analyses. International Journal of Educational Research, 11, 187–212.
Johnson, D., Johnson, R. and Smith, K. (2007). "The State of Cooperative Learning in
Postsecondary and Professional Settings." Educational Psychology Review 19(1): 15-29.
Johnson, D. W. and Johnson, R. T. (1999). Learning Together and Alone: Cooperative,
Competitive and Individualistic Learning (Fifth Edition). Needham Heights, MA: Allyn and
Bacon.
Laming, D. (1990) The reliability of a certain university examination compared with the
precision of absolute judgements, Quarterly Journal of Experimental Psychology Section
A—Human Experimental Psychology, 42(2), 239–254.
Nicol, D. J. and Macfarlane-Dick, D. (2006). "Formative assessment and self-regulated
learning: a model and seven principles of good feedback practice." Studies in Higher
Education 31(2): 199-218.
Ramsden, P. (1992). Learning to teach in higher education. London: Routledge
Race, P. (2001). "Assessment Series No.9: A Briefing on Self, Peer and Group
Assessment." Retrieved 20 April, 2012, from
http://www.heacademy.ac.uk/resources/detail/resource_database/SNAS/A_Briefing_on_
Self_Peer_and_Group_Assessment.
Rust, C. (2001). A briefing on assessment of large groups. LTSN Generic Centre
Assessment Series,12, York: LTSN.
Zeller, A. (2000). Making Students Read and Review Code. [Online] Retrieved from
http://portal.acm.org/ft_gateway.cfm?id=343090&type=pdf 19 April 2011.