Gender preferences in assessment:
Do men prefer a ‘one-off’ whilst women prefer to ‘keep at it’?
Andy Ballard (corresponding author)
Faculty of Business, AUT University, Private Bag 92006, Auckland 1142, New Zealand
Email: [email protected]
Rowena Sinclair
Faculty of Business, AUT University, Private Bag 92006, Auckland 1142, New Zealand
Email: [email protected]
Preferred Stream:
Stream 7
Presenter Profile:
Andy Ballard is a senior lecturer in and manager of the Centre for Business
Interdisciplinary Studies at AUT University. Andy is the recipient of two
university-wide teaching awards. His research interests range from
management education to work/family conflict.
Gender preferences in assessment:
Do men prefer a ‘one-off’ whilst women prefer to ‘keep at it’?
Using data from 100 undergraduate business students, this paper analyses students’ assessment
preferences. Independent variables of gender, age, stage of degree and major were tested for effects
on student preferences for exams or coursework. Specific preferences for exams or coursework in
terms of a tool for progress feedback, measuring abilities, highlighting content importance, receiving
lecturer direction and performing highly were also investigated. Gender differences were minimal but
age of students and stage of study were significant factors. Generally students showed a clear
preference for coursework on all criteria. This paper concludes that a move towards coursework
would be likely to enhance student motivation and attitudes towards higher education.
Keywords:
Performance assessment, management education
Assessment forms an integral part of teaching and learning strategies in higher education, and there
has been much debate on associated assessment issues. Many different factors can affect assessment
preference including gender, age and discipline. Several studies (e.g., Adams, Thomas, & King, 2000;
Kniveton, 1996; Woodfield, Earl-Novell, & Solomon, 2005) focussed on gender. Woodfield et al.
(2005) considered that the debate about gender and mode of assessment tends to emphasise the
suitability of females for continuous assessment and males for examination, and indeed this has long
been claimed in the non-academic press. It has been suggested that men perform better in a typical
high pressure exam situation, while women underestimate their abilities, experience higher levels of
anxiety and hence have a tendency to perform worse in exams (Martin, 1997). However, it is difficult
to find any support for the view that there will be a difference in assessment performance between
men and women in coursework. Several studies (e.g., Adams et al., 2000; Clapham & Fulford, 1997;
Kniveton, 1996; Lamberton, Fedorowicz, & Roohani, 2005; Lowry, 1993; Sadler-Smith & Riding,
1999; Simonite, 2003; Woodfield et al., 2005) identified age as another factor contributing to students' perceptions of assessment. For example, mature males seem to see more
advantage in coursework than others. Similarly, several authors (e.g., Birenbaum, 1997; Bridges et al.,
2002; Gammie, Paver, Gammie, & Duncan, 2003; Lamberton et al., 2005; Latham, 1997; Sander,
Stevenson, King, & Coates, 2000; Simonite, 2003; Smith & Miller, 2005) found correlations between disciplinary group and assessment preference, and between assessment preference and learning strategies.
Since one of the key elements of assessment is to ensure fairness and balance, any difference between genders in preferred mode of assessment raises questions of equity. In other words, does the mode of assessment favour one gender over another?
This research seeks to determine whether there are any gender or non-gender factors (e.g. age &
discipline studied) that affect students’ preference for assessment modes. This expands on previous
research by considering a wider range of variables that may impact preference. It also emphasises
preference rather than performance. The research was conducted at the Business School of a New
Zealand university.
An important aspect of this research was to define the two key terms, exams and coursework, for the
respondents. This was not straightforward as several assessments could have components of both. For
example, oral presentations have elements of coursework (the preparation) and of examination (the
delivery). For this paper we adopted the definition used by Bridges et al. (2002) wherein coursework
includes the preparation of major dissertations, essays, field and laboratory reports, creative artwork,
software, seminars and various forms of class tests. This led to the following definitions in the questionnaire instructions: coursework is non-supervised, lecturer-graded essays, reports, and portfolios; exams are any assessment done under time constraints in a supervised setting.
Hypotheses
While Adams et al. (2000) claim that there are significant differences between genders for assessment
preference, Birenbaum (1997) and Woodfield et al. (2005) could find no significant differences in
assessment preference despite citing many articles claiming their existence, and Gammie et al. (2003)
could find no significant differences in performance either. However, Adams et al. (2000) did find significant gender differences among undergraduate business students in particular. This leads to the first hypothesis, H1.
H1: There will be gender differences in assessment preference.
Clapham and Fulford (1997) found significant differences between younger and older candidates in their
performance at assessment centres, while Lowry (1993) found that although age was significant the
effect size was tiny. Kniveton (1996) found that mature males and younger females saw more
advantages in continuous assessment than did others. This leads to the following hypothesis.
H2: There will be age differences in assessment preference.
Woodfield et al. (2005) hinted at the importance of stage of study in their longitudinal survey, but
reported no results relating to that variable, which suggests the following hypothesis.
H3: There is no difference between stages of degree for assessment preference.
Latham (1997) reported minimal subject effects on assessment performance between males and
females in a high school setting. Simonite's (2003) research showed some differences in assessment performance in certain subject groupings such as molecular sciences and law, but found no differences in business. Gammie et al. (2003) found no differences in performance by assessment type across different accounting papers. In contrast, Bridges et al. (2002) found consistent differences in
performance between coursework and examinations across different modules and subjects. The above
findings lead to hypothesis H4.
H4: There will be a difference between students with different majors for assessment preference.
Controlling for the above possible effects of gender, age, stage and major leads to hypothesis H5.
H5: There is a general difference between student preference for exams and preference for
coursework.
The next four hypotheses (H6 to H9) grew from the work of Adams et al. (2000), who found that
students prefer coursework to exams because of the feedback coursework gives them on their
progress; the way coursework provides a measurement of their abilities; the way coursework
emphasises the importance of particular content; and the way coursework indicates the direction the
lecturer wishes them to take.
H6: Students will prefer coursework over exams as a means of giving them feedback on their progress.
H7: Students will prefer coursework over exams as a means of measuring their abilities.
H8: Students will prefer coursework over exams as a means of emphasising the importance of
particular content.
H9: Students will prefer coursework over exams as a means of ensuring that they learn what their
lecturer wants them to.
METHOD
Participants
There were 100 business students, of whom 47 were male and 53 were female. The ages of students
ranged from under 20 to over 31. Forty-eight of the students were in their first year, 17 were in their
final year and 35 were in an intermediate year. Students were predominantly majoring in Marketing,
Advertising or Accounting, or had not yet selected a major. All students responded to every question
in the survey, and there were no unusable responses.
Materials
The questionnaire consisted of a series of seven-point end-defined Likert scale questions, which
provided a reasonable balance between reliability, validity, discriminating power and respondent
preference for question type. Students answered demographic questions on a nominal scale. Refer to
Appendix 1 for the questionnaire used.
Procedure
After successfully pre-testing our questionnaire we used convenience sampling to select the sample.
Most of the students were surveyed while they were waiting outside classrooms in the Faculty of
Business building, some were surveyed in the hallways, and some at the café at the base of the
building. We conducted the surveys at various times of the day.
Participants were informed of the purpose of this research, and were assured of privacy, anonymity
and confidentiality. Both of the authors were lecturers within the Faculty of Business so two research
students conducted the surveys to avoid acquiescence bias arising from the power relationship
between lecturers and students.
Limitations of research design
Sample bias is apparent because we did not conduct the surveys in the evenings, which may have reduced the number of part-time and mature students surveyed. The research was also limited to BBus students at
one New Zealand university. Whilst numerous studies (e.g., Birenbaum, 1997; Sadler-Smith & Riding, 1999; Sander et al., 2000) found that cognitive style is important in determining
assessment preference, cognitive testing would have added greatly to the complexity and number of
questions and was not included in the current study. Other studies (Lamberton et al., 2005; Simonite, 2003) considered whether students' home country was a factor in assessment preference, but found no evidence of significance, so it was not included here.
RESULTS
Validity
To determine if the two subscales (exam and coursework) were indeed present, a factor analysis was
performed using Principal Component Analysis. This showed two clear subscales, one for the exam
factor and one for the coursework factor. This analysis further showed that over 73% of the variance
was accounted for by just three variables (Q3b, Q2b, Q2a). Sixty-four percent of the exam subscale
variance can be explained by one variable, Q2a (feedback exam). Sixty-five percent of the coursework
subscale can be explained by one variable, Q3b (abilities coursework).
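As an illustration only (not the authors' original analysis), the following Python sketch shows how such a check could be run on the ten Likert items; the file name survey_responses.csv and the item column names (q2a_exam, q3b_course, etc.) are hypothetical stand-ins for the survey data.

    # Illustrative sketch: PCA over the ten 7-point Likert items to check whether
    # exam and coursework items load on two separate components (hypothetical names).
    import pandas as pd
    from sklearn.decomposition import PCA

    exam_items = ["q2a_exam", "q3a_exam", "q4a_exam", "q5a_exam", "q6a_exam"]
    course_items = ["q2b_course", "q3b_course", "q4b_course", "q5b_course", "q6b_course"]

    responses = pd.read_csv("survey_responses.csv")   # hypothetical file: 100 rows, one per student
    X = responses[exam_items + course_items].to_numpy()

    pca = PCA(n_components=2)
    pca.fit(X)

    print("variance explained by first two components:", pca.explained_variance_ratio_)
    loadings = pd.DataFrame(pca.components_, columns=exam_items + course_items)
    print(loadings.round(2))   # exam items and coursework items should separate across the two components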
To determine the internal validity of each of the subscales, the mean of the five exam-related questions was correlated with the overall preference for exams. This showed a very high correlation of 0.83, which indicates acceptable internal validity for the exam subscale. Similarly, the mean of the five coursework-related questions was correlated with the overall preference for coursework. This showed a very high correlation of 0.78, which indicates acceptable internal validity for the coursework subscale.
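A minimal sketch of this correlation check, under the same hypothetical data layout as above (the overall-preference columns q7a_exam_overall and q7b_course_overall are likewise assumed names):

    # Illustrative sketch: correlate each subscale mean with the overall preference item.
    import pandas as pd
    from scipy.stats import pearsonr

    responses = pd.read_csv("survey_responses.csv")   # hypothetical data file
    exam_items = ["q2a_exam", "q3a_exam", "q4a_exam", "q5a_exam", "q6a_exam"]
    course_items = ["q2b_course", "q3b_course", "q4b_course", "q5b_course", "q6b_course"]

    r_exam, _ = pearsonr(responses[exam_items].mean(axis=1), responses["q7a_exam_overall"])
    r_course, _ = pearsonr(responses[course_items].mean(axis=1), responses["q7b_course_overall"])
    print(f"exam subscale vs overall exam preference:             r = {r_exam:.2f}")
    print(f"coursework subscale vs overall coursework preference: r = {r_course:.2f}")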
Reliability
The exam subscale had a Cronbach’s alpha of 0.85 indicating acceptable reliability. In addition, all
five items contributed positively to the exam subscale. The coursework subscale had a Cronbach’s
alpha of 0.86 indicating acceptable reliability. In addition, all five items contributed positively to the
coursework subscale. To test the reliability of the overall scale, we reverse coded the coursework
questions. This approach is supported by the assertion that coursework and exams are the opposite
ends of a linear continuum. The overall scale had a Cronbach’s alpha of 0.86 indicating acceptable
reliability. In addition, all ten items contributed positively to the overall scale.
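A minimal sketch of this reliability check, again assuming the hypothetical data layout used above; Cronbach's alpha is computed directly from its variance formula, and the coursework items are reverse coded (8 minus the score on a 1-7 scale) before the overall ten-item scale is assessed.

    # Illustrative sketch: Cronbach's alpha for each subscale and for the overall
    # scale with coursework items reverse-coded (hypothetical column names).
    import pandas as pd

    def cronbach_alpha(items: pd.DataFrame) -> float:
        """alpha = k/(k-1) * (1 - sum of item variances / variance of the total score)."""
        k = items.shape[1]
        item_vars = items.var(axis=0, ddof=1)
        total_var = items.sum(axis=1).var(ddof=1)
        return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

    responses = pd.read_csv("survey_responses.csv")
    exam_items = ["q2a_exam", "q3a_exam", "q4a_exam", "q5a_exam", "q6a_exam"]
    course_items = ["q2b_course", "q3b_course", "q4b_course", "q5b_course", "q6b_course"]

    print("alpha (exam subscale):      ", round(cronbach_alpha(responses[exam_items]), 2))
    print("alpha (coursework subscale):", round(cronbach_alpha(responses[course_items]), 2))

    # Reverse-code the 1-7 coursework items so high scores mean "prefers exams" throughout.
    overall = pd.concat([responses[exam_items], 8 - responses[course_items]], axis=1)
    print("alpha (overall 10-item scale):", round(cronbach_alpha(overall), 2))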
Hypothesis testing
H1: There will be gender differences in assessment preference.
There is support for this hypothesis. The results from the ANOVA with gender as the independent factor show a fairly small but significant (p=0.036) difference between male and female students' overall preference for exams, with males indicating a higher preference for exams (M=4.09) than females (M=3.42). In addition, there is a significant difference (p=0.020) between males' and females' preference for exams as a means of directing their attention towards what matters, with males expressing a higher preference for exams (M=3.74) than females (M=3.08) on this measure. It is
worth noting though that on all measures both men and women prefer coursework to exams; the
gender differences are only in the degree of preference, not the direction of the effect.
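For readers who wish to reproduce this kind of test, an illustrative one-way ANOVA of overall exam preference by gender is sketched below; the gender and q7a_exam_overall column names are assumed, not taken from the original data set.

    # Illustrative sketch: one-way ANOVA of overall exam preference by gender
    # (with two groups this is equivalent to an independent-samples t-test).
    import pandas as pd
    from scipy.stats import f_oneway

    responses = pd.read_csv("survey_responses.csv")   # hypothetical data file
    male = responses.loc[responses["gender"] == "male", "q7a_exam_overall"]
    female = responses.loc[responses["gender"] == "female", "q7a_exam_overall"]

    f_stat, p_value = f_oneway(male, female)
    print(f"group means: male = {male.mean():.2f}, female = {female.mean():.2f}")
    print(f"F = {f_stat:.2f}, p = {p_value:.3f}")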
H2: There will be age differences in assessment preference.
There is support for this hypothesis. The results from the ANOVA with age as the independent factor show a significant (p<0.001) difference between students aged under 20 and those 20 and over in students' overall preference for exams, with older students indicating a higher preference for exams (M=3.86) than younger students (M=2.47). In addition, older students rate their performance in exams significantly (p=0.001) more highly (M=3.74) than do younger students (M=2.65). Conversely, older students rate their performance in coursework significantly (p=0.020) less highly (M=5.09) than do younger students (M=5.79). Finally, there is a significant (p=0.018) difference in younger and older students' preferences for coursework as a means of measuring their abilities, with younger students recording a mean of 3.32 compared to a mean of 4.61 for older students. It is worth noting though that students of all ages prefer coursework to exams; the age differences are only in the degree of preference.
H3: There is no difference between stages of degree for assessment preference.
There is no support for this hypothesis and it is rejected. The results from the ANOVA with stage as the independent factor show a significant (p<0.001) difference between first-year students' and other students' overall preference for exams, with first-year students indicating a lower preference for exams (M=2.79) than other students (M=3.94). In addition, first-year students rate their performance in exams significantly (p=0.001) lower (M=2.79) than do other students (M=3.90). Again it is worth noting that students at all stages prefer coursework to exams; the stage differences are only in the degree of preference. A univariate analysis of variance showed no significant interaction between age and stage.
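The interaction check mentioned above could be sketched as a two-factor analysis of variance, for example with statsmodels; the age_group, stage and q7a_exam_overall column names are assumptions, not the authors' variable names.

    # Illustrative sketch: univariate two-factor ANOVA testing the age x stage
    # interaction on overall exam preference (hypothetical column names).
    import pandas as pd
    import statsmodels.api as sm
    from statsmodels.formula.api import ols

    responses = pd.read_csv("survey_responses.csv")
    model = ols("q7a_exam_overall ~ C(age_group) * C(stage)", data=responses).fit()
    anova_table = sm.stats.anova_lm(model, typ=2)
    print(anova_table)   # the C(age_group):C(stage) row gives the interaction test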
H4: There will be a difference between students with different majors for assessment preference.
There is no support for this hypothesis. The results from the ANOVA with major as the independent
factor show there are no significant effects. H4 is rejected.
H5: There is a general difference between student preference for exams and preference for
coursework.
There is support for this hypothesis. The results from the paired-sample t-test of the difference between mean preference for coursework and mean preference for exams show a difference in means of 1.43 (two-tailed p<0.001), indicating that students prefer coursework to exams.
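The paired comparison used for H5 (and for H6 to H9 below) can be sketched as follows, again with hypothetical item names: each student's mean coursework preference is compared with their mean exam preference.

    # Illustrative sketch: paired-sample t-test of coursework vs exam preference
    # within the same students (hypothetical column names).
    import pandas as pd
    from scipy.stats import ttest_rel

    responses = pd.read_csv("survey_responses.csv")
    exam_items = ["q2a_exam", "q3a_exam", "q4a_exam", "q5a_exam", "q6a_exam"]
    course_items = ["q2b_course", "q3b_course", "q4b_course", "q5b_course", "q6b_course"]

    course_pref = responses[course_items].mean(axis=1)
    exam_pref = responses[exam_items].mean(axis=1)

    t_stat, p_value = ttest_rel(course_pref, exam_pref)
    print(f"mean difference = {(course_pref - exam_pref).mean():.2f}")
    print(f"t = {t_stat:.2f}, two-tailed p = {p_value:.4f}")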
H6: Students will prefer coursework over exams as a means of giving them feedback on their
progress.
There is support for this hypothesis. The results from the paired-sample t-test of the difference between mean preference for coursework and mean preference for exams as a means of giving feedback on progress show a difference in means of 1.58 (two-tailed p<0.001), indicating that students prefer coursework to exams as a means of giving them feedback on their progress.
H7: Students will prefer coursework over exams as a means of measuring their abilities.
There is support for this hypothesis. The results from the paired-sample t-test of the difference between mean preference for coursework and mean preference for exams as a means of measuring student abilities show a difference in means of 1.28 (two-tailed p<0.001), indicating that students prefer coursework to exams as a means of measuring their abilities.
H8: Students will prefer coursework over exams as a means of emphasising the importance of
particular content.
There is support for this hypothesis. The results from the paired-sample t-test of the difference between mean preference for coursework and mean preference for exams as a means of directing student attention to significant content show a difference in means of 1.18 (two-tailed p<0.001), indicating that students prefer coursework to exams as a means of emphasising the importance of particular content.
H9: Students will prefer coursework over exams as a means of ensuring that they learn what their
lecturer wants them to.
There is support for this hypothesis. The results from the paired-sample t-test of the difference between mean preference for coursework and mean preference for exams as a means of ensuring the student learns what the lecturer wants show a difference in means of 1.18 (two-tailed p<0.001), indicating that students prefer coursework to exams as a means of ensuring that they learn what their lecturer wants them to.
Effect Sizes
Calculations of effect sizes show that the significant differences identified between means in
hypotheses 5 and 6 have large effect sizes. Those for hypotheses 7, 8 and 9 have medium-large effect
sizes. Those for hypotheses 2 and 3 have medium effect sizes and the difference for hypothesis 1 has a
small effect size. According to Valentine and Cooper (2003), effect sizes in education tend to be smaller than in other behavioural or social sciences, hence the above effects could all be considered more substantial than conventional benchmarks would suggest.
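The paper does not state the exact formula used, but one common approach for a paired comparison is Cohen's d computed from the within-student differences, as in the illustrative sketch below (hypothetical column names).

    # Illustrative sketch: Cohen's d for the paired coursework-vs-exam comparison,
    # computed as mean(difference) / sd(difference). Conventional benchmarks:
    # roughly 0.2 small, 0.5 medium, 0.8 large.
    import pandas as pd

    responses = pd.read_csv("survey_responses.csv")
    exam_items = ["q2a_exam", "q3a_exam", "q4a_exam", "q5a_exam", "q6a_exam"]
    course_items = ["q2b_course", "q3b_course", "q4b_course", "q5b_course", "q6b_course"]

    diff = responses[course_items].mean(axis=1) - responses[exam_items].mean(axis=1)
    cohens_d = diff.mean() / diff.std(ddof=1)
    print(f"Cohen's d (paired) = {cohens_d:.2f}")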
DISCUSSION
Research has been devoted to the “influence of assessment” (Strachan, 2002, p. 252), but there is still
a gap as to how student preferences relate to outcomes (Furnham & Chamorro-Premuzic, 2005).
Within the Bachelor of Business degree at the New Zealand university used in this study the majority
of papers use both examination and coursework assessment which are generally evenly weighted
(AUT, 2006). This model has been argued to be unsatisfactory and simplistic, as it ignores the range of other assessment methods available that may best “enable students to demonstrate their learning achievements” (Heywood, 2000, as cited in Bridges et al., 2002, p. 47). A correlation between “self
belief and attitudes toward academic assessment methods” has been discovered in previous research
(Furnham & Chamorro-Premuzic, 2005, p. 1970), suggesting that using assessment methods that are
preferred by students will improve their self-belief. Overall, the high internal validity and reliability of both the coursework and exam subscales indicate that the results of the study can be used as evidence to support other research.
The primary aim of this study was to investigate whether there are any significant differences between
male and female students and their assessment preferences. We have found through the analysis of
previous research that in some circumstances there are small differences between genders, but other
factors will have a greater impact.
The quantitative data gathered in this study confirm a small but significant gender difference, with males being slightly less opposed to exams than females. Overall though, we found a preference for
coursework by both male and female students, and gender was not a major determinant. This agrees
with the majority of the existing research (Birenbaum, 1997; Gammie et al., 2003; Latham, 1997;
Woodfield et al., 2005), although in another study significant differences were found (Adams et al.,
2000).
It is highly likely that respondents in this study will have experienced both modes of assessment at some stage of their academic studies, but the amount of time that a person has studied at university level will have an effect on their assessment preferences. For example, a first-year student who has come straight from school could be giving a perspective based on experiences at high school level, whereas a mature student would be more prone to giving a response that reflects more recent experiences. For students who have come directly from school, the experiences developed through the New Zealand National Certificate of Educational Achievement (NCEA) programme are likely to produce different responses from those of students who sat Cambridge exams (Vlaardingerbroek, 2006), and again from those of students who graduated from overseas high schools. The NCEA has been fraught with difficulties since its introduction in 2002, with the transition from secondary school to university reported as being particularly problematic. Unclear processes and misinformation communicated to secondary students in their final year of school have been cited as some of the major problems. This is likely to have an effect on the experiences and therefore the opinions of students (Vlaardingerbroek, 2006).
The premise that stage of study affects responses might be assumed to correlate with age, because testing experience could lead to age bias (Clapham & Fulford, 1997); however, no such relationship between age and stage was found in this study. The testing experience of a student may have an effect on the responses given; a third-year student would be expected to perform better in an exam than a first-year student. The results of this study support this view, as we found that first-year students rate their
performance in exams lower than students in other years. Other factors that may be affecting
performance could include whether the university has effectively adapted their assessments to take
into account the changes that have occurred recently in high school assessment methods. Initial
research has shown that NZ universities are not yet in a position for a “seamless structure” to occur
because “[school] achievement standards are completely different to the way a university assesses its
students” (Vlaardingerbroek, 2006, p. 82). As the quantitative analysis shows that students aged under 20 do not believe they perform as well in examinations, universities may need to review their current assessment structure and question whether these students are being disadvantaged by the existing system.
This study determined that a student’s major (discipline) is not significant when it comes to the
students’ preferences. This differs from Becher (1994) and could be due to the fact that the groupings
within the Business Degree programme are not as broad as Becher’s categories, and more of a contrast
between disciplines would be required to uncover significant results.
The results show that students in general perceived coursework to give them better feedback on their progress. Whilst the findings seem straightforward, they may also be attributed to other unknown factors, such as students defining feedback as written feedback. Because coursework is generally marked with accompanying written comments, and examinations are normally marked with little or no wording, students may equate ‘feedback’ with ‘written comments’.
They would then come to the conclusion that coursework gives them better feedback than exams,
which is what the responses in the survey show.
This study also agrees with earlier evidence that higher marks are achieved in coursework than in formal examinations. According to Bridges et al. (2002) this is because of greater access to resources in coursework (less reliance on memory), additional help with coursework (from mentors, lecturers, collusion or plagiarism), and because lecturers are more likely to award higher marks to encourage students.
The students also reported a preference toward coursework over exams as a means for lecturers to
direct students’ attention toward what matters, and as a way of ensuring the student has learnt what the
lecturer wants.
FURTHER RESEARCH
The current study has highlighted some possible avenues for future research. Mature students could be
specifically surveyed to investigate their perceptions, as they have not really been represented in the
current study. The educational background of the students could also be considered. In New Zealand,
students tend to enter university having completed NCEA, Cambridge A-levels or the International Baccalaureate. These different entry
points could be compared with one another, and perhaps with overseas qualifications to see how they
affect student preferences. There is obvious scope for extending the current study to see how student
preference for assessment type influences institutional assessment policy. Lecturer perceptions have
not been considered, yet positioning the lecturer as the pedagogical expert suggests that the
determinants of lecturer preference for assessment type might be an interesting topic for consideration.
A longitudinal study would be useful to evaluate whether students change their views as they gain
experience in different assessment types. Learning style could also be included as a factor in future
studies. Assessment preferences could be related to actual results gained rather than to students’
subjective evaluations of their performance. Finally, the study could be broadened to other tertiary
institutions overseas, to investigate whether student preference is a measure that varies between
countries.
CONCLUSION
This study was originally designed to examine the relationship between gender preference and
assessment modes. The study reveals that although males rate examinations more favourably than females do, the difference is minimal. There is little in our findings to support the view that there
is a significant difference between genders in preference towards assessment modes. In addition, it is
clear from this study that students’ attitudes toward coursework are generally positive compared to
exams. In general, students rate coursework higher and believe coursework is the better means of
providing feedback, measuring students’ abilities, directing students on important matters and
ensuring students learn what the lecturer wants.
Age and stage of study are contributing factors affecting students' preference for assessment mode. For example, while students over 20 years old rate their performance in examinations more highly, first-year students show a stronger preference for coursework. Although the results for age and stage of study tend to be similar, there is no interaction between the two. On the other hand, it can be assumed that there is a relationship between stage and age on the grounds of assessment experience. Thus, students who have gained more practice and experience in studying for exams are likely to respond more positively to exams than new entrants.
In summary, this study contributes to the research in several ways. As most of the previous literature in this area comes from European and United States environments, this study adds to the literature in a New Zealand educational context. Finally, according to Adams et al. (2000), modern consumer attitudes will increase students' awareness of their rights as consumers. Understanding views of assessment from a student's perspective enhances the likelihood of increased student motivation and interest in higher education.
REFERENCE LIST
Adams, C., Thomas, R., & King, K. (2000). Business students' ranking of reasons for
assessment: gender differences. Innovations in Education and Training International, 37(3), 234-243.
AUT. (2006). Paper Outlines. Retrieved 21 September, 2006, from http://www.aut.ac.nz/schools/business/undergraduate_programmes/bachelor_of_business/paper_outlines.htm
Becher, T. (1994). The significance of disciplinary differences. Studies in Higher Education, 19(2), 151-161.
Birenbaum, M. (1997). Assessment preferences and their relationship to learning strategies
and orientations. Higher Education, 33, 71-84.
Bridges, P., Cooper, A., Evanson, P., Haines, C., Jenkins, D., Scurry, D., et al. (2002).
Coursework marks high, examination marks low: discuss. Assessment & Evaluation in
Higher Education, 27(1), 35-48.
Clapham, M. M., & Fulford, M. D. (1997). Age Bias in Assessment Center Ratings. Journal
of Managerial Issues, 9(3), 373-388.
Furnham, A., & Chamorro-Premuzic, T. (2005). Individual Differences and Beliefs
Concerning Preference for University Assessment Methods. Journal of Applied Social
Psychology, 35(9), 1968-1994.
Gammie, E., Paver, B., Gammie, B., & Duncan, F. (2003). Gender differences in accounting
education: an undergraduate exploration. Accounting Education, 12(2), 177-196.
Kniveton, B. (1996). Student perceptions of assessment methods. Assessment & Evaluation in
Higher Education, 21(3), 229-240.
Lamberton, B., Fedorowicz, J., & Roohani, S. J. (2005). Tolerance for Ambiguity and IT
Competency among Accountants. Journal of Information Systems, 19(1), 75-95.
Latham, A. S. (1997). Gender Differences on Assessments. Educational Leadership, 55(4),
88-90.
Lowry, P. E. (1993). The Assessment Center: An Examination of the Effects of Assessor Characteristics on Assessor Scores. Public Personnel Management, 22(3), 487-502.
Martin, M. (1997). Emotional and cognitive effects of examination proximity in female and
male students. Oxford Review of Education, 23(4), 479-486.
Sadler-Smith, E., & Riding, R. (1999). Cognitive style and instructional preference.
Instructional Science, 27(5), 355-371.
Sander, P., Stevenson, K., King, M., & Coates, D. (2000). University students' expectations of
teaching. Studies in Higher Education, 25(3), 309-323.
Simonite, V. (2003). The Impact of Coursework on Degree Classifications and the
Performance of Individual Students. Assessment & Evaluation in Higher Education,
28(5), 459-470.
Smith, S. N., & Miller, R. J. (2005). Learning approaches: examination type, discipline of study and gender. Educational Psychology, 25(1), 43-53.
Strachan, J. (2002). Assessment in Change: Some Reflections on the Local and International
Background to the National Certificate of Educational Achievement (NCEA). New
Zealand Annual Review of Education, 11.
Valentine, J., & Cooper, H. (2003). Effect Size Substantive Interpretation Guidelines: Issues
in the Interpretation of Effect Sizes. Washington DC: What Works Clearinghouse.
Vlaardingerbroek, B. (2006). Transition to tertiary study in New Zealand under the National
Qualifications Framework and 'the ghost of 1888'. Journal of Further and Higher
Education, 30(1), 75-85.
Woodfield, R., Earl-Novell, S., & Solomon, L. (2005). Gender and mode of assessment at
university: should we assume female students are better suited to coursework and
males to unseen examinations? Assessment & Evaluation in Higher Education, 30(1),
35-50.
APPENDIX 1: QUESTIONNAIRE
Preferences for assessments
This survey has been designed to provide us with information about students’ assessment preferences
at AUT University. The information gathered may be used for publication.
The survey will remain completely confidential and no personal information will be recorded.
Therefore the results will not be traceable back to you. The survey will only take about five minutes.
Would you mind completing this?
Definitions
We define coursework to be non-supervised, lecturer-graded essays, reports, and portfolios.
We define exams to be any assessment done under time constraints in a supervised setting.
1. Are you currently enrolled in an AUT University business degree paper? (circle one)   No / Yes
(If No, thank you for your time; you do not meet the criteria for this study.)

Each of the following statements is answered by circling one number on a seven-point scale (1 to 7), end-defined by Strongly agree and Strongly disagree.

2. Feedback on my progress
a. In order to give me feedback on my progress, my favourite form of assessment is an exam. (circle one)   1 2 3 4 5 6 7
b. In order to give me feedback on my progress, my favourite form of assessment is coursework. (circle one)   1 2 3 4 5 6 7

3. Measurement of my abilities
a. In order to measure my abilities, my favourite form of assessment is an exam. (circle one)   1 2 3 4 5 6 7
b. In order to measure my abilities, my favourite form of assessment is coursework. (circle one)   1 2 3 4 5 6 7

4. Content importance
a. In order to direct my attention to what matters, my favourite form of assessment is an exam. (circle one)   1 2 3 4 5 6 7
b. In order to direct my attention to what matters, my favourite form of assessment is coursework. (circle one)   1 2 3 4 5 6 7

5. Lecturer direction
a. In order to ensure I learn what my lecturer wants me to learn, my favourite form of assessment is an exam. (circle one)   1 2 3 4 5 6 7
b. In order to ensure I learn what my lecturer wants me to learn, my favourite form of assessment is coursework. (circle one)   1 2 3 4 5 6 7

6. Performance
a. I usually achieve higher marks in exams. (circle one)   1 2 3 4 5 6 7
b. I usually achieve higher marks in coursework. (circle one)   1 2 3 4 5 6 7

7. Overall preference
a. Overall my favourite form of assessment is an exam. (circle one)   1 2 3 4 5 6 7
b. Overall my favourite form of assessment is coursework. (circle one)   1 2 3 4 5 6 7
DEMOGRAPHICS
Gender: Male / Female (circle one)
Age: < 20 / 20-22 / 23-25 / 26-30 / 31 + (circle one)
Stage of degree: First year / Final Year / Other (circle one)
Your Major and Programme _______________________________________________
E.g. Accounting in Bachelor of Business
FEEDBACK
Thank you very much for helping to make this study possible. If you have any queries please contact either [email protected] or [email protected].