Peer/collaborative e-assessment case studies

We have collected descriptions and settings of different case studies, available online or in the
literature, and submitted them through the online survey available at:
https://docs.google.com/forms/d/11rEtD-oE2D1ClcJ_4aaCWDKVeFjA6VhRxVyrojmhEg0/viewform
Each case study is briefly described, analysed and discussed below.
Case 1: Using Google Drive for peer-assessment
Technology: Google Drive (Form and Spreadsheet)
Type of assignment: Writing essay
Type of assessment: Formative
From http://gettingsmart.com/2012/12/the-sidekick-the-superhero-using-google-drive-for-peerassessment/
This case study is conducted in AP language classes (but can be extended and applied to any
discipline) for essay writing assignments. The ICT tool involved is Google Drive, more
precisely Google Form and Google Spreadsheet (both tools work together, as Form
results are stored in Spreadsheet documents).
Students have to write three free response essays consecutively over three weeks. Each essay
assignment is based on a four-day scenario. Each essay is peer assessed at least twice.
The assessment is conducted double blind, using a numbering system for the essays and the
students. The allocation of essays to reviewer students is random (a sketch of such an
allocation is given below). Reviewer students have to
fill in a review form available through the Google Form service. The review form includes
grades (such as “rate the thesis statement”) and arguments (such as “review the student’s thesis
statement” or “suggest how it may be improved”). The resulting Google Spreadsheet is
publicly shared among students, who are requested to consult the feedback on their essays
as homework.
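As an illustration of the double-blind random allocation described above, here is a minimal Python sketch (hypothetical, not part of the case study's Google Drive setup) that assigns each numbered essay to peer reviewers so that every essay receives at least two reviews and no student reviews his/her own work:

```python
import random

def allocate_reviews(student_ids, reviews_per_essay=2):
    """Randomly assign each numbered essay to peer reviewers.

    Each essay gets `reviews_per_essay` reviewers, and no student is
    ever assigned his/her own essay (the anonymous numbering itself
    is assumed to be handled elsewhere).
    """
    allocation = {}  # essay owner id -> list of reviewer ids
    for essay_owner in student_ids:
        candidates = [s for s in student_ids if s != essay_owner]
        allocation[essay_owner] = random.sample(candidates, reviews_per_essay)
    return allocation

# Example: essays of 5 students, each reviewed twice.
print(allocate_reviews([1, 2, 3, 4, 5]))
```

Note that this naive sketch does not balance reviewer workload; a real allocation would also cap the number of reviews assigned to each student.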
The assignment concludes with a self-assessment in which students review their peers’ constructive
remarks and criticisms. This self-assessment is submitted through another Google Form and is not
shared among the students of the class.
The advantages identified are:
• Students are clear about what is expected for the assignment.
• Students get a diagnosis about their strengths and weaknesses in writing.
• Students can be actively involved in their own learning.
No particular disadvantages or requirements are mentioned.
Case 2: Developing an essay through peer review on a discussion board
Technology: Discussion board
Type of assignment: Essay writing
Type of assessment: Formative
From http://serc.carleton.edu/introgeo/peerreview/examples/sharks.html
Before writing their essays, students are helped to develop their chosen topic through a
peer-assessment exercise that is run on a discussion board. Each student first has to
answer a few questions about the essay topic on the discussion board. Two peers have
then to evaluate the student’s answers. Finally, the student has to answer the peers’
comments.
No particular advantage or drawback is indicated for this case, apart from the development of
students’ electronic communication skills and their capacity to give constructive feedback.
Case 3: An assignment using anonymous electronic peer review with a Dropbox
Technology: Dropbox
Type of assignment: Essay writing
Type of assessment: Formative
From http://serc.carleton.edu/introgeo/peerreview/examples/warming.html
Students write their essays anonymously, without indicating their names or any other
information that could identify them. They upload their essay documents to the
Dropbox and send the address to the teacher. The teacher can then assign each essay to be
reviewed by one or two peers using a criteria grid. The final mark can be assigned based on
the average of the peers’ marks.
The only advantage indicated for this case is that peer reviewing and assessment spare
teachers from assessing students’ work themselves.
The author suggests 1) having a test peer review assessment in class to train students; 2)
providing students with clear explanations about the assignment and the peer review process
(including the review form itself).
Case 4: Calibrated peer assignment
Technology: CPR online web software
Type of assignment: Writing essay
Type of assessment: Formative/summative
From http://serc.carleton.edu/introgeo/peerreview/cpr.html,
http://serc.carleton.edu/introgeo/peerreview/examples/dinosaurs.html,
http://serc.carleton.edu/introgeo/peerreview/examples/why_study_geo.html,
http://serc.carleton.edu/introgeo/peerreview/examples/petroleum.html
Calibrated peer assessment is a process organized in four successive steps:
1) Assignment: The student writes an essay and submits it.
2) Peer assessment training or calibration: the student has to peer review 3 example
essays (calibration essays) that have already been evaluated by teachers using a rubric
form. If the calibration test meets the requirements, the student can move to the next
step. Otherwise, the student has to pass a second peer assessment trial.
3) Peer assessment: the student has to assess and grade three peers’ essays. If the student
failed the first calibration evaluation, the impact of his/her grades on the peers’ marks is lowered.
4) Self-assessment: the student self-assesses his/her own essay.
The introduction of calibration in peer assessment is particularly important. It trains students
to review and assess their peers.
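To make the calibration step concrete, here is a minimal Python sketch of how such a calibration check and the resulting reviewer weight could work (the tolerance and weight values are assumptions, not the actual CPR parameters):

```python
def calibration_passed(student_grades, staff_grades, tolerance=1.0):
    """Check whether a student's grades on the calibration essays
    are close enough (per essay) to the staff grades."""
    return all(abs(s - t) <= tolerance
               for s, t in zip(student_grades, staff_grades))

def reviewer_weight(passed_first_trial):
    """Students who failed the first calibration get a lowered
    impact on their peers' marks (weight values are assumptions)."""
    return 1.0 if passed_first_trial else 0.5

# Example: staff graded the 3 calibration essays 7, 5 and 9.
print(calibration_passed([6.5, 5.0, 9.5], [7, 5, 9]))  # True
print(reviewer_weight(False))                          # 0.5
```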
The advantages expressed in the related cases include the promotion of critical
thinking and the improvement of writing skills from one assignment to the next.
Case 5: Getting to know Coursera: peer assessments
Technology: Coursera
Type of assignments: NA
Type of assessment: Summative
From http://cft.vanderbilt.edu/2013/01/getting-to-know-coursera-peer-assessments/
http://cft.vanderbilt.edu/2012/11/getting-to-know-coursersa-assessments/
This case study is more a critical analysis of the way peer assessment is implemented inside
Coursera, an xMOOC platform, than a real case study. xMOOCs are a particular type of
Massive Open Online Course: they follow a teaching model based on the traditional
teacher-centric approach, with the purpose of providing teaching to huge online classes (Figure 4).
In the next case we will review a current experiment conducted inside Coursera that aims at
addressing some of the issues raised in this one.
Figure 4 – xMOOCs and cMOOCs (figure from Wikipedia)
However, this analysis gives useful feedback about the problems to consider when applying peer
assessment in very large online classes. In contrast with traditional classes, MOOCs are
completely online, with distant students from all over the world. This situation introduces
many more constraints for teaching in general, and for peer assessment in particular. Peer
assessment is particularly critical in the context of a MOOC: it is impossible to believe that
the teaching and tutoring staff can provide assessment and feedback to thousands of online
students. As raised in the state of the art section of this document, peer assessment can be
viewed as a possible solution for large classes. The author translates this into "Who, after all,
has got the time to read 10,000 essays? The answer, for Coursera at least, is other students."
In other words, peer assessment is the only way platforms such as Coursera can cope with
assessing thousands of essays, which results in crowdsourced assessment. According to the
author, “the model of peer assessment supported by Coursera folds together two assumptions:
that peers can approximate or replace the kinds of substantive, constructive expert feedback
critical to deeper understanding and that a grade is necessary to learn, full stop".
The main issues raised by the authors are:
• Students have to learn to grade.
• Grading peers requires a lot of effort from students.
• What is the outcome of peer grading for students?
The author attaches particular importance to anonymity and privacy. As peer feedback on
Coursera is anonymous, following up on a comment or starting a discussion is mainly
impossible. Therefore, how can learning communities be created if peers are not accountable
for their feedback?
Similar feedback is available at http://hackeducation.com/2012/08/27/peer-assessment-coursera/, which points out:
• the variability of feedback,
• the lack of feedback on feedback,
• the anonymity of feedback,
• the lack of community.
Case 6: First massive-scale class with self and peer assessment in Coursera
Technology: Coursera
Type of assignment: Project/problem-based
Type of assessment: Summative
From http://hci.stanford.edu/research/assess/
This case study describes an experiment to introduce peer assessment in an xMOOC. This
strategy has been implemented inside Coursera (http://www.coursera.org/). As raised by the
authors, "providing feedback and assessment of design and other creative work is extremely
time consuming - this bottleneck is the major capacity constraint for scaling peer assessment".
In this example, students do not simply grade peers: they are trained before effectively grading
them. The proposed method is based on an existing method called “calibrated peer assessment”,
where students learn grading through training examples before grading their peers. The peer
assessment is combined with self-assessment.
The objectives of the peer assessment strategy are: 1) to train students to assess others
accurately; 2) to define a grading system robust to errors; and 3) to provide qualitative and
personalized feedback to students.
The authors use rubrics to grade. Rubrics are exemplified in Figure 5: each row corresponds to
a rubric and each cell corresponds to a level of performance. Assessing is mainly numeric,
with very little free text for feedback (as language is an issue in a MOOC).
Figure 5 – Rubrics-based grading form
Students first have to train at assessing: they are given example assignments to assess and
earn the right to assess their peers once their grading of the examples comes close to the
grades assigned by the teaching staff. Each time they assess an example, they get feedback
explaining whether they are higher, lower or close to the staff grade and why the staff
assigned that grade.
The peer assessment is a three-step process. The student first assesses five peers’ assignments.
Among the five assignments, one has been marked by the teaching staff; it serves as “ground
truth” for comparing staff grading with student grading. In the next step, the student
self-assesses his/her own assignment.
For each assignment, a grade is computed as the median of the five peers’ assessments.
This peers’ grade is compared to the grade the student gave in the self-assessment. If the
student’s grade is close to the peers’ grade, the student gets his/her own grade; otherwise
he/she gets the peers’ grade. Other strategies can be used, such as assigning the maximum
of the two grades.
Figure 6 – Three-step peer assessment
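A minimal Python sketch of this aggregation rule (the closeness threshold is an assumption; the source does not state the value used in the course):

```python
import statistics

def final_grade(peer_grades, self_grade, threshold=0.5):
    """Median of the five peer grades; the student keeps his/her
    self-assessed grade only if it is close to the peers' grade,
    otherwise the peers' grade applies."""
    peers_grade = statistics.median(peer_grades)
    if abs(self_grade - peers_grade) <= threshold:
        return self_grade
    return peers_grade

# Example: five peer grades and a self-assessment of 8.0.
print(final_grade([7, 8, 8, 9, 6], self_grade=8.0))  # 8.0 (close to the median of 8)
```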
The authors use data analysis to guide improvements. They update the rubrics according to
the results in order to clarify them. In terms of assessment feedback, they propose feedback
templates: students are given basic feedback templates that they can customize by completing
them.
From the data analysis, they have noticed that staff grades correlate with peers’
grades. They are currently exploring other ways of weighting the peers’ grades than using the
median. They have noticed what they call “patriotic” grading, where peers tend to grade
their compatriots higher. Grading errors are evaluated by comparison with ground truth grades;
the results show that errors are quite balanced.
One interesting outcome raised by the authors is that the process stimulates collaborative
learning, with students sharing resources, creating assignment aids, answering forum
questions, and providing extra peer assessment…
Case 7: Web-based peer assessment: a case study with civil engineering students
Technology: Google Drive
Type of assignment: Writing essay
Type of assessment: Formative
From http://online-journals.org/i-jep/article/viewArticle/2411 and
http://www.slideshare.net/gmatos/icl-2012-final
The assignment process takes place in five successive steps:
1) The student selects an article from an online source.
2) The student uploads the article as a Google Drive document.
3) The student summarizes the article.
4) The student analyses the article.
5) The student gives his/her opinion about the article.
Steps 3 to 5 are written in a Google Drive document that the author student then shares
with the teacher and with one assigned peer reviewer. The peer reviewer assesses the
document using the comment features of Google Drive and grades the work. The teacher then
reviews and grades both the author student and the peer reviewer. The author student can then
review the feedback and update his/her work. Finally, the teacher reviews the updated online
document to give the final grade.
During the assignment, additional online documents are provided through Google Drive: an
orientation document that describes the objectives and tasks to be performed; a table
connecting authors to peer reviewers; and a table for the management and coordination of the
tasks between author, peer and teacher.
The authors observed that only a small number of students used their peers’ feedback to
improve their essay.
The main observations are:
• The use of a digital environment to support the assignment and assessment process did
not present any difficulty to the students.
• The support material used to present the assessment process face to face was very
important for the successful completion of the process.
• There is an obvious need to improve students’ feedback and communication skills.
• The teacher’s grade seems to influence the use of peers’ feedback to improve the
work.
• The overvaluation of the teacher’s grade and the small difference between the students’
grades and the teacher’s grades question the necessity of having a double intermediate
assessment (peer and teacher).
Case study 8: Online peer-assessment in a large first-year class
Technology: Workshop module (Moodle)
Type of assignment: Writing essay
Type of assessment: Formative
From
http://www.academia.edu/665555/Where_Angels_Fear_to_Tread_Online_PeerAssessment_in_a_Large_First-Year_Class
This case study aims at providing formative feedback to widen participation and develop
writing skills in a large and diverse class (800 students).
The Moodle Workshop module was limited to 3 basic features: 1) the submission and
random distribution of assignments; 2) grading and feedback based on grid forms; and 3)
sharing of work with peers’ feedback. The peer assessment is only formative and peer
feedback is anonymous. As students are used to peer assessment, no example essays are
provided and no self-assessment is requested. At the end of the assignment period, the five
essays with the best peer scores are published publicly. Students are provided with a rationale
about the advantages of peer assessment and detailed information about the assessment
process, and forums are used for scaffolding.
The assessment process itself is organized into 3 main steps: 1) each student submits the first
version of the essay; 2) at least two peers review the essay; and 3) each author student has to
answer the peer feedback and submit the final essay. The teaching staff grades the final
essay. Peers’ assessments are neither evaluated nor marked. Engagement was encouraged by
rewarding students: peer assessment replaced one exercise.
Students’ evaluation of the assessment process is mixed, but around ¾ of the students find it
more useful to provide feedback than to receive it.
Case study 9: Enquiry-based peer assessment
Technology: BlackBoard, ASK (Assignment Survival Toolkit)
Type of assignment: Writing essay
Type of assessment: Formative/summative
From:
http://www.academia.edu/1495031/Online_peer_assessment_helping_to_facilitate_learning_through_participation
The objective of this case study is to embed enquiry-based learning, information literacy and
e-learning in a peer e-assessment assignment.
According to the authors, in enquiry-based learning, students are working in groups to solve
problems with the help of a wide range of information resources. The teacher intervenes as a
facilitator that enables students to self-regulate their learning.
The peer assessment assignment includes four stages and lasts three weeks:
1. Students write a 500-word essay answering a question submitted by the teacher. This
first version of the essay is formatively reviewed face to face with tutors.
2. A second, longer version of the essay is then written, which is formatively assessed by
peers on BlackBoard.
3. Students are then organized in groups and, based on ASK (Assignment Survival
Toolkit), which provides resources and individualized step-by-step planning for writing
an essay, they have to write a third, full essay. This full essay includes an introduction,
a body and a conclusion. For three weeks, the peers in a group have to reciprocally
submit weekly feedback about the essays. Students are taught to submit productive
feedback (based on setting the criteria, selecting the evidence and making a judgement).
4. Students then have to review and update their essays and submit a final version,
which is marked.
The results of the student survey raise the importance of feedback both from tutors and from
peers. The outcomes come both from the feedback received and from the peers’ works
reviewed and assessed. But they also point out the question of the credibility of feedback:
some students favour tutors’ feedback over peers’ feedback, or wonder how to trust the
feedback of people who are at the same stage of knowledge as themselves.
Another result is that peer assessment facilitates and enhances learning. These
results match the literature, which indicates that peer assessment encourages students to
collaborate, share and reflect.
Case study 10: Coursera Peer Assessment - Writing in the Sciences
Technology: Coursera
Type of assignment: Writing essay
Type of assessment: Summative
From http://scienceoftheinvisible.blogspot.co.uk/2012/10/coursera-peer-assessment-writing-in.html
This case study is another MOOC-related example of peer assessment. This one is described from the
point of view of a student (more precisely, a teacher who followed the course as a student).
The peer assessment assignment is organized as follows:
• Each student has 7 days to write a short essay of a few hundred words.
• Each student has 7 days to assess 5 peers’ essays and grade them on a 0-3 scale, with short free
text feedback on different rubrics. The assessment is done twice, the first time with update
suggestions and the second time on the revised version for a final mark.
The author indicates that the process worked well for him. However, when wondering whether he
would apply this model to his own students, his answer is: “I'd like to think so but I'm not sure. For one thing it's
not clear that our students are as confident or motivated as the participants in this course. For
another, there is the issue of marking cartels as students indulge in the prisoner's dilemma (as they
perceive it) with summative assessment. Sadly, I can't see a system like this being a goer for us.”
Case study 11: Peer feedback sessions
Technology: Skillshare (skillshare.com)
Type of assignment: Video presentation of a project results
Type of assessment: Formative
From: http://moocnewsandreviews.com/massive-mooc-grading-problem-stanford-hci-group-tacklespeer-assessment/
This case study describes how an online course platform organizes a lightweight peer assessment
system based on “peer feedback sessions” (see
http://help.skillshare.com/customer/portal/articles/1104466-what-is-a-peer-feedback-session-).
Students can opt into joining peer feedback sessions. When doing so, their projects are submitted to
two peers who can provide constructive feedback. In return, the peers’ projects are submitted to the
student for the same session. This initiates a discussion between the reviewers and the reviewed.
Figure 7 – Student’s request to submit a project to peers
The author describes a session he participated in for a course on video production. He
mentions that the peer feedback process led him to dive deeper into the course.
Case study 12: Reliability and validity of web-based portfolio peer assessment
Technology: E-portfolio
Type of assignment: Project
Type of assessment: Formative/summative
From C.-C. Chang, K.-H. Tseng, P.-N. Chou, and Y.-H. Chen, “Reliability and validity of Web-based
portfolio peer assessment: A case study for a senior high school’s students taking computer course”,
Computers & Education, vol. 57, no. 1, pp. 1306-1316, August 2011.
This case study concerns a class of around 70 students who have to implement and present a project.
The assessment process is organized as follows:
1. Students are first provided with portfolio samples, assessment criteria and guidelines. The
rubrics of the assessment criteria are tailored according to students’ feedback.
2. Students then have to develop their portfolios, monitor peers’ portfolios and participate in
forums.
3. Students finally have to perform peer assessment. Peer assessment is anonymous and
group-to-group. At the same stage, the teaching staff scores the portfolios.
The global result of this case study is the lack of reliability of portfolio peer assessment. The authors
identify the need to avoid or attenuate grading bias. They point out the burdensome nature of
portfolio assessment, particularly from the point of view of the teaching staff. They also suggest
providing “advanced trainings and support so that students would be more likely to get involved in
the assessment process with proper abilities".
Case study 13: Teamwork skills assessment for cooperative learning
Technology: Ad-hoc platform
Type of assignment: Group work
Type of assessment: Formative/summative
From P. S. Strom and R. D. Strom, “Teamwork skills assessment for cooperative learning”, Educational
Research and Evaluation, vol. 17, no. 4, pp. 233-251, 2011,
and
D. Brown, “Implementation of the Teamwork Skills Inventory among adolescents”, 2010. [Online].
Available: http://hdl.handle.net/2286/9c2jizip6q5. [Accessed: 08-May-2013].
The Teamwork Skills Inventory (TSI) is a method based on peer and self-assessment to evaluate
teamwork skills (currently, 25 skills are defined). It is an anonymous online assessment tool where
students answer questions regarding the individual contribution of each peer and then evaluate
their own contribution.
Teachers are expected to provide instructions and hold discussions about teamwork and the
skills involved. The authors of the method have defined a 5-lesson curriculum to train and
teach students about teamwork peer assessment. Teachers also have to express their trust in
the fairness that students can achieve during the assessment.
Once the group work is completed, each student assesses and marks each of the 25 skills for
himself/herself and for each peer.
Once the assessment is completed, each student gets a profile organized into two columns to
compare the self-assessment with an aggregated view of the peers’ assessments.
The method integrates features to attenuate the overvaluation of peers: a warning pop-up
message when the maximum grade is given for a skill, and an inflation rating index
indicating that a student needs additional guidance to improve his/her assessments.
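A minimal Python sketch of such a two-column profile and a naive inflation check (the actual TSI scoring rules are not detailed in the sources; the function names and aggregation below are assumptions):

```python
def build_profile(self_scores, peer_scores_by_skill):
    """For each skill, pair the self-assessment with the mean of
    the peers' assessments (the two-column profile)."""
    return {
        skill: (self_scores[skill], sum(scores) / len(scores))
        for skill, scores in peer_scores_by_skill.items()
    }

def inflation_index(all_given_scores, max_grade=5):
    """Share of maximum grades a student hands out; a high value
    suggests the student needs guidance to differentiate scores."""
    return sum(1 for s in all_given_scores if s == max_grade) / len(all_given_scores)

# Example with two skills, scored on a 1-5 scale.
profile = build_profile({"listening": 4, "planning": 3},
                        {"listening": [5, 4, 4], "planning": [3, 2, 4]})
print(profile)                        # self vs. aggregated peer view per skill
print(inflation_index([5, 5, 4, 5]))  # 0.75
```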
For students, the method helps them compare their self-evaluation with their peers’
evaluations. It also helps them improve their self-evaluation.
For teachers, the process allows evaluating the weaknesses of individuals and groups in order
to adapt learning, and evaluating the teachers’ own skills in training students for group work.
The difficulties are related to the level of trust that teachers can place in students to involve
them in the assessment process. Peer assessment is also a challenge for teachers, who need
to share with students the way they assess them. Time and effort are required to set up
proper assignments. Peer assessment supports collaborative learning.
Case study 14: Facilitating peer and self-assessment
Technology: WebPA
Assignment type: Work group
Assessment type: Formative/summative
From
http://www.jisc.ac.uk/media/documents/programmes/elearning/digiassess_assessingselfpeers.pdf
This case study describes how WebPA has been used at the Universities of Hull and
Lancaster. The platform has been applied to the peer assessment of group work in many
disciplines, ranging from English to Civil Engineering. The functioning of the WebPA
platform has been described in the state of the art section of this document, so we focus
here on the advantages, drawbacks and limitations identified in this case study.
The authors report that tutors have not recorded any complaints of malpractice (which does not
mean that there were no problems, but that students did not report them). It seems critical to
take the time to explain and demonstrate the process face to face: it signals the importance that
teachers give to the process, it allows addressing basic questions, and it avoids problems during
the assessment process. It is even suggested to improve students’ involvement by defining the
assessment criteria in collaboration with them.
Based on their experience, the authors indicate that students:
- acquire a greater sense of ownership and control over their learning;
- work harder to get a successful assessment from their peers.
Peer assessment also favours dialogue and social interaction between students and can
therefore smooth new students’ integration.
For teachers, there are obvious practical advantages over “paper”-based peer assessment:
it can be accessed from any place at any time, and the results are instantly and securely
collected. From a pedagogical viewpoint, it enables assessing skills that are normally difficult,
or even impossible, to assess.
Case study 15: Formative collaborative quiz with clickers
Technology: Clicker; Votamatic (votamatic.unige.ch)
Assignment type: Quiz
Assessment type: Collaborative/formative
Submitted by: [email protected]
This case study has been investigated at the University of Geneva in the context of a first-year
bachelor course dedicated to multimedia technology, with a class of approximately 120
students. Students use their own devices: laptops, tablets or smartphones.
This assessment is performed face to face at the beginning of a class. A few simple quiz
questions are set up with the Votamatic tool. Students are not informed about the
assessment in advance. The goal of the assessment is explained to them; they are told that
their answers are anonymous and that there will be no mark for this exercise. Students are then
requested to self-organize into groups of 2 or 3 (students who want to remain
alone are allowed to do so). They are given a simple URL to reach the quiz (without any
login or authentication process). The quiz can be accessed on a laptop, tablet or
smartphone. By grouping students in teams of 2 or 3, there are enough available devices in the
class.
Students are given 15 to 20 minutes to answer the quiz. During this period they can ask the
teacher questions and discuss among themselves. Once the period is over, the teacher stops the
quiz and the results are displayed on the screen; all students can view them at the same time on
their own devices.
During this last period, the teacher goes through each question, discusses the results and
explains the answers.
For the teacher, it is a good way to evaluate the global level of the class and identify
weaknesses. The exercise initiates a discussion between the teacher and the class and among
students. As the answers are anonymous, students are comfortable participating and do not
express any reluctance to do so. The assessment tool is lightweight and easy to use for both
the teacher and the students.
The whole exercise takes around 45 to 60 minutes, but it is possible to reduce this time by
submitting the quiz between two classes and only discussing the results during the class.
Figure 8 – A snapshot of a quiz
Figure 9 – A snapshot of the display of the quiz results
Case study 16: Gamified work group assessment
Technology: User points; Elgg (hec-onnect.unige.ch)
Assignment type: Project
Assessment type: Formative/summative
Submitted by: [email protected], [email protected]
This case study has been investigated at the University of Geneva in the context of a first-year
bachelor course giving an introduction to web services, with a class of approximately
300 to 400 students. Gamification is one among various approaches applied to engage
and organize participation. It consists in introducing game mechanics in non-game contexts,
with the main objective of increasing user engagement. This approach has raised a lot of
interest and development in education, with the expectation of improving students’ engagement
in learning activities.
One of the techniques involved in gamification is user reward. The reward is
usually based on a score that the user earns throughout his/her interactions with the
system: whenever the user acts positively, his/her score increases. Once the score
reaches a pre-defined threshold, the user gets a reward (for example, a badge displayed on
his/her profile). The basic idea here consists in adapting this user points approach in
order to estimate students’ individual contributions to the global effort. At the end of the group
work, students’ scores are used to assign a mark that is then integrated into the final
mark.
Group work is supported with an online shared workspace platform. The platform is used for
collaborative learning, so that students can tutor their peers and provide them with feedback
during the group work project. The tutoring can apply to the activities of the group work
assignment but also to the technical and organisational skills required to use the collaborative
platform.
We consider each activity that a student can perform on the platform and evaluate it according
to its contribution to increasing the global knowledge of the whole class. A student who
publishes a public bookmark is considered willing to share a resource with peers. A student
who comments on content produced by another student is considered willing to provide
feedback to peers. These two activities are positively rewarded.
We do not evaluate the quality of the production: only the intention to contribute is rewarded.
We are aware that we may reward “useless” contributions. Our policy is to favour
contributions, considering that learning students are not systematically able to perform
efficiently from the beginning of the group work. The process also takes into consideration
the actions that students can perform to increase their own knowledge. For example, when a
student reads content produced by a peer, we consider that the student is willing to learn
from others; therefore, his/her score increases. A pre-defined ranking of all the possible
actions is established. The ranking is defined according to the weight of the contribution that a
given action may have to the global knowledge. Sharing a bookmark, for example, is
considered a less significant contribution than commenting on content. The amount of points
a student can earn for a given action depends on the rank of the action (see the sketch below).
The teacher can monitor the assignment of the user points at any time and get the final amount
for each student. He/she can then decide how to integrate this scoring of the student’s
individual contribution into the final mark of the group work. Students are made aware that
their individual contributions and support to the global platform knowledge are evaluated.
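As an illustration, here is a minimal Python sketch of such an action ranking and points accrual (the action names and point values are assumptions; the platform's actual rules are not listed in this document):

```python
# Points per action type, ordered by their assumed contribution to
# the global knowledge of the class (reading < bookmarking <
# commenting); the real ranking and values are platform-specific.
ACTION_POINTS = {
    "read_content": 1,
    "share_bookmark": 2,
    "comment_content": 5,
}

def user_score(actions):
    """Accumulate a student's score over a log of platform actions.
    Only the intention to contribute is rewarded, not its quality."""
    return sum(ACTION_POINTS.get(action, 0) for action in actions)

# Example: a student who read twice, shared one bookmark and
# commented once earns 1 + 1 + 2 + 5 = 9 points.
print(user_score(["read_content", "read_content",
                  "share_bookmark", "comment_content"]))  # 9
```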
The collaborative platform is built on the open source Elgg social network engine. The
core engine is augmented with various plugins. Shared workspaces are defined as groups:
each group has its own workspace and toolbox (the toolbox integrates a wiki, a blog, forums,
a question/answer tool, a brainstorming tool and more). Professors, teaching assistants and
staff, as well as students, are given the same rights on the platform; they can, for example,
create a group for formal or informal learning activities. A gamification plugin has also been
partly integrated: the user points system is activated whereas the badges system is disabled.
Students cannot access their scores.
The platform has been used since 2010 for a first-year bachelor course in Information Systems
for students in commercial and management studies. Every year, the class varies between 300
and 400 students. They have to work in groups for the semester project. The project is
organized into multiple phases; for each phase they have to produce outputs of increasing
complexity. During the project they are continuously provided with resources
and guidelines (online and face to face) so that they can gradually learn to use the platform
and tools, and get used to collaborating. The final mark is computed from the individual
contribution score and the evaluation of the final group production. The individual
contribution is estimated by defining ranges of user points. The ranges correspond to different
levels of contribution, from inactive to very active. For each student, the individual mark is
assigned according to the user points range in which his/her score stands (see the sketch
below). Therefore, students in the same group may receive different final project marks.
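A minimal sketch of this range-based marking, with hypothetical point ranges and marks (the actual values used in the course are not given):

```python
# Hypothetical user-point ranges mapping activity levels to an
# individual contribution mark; labels follow the text, the bounds
# and mark values are assumptions.
RANGES = [
    (0,   "inactive",    2.0),
    (10,  "low",         3.5),
    (50,  "active",      5.0),
    (150, "very active", 6.0),
]

def contribution_mark(points):
    """Return the mark of the highest range whose lower bound the
    student's user points reach."""
    mark = RANGES[0][2]
    for lower_bound, _label, range_mark in RANGES:
        if points >= lower_bound:
            mark = range_mark
    return mark

print(contribution_mark(72))  # 5.0 (falls in the "active" range)
```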
We have already raised the issue of useless contributions and the risk of rewarding them
unfairly. From our experience, we have noticed that if we provide students with differentiated
types of content, it is possible to discriminate and channel low-level contributions. For example,
introducing a shoutbox allows gathering most of the “logistics” messages (such as “where do
we meet?”). Moreover, by assigning individual contribution marks according to pre-defined
ranges of user points, we avoid encouraging students to over-contribute.
The resulting collaborative learning platform encourages students to contribute and
collaborate. It addresses the “free rider” problem by providing an indicator of each student’s
individual contribution. This indicator allows defining a mark that can be taken into
account in the final mark. Further developments include the refinement of the rules used to
assign user points and the introduction and evaluation of intra- and inter-group peer
assessment. The refinement of the user points rules is expected to bring a better estimation of
students’ individual participation. The rules defining the ranges of user points used to assign
marks can probably also be enhanced. The intra-group peer assessment is expected to adjust
the individual contribution score with the evaluation from the peers. The inter-group peer
assessment is expected to adjust the global group contribution.
Case study 17: Portfolio-based collaborative assessment
Technology: Blog/Portfolio
Assignment type: Project
Assessment type: collaborative/formative
Submitted by: [email protected], [email protected]
At the University of Geneva, physical education is currently taught in a dual system: students
share their studies between theory at the University and practice in schools. During the theory
periods, student trainees are taught by university trainers and are organized in classes. During
the practice periods, student trainees stay in primary and secondary schools, where they are
supervised by field trainers and organized in two-person (binomial) teams. During their stay in
schools, student trainees have to prepare lessons and deliver them to classes under the
supervision of field trainers (field trainers are themselves physical education teachers).
Learning Infrastructure, WP7.1, D7.1.2, UNIGE
40
This combined peer-tutoring (where student trainees teach their peers) and peer-assessment
(where student trainees assess their peers) approach increases the academic gain for
both tutors and tutees.
The main issues are the number of categories of participants involved and the lack of continuity
and contact between the participants (Figure 10a). This lack of contact does not only affect
students and trainers; it also concerns the relationship between university trainers and field
trainers. The objective of this project is to introduce distance learning technology in order to
keep the participants connected and to stimulate exchanges and feedback. The selected approach
consists in organizing participants’ interactions around student trainees’ activities with
e-portfolios (Figure 10b).
Given the specific context induced by the dual education system, the training platform
must be at the same time:
- a common place where the different categories of participants (student trainees,
university trainers, field trainers) can continuously exchange, harmonize and converge;
- a place where student trainees can depict their activities, get feedback, monitor their
progress and be evaluated.
The evaluation aspect is particularly important. Trainees tend to feel that the creation of
e-portfolio content is more a required process than a product that can demonstrate
professional growth. They usually only perceive this last aspect at the end, once they can
browse their portfolio content. Therefore, including the portfolio content in the evaluation
creates an initial constraint to engage the trainees in the production of content for the
portfolio.
Figure 10 - E-portfolio as a virtual common shared space to overcome dual system barriers: (a) participants (student trainees, university trainers, field trainers) separated between schools and university; (b) the same participants connected around the e-portfolio
Another important issue is that the structure of the platform needs to reflect the structure of
the pedagogical organization: classes, binomial teams and individual students. It must also
reflect the roles of the different participants in the pedagogical organization, particularly in
terms of interactions such as feedback from trainers to trainees.
We consider three levels corresponding to the three levels of integration of student trainees:
1) the individual level (for individual progress and evaluation); 2) the binomial team level (for
co-elaboration, feedback and evaluation); and 3) the class level (for global management,
information and theoretical material dissemination). The levels are organized hierarchically,
so that when a student trainee submits a contribution at the binomial level, it also appears at
the individual level (so that he/she can monitor his/her own progress) but not at the class
level, as it does not correspond to that level (however, the contribution can be reviewed by
other student trainees, as all published contributions are made public to all users).
Implementing such a platform requires taking users’ skills into consideration in the design.
The first constraint is that none of the users of the platform is particularly skilled in
information technology; therefore, the platform should not be overloaded with
functionalities. It is of course not possible to avoid the additional workload induced by the
need to master the platform, but there are a few design rules that can be applied to make it
simpler.
We have devised the following approach:
- Rule 1: provide users with only the features they require, so that they do not get lost among
too many options that they would have to test and learn. Each role is clearly identified and
gives access to the required features. For example, field trainers do not need to produce any
content other than feedback; therefore, they have no personal blog and are limited to
producing comments.
- Rule 2: provide tools that are similar to those users may use in their personal practice
of information technologies. For example, the students’ e-portfolio is a blog, which may
already be familiar to some of the student trainees.
Our main objective at the implementation level is to reflect the same structure as the one
developed at the pedagogical level. Figure 11 describes the global architecture organized
around the student trainees’ e-portfolios. The class organization is reflected through the use of
groups: a student trainee is an individual user with his/her own blog; he/she is a member of a
binomial team, which is also equipped with a blog; and finally, each binomial team is a
subgroup of the class group. This structure ensures the dissemination of contents among the
appropriate levels: depending on the level at which a post is submitted, it appears in different
blog levels (posts always appear in the individual blog of their author). A minimal sketch of
this propagation is given after Figure 11.
Figure 11 - Global overview of the platform structure and organization
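A minimal Python sketch of this hierarchical visibility rule (the level names come from the text; the implementation itself is an assumption, not the platform's actual code):

```python
# Hierarchy of levels, from most local to most global.
LEVELS = ["individual", "binomial", "class"]

def visible_levels(post_level):
    """A post submitted at a given level appears at that level and
    at all more local levels down to the author's individual blog,
    but not at more global levels."""
    index = LEVELS.index(post_level)
    return LEVELS[:index + 1]

print(visible_levels("binomial"))  # ['individual', 'binomial']
print(visible_levels("class"))     # ['individual', 'binomial', 'class']
```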
We also assign a role to each participant (university trainer, field trainer, student trainee).
Each role is assigned rights to produce content, corresponding to its possible interactions
with the platform and the other roles. For example, student trainees need to submit lesson
preparations and practice reports, so they are given a blog and the ability to submit blog
posts. They are also expected to provide feedback to their peers, so they are given the
ability to submit comments. They are expected to communicate with trainers, so they are
given the ability to submit forum topics and posts. A sketch of such a role-to-rights mapping
follows.
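A minimal sketch of this role-to-rights mapping, using the roles and rights named above (the rights of university trainers are not fully specified in the text and are assumed here):

```python
# Rights each role needs to produce content. Student trainee and
# field trainer rights follow the text; university trainer rights
# are an assumption (feedback and forum communication only).
ROLE_RIGHTS = {
    "student_trainee":    {"blog_post", "comment", "forum_post"},
    "university_trainer": {"comment", "forum_post"},
    "field_trainer":      {"comment"},  # feedback only, no personal blog
}

def can(role, right):
    """Check whether a role may produce a given content type."""
    return right in ROLE_RIGHTS.get(role, set())

print(can("field_trainer", "blog_post"))   # False
print(can("student_trainee", "comment"))   # True
```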
Case study 18: Acadima
Technology: Dedicated platform
Assignment type: Flashcards
Assessment type: Formative
Submitted by: [email protected]
Acadima - http://www.acadima-information.ch
Acadima provides university students throughout Switzerland with the opportunity to write,
edit and share both learning and test flashcards targeted on their modules and final exams.
Students can enhance the quality of the cards by providing mutual feedback and quality
assessments. In this way, a dynamic question pool is developed that can be readily retrieved
by students via their AAI access. The flashcards can be called up on smartphones or via a web
interface.
The skills developed include:
- self-organized learning (cooperative, reflexive, motivational, emotional, cognitive), i.e.
self-competence;
- knowledge about how to write good MC questions, i.e. competence in didactics;
- peer review, i.e. competence in didactics and discovering misconceptions;
- competence in media (critical, skilful usage and design).
The technology components involve:
- a collaborative question pool for exam preparation (card status: published, i.e. visible to
peers);
- peer review with a "reviewed" flag (set through feedback) and professor review with a
"profproofed" flag;
- peer voting (card quality voting, card level-of-difficulty voting);
- feedback;
- gamification ("boost your peers") and push mechanisms;
- mobile learning.
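A minimal Python sketch of the flashcard data model these components suggest (the field names are assumptions; Acadima's actual schema is not described here):

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Flashcard:
    """A shared learning/test flashcard with the review and voting
    state described above (field names are assumptions)."""
    question: str
    answer: str
    published: bool = False      # card status: visible to peers
    reviewed: bool = False       # set through peer feedback
    profproofed: bool = False    # set by a lecturer
    quality_votes: List[int] = field(default_factory=list)
    difficulty_votes: List[int] = field(default_factory=list)

    def quality(self):
        """Average peer quality vote, or None before any vote."""
        if not self.quality_votes:
            return None
        return sum(self.quality_votes) / len(self.quality_votes)

card = Flashcard("What is AAI?", "The Swiss federated authentication infrastructure.")
card.published = True
card.quality_votes += [3, 4]
print(card.quality())  # 3.5
```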
The workflow has to be simple and clear: students need to see their data immediately, login
must be simple (e.g. via AAI), and the service must be advertisement-free and cost-free. The
platform supports the sharing of small amounts of data for exam preparation (e.g. learning and
test flashcards), feedback features (feedback, voting), the building of communities, and
gamification; its guiding values are freedom, equality, trust, community, collaboration and
usability.
The main goal is not the provision of learning and test flashcards as such, but the contents,
which are created and made available by students. Students can enhance the quality of the
cards by providing mutual feedback and quality assessments. In this way, a dynamic question
pool is developed that can be readily retrieved by students via their AAI access. This
motivates students to work together, for example through the option of marking cards as
favourites, rating them as excellent or criticising them. This serves to increase the quality of
the cards, because together we are wiser. Acadima promotes two fundamental cooperative
processes: on the one hand, cards can be compiled on an altruistic basis for others to use as
well; on the other hand, students can work together on a joint basis. A shared benefit results.
People are naturally disposed to cooperate, to exchange information and tasks, and to share
their aims. Acadima can be used by university students and teachers for creating and working
with flashcards, as well as for sharing them with their fellow students. Acadima has been
designed in cooperation with universities and students. It is an optimized form of learning and
retaining new information. By sharing with others, students
can be saved from having to create their own sets of flashcards. Acadima enriches learning,
teaching and campus life not only for students but also for individual faculties and the wider
community. It enables teachers to engage more students in exciting new ways, reaching them
on their terms and via their devices, and keeping students both informed and involved.
An attractive aspect of Acadima is the link it provides between different universities and the
resulting openness. Individual cards can, and ought to, be exchanged, and it is
possible to collect useful cards in personalised card collections. The heterogeneous nature of
the subject matter becomes clear, and new facts can be linked together. Explorative learners
can move forward into previously unknown areas. The focus is on the contents. It would thus
be possible to refer to Acadima as "crowd-sourcing for the crowd". Acadima is a
straightforward knowledge-imparting tool for teachers and students that is fun to use. We
want to tie Acadima into a wider solution since, together, we can create a meaningful
application. Winning applications have to be sufficiently unique, and we can attain this goal
through the uniqueness of Switzerland's university network.
Since 2011, we have built student expert groups for the modules of the basic studies curriculum
in biology at the Division of Biology, University of Zurich. Students of higher semesters, such
as those in the advanced studies curriculum, join these groups as reviewers. In the end, we have
built up to 20 learning communities covering the main topics of the basic studies curriculum.
Their task is to create meaningful learning and test flashcards for exam preparation. In a
didactics course they have learnt to design good MC questions. Other students can profit from
the work of their fellow students and at the same time contribute with annotations, feedback
and ratings. Lecturers can set 'profproofed' icons.