Peer Evaluation
Introduction
Peer evaluation provides a scalable way to assess learner activities: learners submit their work and then evaluate the work of their peers, usually guided by rubrics. It is especially valuable in courses with large numbers of learners, where manual grading by instructors is not feasible. WikiEducator Peer Evaluation is a minimalistic tool that can be used for student/learner peer review and self evaluation. It can be integrated into wiki content or used as a standalone tool, and it is very simple to set up with customised rubrics.
This project started as part of Google Summer of Code (GSoC) 2014 for the Open Education Resource Foundation, supporting the WikiEducator and OERu initiatives.
Current iteration of the tool
The current iteration of the tool is a MediaWiki extension that adds peer evaluation capability to a wiki.
The WikiEducator Peer Evaluation extension is a minimalistic tool for applying peer evaluation and self evaluation techniques to wiki-based content. It was designed for courses whose activities require blog submissions, but it is generic enough to be used for other types of content and for peer review in general. Unlike other peer evaluation tools, it is very simple to set up and use. The user interface is clean, and the tool can be integrated into existing pages in the wiki. It provides the basic functionality of getting something evaluated or reviewed by peers without complicating the process.
Some use cases of this tool:
- WikiEducator based courses like the OCL4Ed course and the AST1000 course.
- Getting targeted feedback on content. Currently, feedback on wiki content is gathered through discussion/talk pages, but with that method it is difficult for the content author to indicate exactly what they want to know, and reviewers have little guidance on what feedback would be useful. The Peer Evaluation tool lets one ask for targeted feedback based on a set of rubrics; see the example at WikiEducator Beta.
The Quick start guide provides instructions for getting started with the tool. It also contains an example based on Assignment 1 of the WikiEducator AST1000 course.
Note: The tool is currently deployed on a beta WikiEducator installation ( Details ), but it can be used in any MediaWiki instance where the Peer Evaluation extension is installed. See the installation instructions to set it up.
Prototype testing during Open content licensing for educators course
The first iteration of the peer evaluation tool was tested during a snapshot-based Open content licensing for educators course (OCL4Ed 14.06). This basic, minimal version of the tool helped us gather feedback for the next iteration.
Link to the peer evaluation page in the course.
The prototype received 2539 overall pageviews, 63 evaluations and 25 activity submissions.
Prototype question examples
- Copyright MCQ e-learning activity - Example based on an MCQ activity from OCL4Ed to trial a custom evaluation rubric with weightings and objective responses.
- Learning reflection example - Example using an alternative rubric approach. Learning reflections are personal, so it is harder to define specific criteria for evaluation.
- Creative Commons remix activity - Another example with a custom assessment rubric.
Observations and reflections derived from prototype testing
- Criterion-referenced evaluations (e.g. minimum requirements for a "complete" post) should not be conflated with norm-referenced evaluations. Recommendation: develop a criterion-referenced option as a separate component of the peer rating system.
- The "Not achieved"; "Achieved"; and "Merrit" framework works well, but is better suited to criterion referenced evaluations. Recommendation: Restrict this typology for use of the "Completion" criterion.
- Evaluation rubrics are too complex and perhaps a little daunting for respondents who are required to read too much when providing a rating. Recommendation: simplify the rubrics.
- The point system approach trialled with the Learning Reflection rubrics looks good in theory, but when converted into percentage scores does not provide a fair and reasonable summative grade for responses. Recommendation: Drop the point system approach.
- The ten-point scale is too complex when working with detailed rubrics. Recommendation: simplify the rating model by using a 5-star rating system, which also has advantages for the UI because it follows a familiar rating convention of the web; a sketch of how per-criterion star ratings could be aggregated appears after this list.
- In theory, it was a good idea to give users the option to comment on every criterion, but in practice this does not work well. Recommendation: keep the comment field for the "Does this post relate to the question" item and use only one overall comment text area for all the other evaluation items.
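The following is a minimal Python sketch of how per-criterion star ratings could be aggregated into a percentage score under the simplified 5-star model recommended above. The criterion names, weighting scheme and function names are illustrative assumptions, not part of the Peer Evaluation extension.

    # Minimal sketch: aggregate per-criterion star ratings into a percentage.
    # Criterion names, weights and the 5-star scale are illustrative assumptions.
    MAX_STARS = 5

    def score_percentage(ratings, weights=None):
        """ratings: criterion name -> star rating (1..MAX_STARS).
        weights: optional criterion name -> relative weight."""
        if weights is None:
            weights = {criterion: 1.0 for criterion in ratings}
        total_weight = sum(weights[c] for c in ratings)
        weighted = sum(weights[c] * ratings[c] for c in ratings)
        return 100.0 * weighted / (MAX_STARS * total_weight)

    # Example: three criteria, with "relevance" weighted double.
    ratings = {"relevance": 4, "clarity": 3, "licensing": 5}
    weights = {"relevance": 2.0, "clarity": 1.0, "licensing": 1.0}
    print(round(score_percentage(ratings, weights), 1))  # 80.0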
Challenges
- Peer evaluations may lead learners to rate their friends highly and give low grades to others; they may form small groups and try to grade only among themselves. Some learners also tend to give everyone the same grade. These effects can be significantly reduced by randomly assigning which peers each learner grades. In the prototype model we will also try to counter grading bias by discarding the very high and very low grades; a sketch of both ideas follows this list.
- Peer evaluations strictly require deadlines to be met, both for submitting the activities (so that they are available for assessment by others) and for the evaluation itself.
- There may be cases where learners do not review the posts assigned to them. Allocating part of the grade to carrying out the evaluation itself should reduce this considerably.
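The two mitigations mentioned above, random reviewer assignment and discarding extreme grades, can be sketched in Python as follows. The function names, group sizes and the simple round-robin assignment are illustrative assumptions; a production implementation would also prevent the same author being assigned to a reviewer twice.

    # Minimal sketch of random reviewer assignment and a trimmed average grade.
    import random
    from statistics import mean

    def assign_reviewers(learners, reviews_per_learner=3, seed=None):
        """Map each learner to a random list of peers whose work they will evaluate."""
        rng = random.Random(seed)
        assignments = {learner: [] for learner in learners}
        for _ in range(reviews_per_learner):
            order = learners[:]
            rng.shuffle(order)
            # Each reviewer evaluates the next learner in the shuffled order,
            # so nobody is assigned their own submission in this round.
            for reviewer, author in zip(order, order[1:] + order[:1]):
                assignments[reviewer].append(author)
        return assignments

    def trimmed_grade(ratings):
        """Average the ratings after dropping the single highest and lowest."""
        if len(ratings) <= 2:
            return mean(ratings)
        return mean(sorted(ratings)[1:-1])

    learners = ["alice", "bob", "carol", "dave"]
    print(assign_reviewers(learners, reviews_per_learner=2, seed=42))
    print(trimmed_grade([2, 4, 4, 5, 9]))  # 4.33...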
Next Steps and Ideas Being Explored
- A feedback and rating system for the evaluations themselves.
- Rather than the plan suggested above, a calibration algorithm that adjusts learners' grades based on a sample of grades assigned by instructors. This could be implemented as a sample evaluation task completed before learners evaluate the actual assignments (see the sketch after this list).
- Measurement of the reliability of a student's grades as a course progresses, and perhaps universally across all courses offered.
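A possible shape for the calibration and reliability ideas above is sketched below in Python: learners first grade instructor-graded sample submissions, their agreement with the instructor yields a reliability weight, and that weight is used when combining peer grades. The weighting formula and 10-point scale here are illustrative assumptions, not a specification of the tool.

    # Minimal sketch of calibration against instructor grades and
    # reliability-weighted combination of peer grades. The formulas are
    # illustrative assumptions only.

    def reliability(learner_grades, instructor_grades, scale=10.0):
        """Return a weight in [0, 1]; 1.0 means perfect agreement with the instructor."""
        errors = [abs(l - i) for l, i in zip(learner_grades, instructor_grades)]
        mean_error = sum(errors) / len(errors)
        return max(0.0, 1.0 - mean_error / scale)

    def weighted_peer_grade(peer_grades, weights):
        """Combine peer grades, giving more influence to reliable evaluators."""
        total = sum(weights)
        if total == 0:
            return sum(peer_grades) / len(peer_grades)
        return sum(g * w for g, w in zip(peer_grades, weights)) / total

    # Calibration round: two learners grade the same instructor-graded samples.
    w_a = reliability([7, 5, 9], [8, 5, 9])  # close to the instructor -> high weight
    w_b = reliability([2, 9, 4], [8, 5, 9])  # far from the instructor -> low weight
    print(round(w_a, 2), round(w_b, 2))      # 0.97 0.5
    print(weighted_peer_grade([8, 3], [w_a, w_b]))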
References
Some ideas have been derived from:
- The Moodle Workshop module
- Tuned Models of Peer Assessment in MOOCs
- Suggestions by Wayne Mackintosh and Brian Mulligan
- Student peer review
Additional resource links
- Coursera - How peer assessments work.
- Generally, peer assessments are not allowed until the evaluation phase begins after the submission deadline. (At OERu, once a learner has opted in for peer assessment, they could be added to the pool for evaluation. Perhaps the Peer Evaluation tool could provide an option for a date-bound start to evaluation, in addition to allowing evaluations as soon as a learner has opted in.)
- I like the optional "learn to evaluate" feature where peer ratings of an example assignment are compared with a teacher evaluated assignment.
- Reflections from Chuck Severance on Coursera Rubric - A useful approach for setting up rubrics.
- e/merge Africa MOOC Study Group discussions - specifically Peer assessing. The study group is participating in the Learn To Teach Online MOOC, and is having a meta-discussion about their experiences as learners as a follow-on to the e/merge Africa course.
- Course Builder Peer Review
- Video explaining Peer Review Feature: http://www.youtube.com/watch?v=5ERlbCXAkDg