Peer evaluation
I have given the peer evaluation workflow a bit of thought.
- The peer evaluation dashboard will be very similar to the Activity register for a course, like http://wikieducator.org/Open_content_licensing_for_educators/E-Activity_register but with the following changes:
- A learner can only see posts for an activity to which they have themselves submitted.
- Only posts which have fewer than a fixed number of reviews will be visible in this dashboard. For example, a course might require each activity to receive 2-5 peer reviews; if the cap is 5, only posts with fewer than 5 reviews will be displayed in this dashboard.
- Posts will be sorted by number of peer reviews (fewest first), and then newest first, i.e. the latest posts with no peer evaluations yet will appear highest in the list (a rough sketch of this filtering and ordering follows the list).
- The table will look like https://www.dropbox.com/s/vkmvxj4kx73ekjd/IMG_20140308_221215.jpg (a very rough diagram).
- The user can then click on a particular row to go to the peer evaluation page of that post.
- It will contain a link to the actual blog post, where the learner can go and have a look.
- It will look something like https://www.dropbox.com/s/itggpqb7z8hmzmp/IMG_20140308_221558.jpg (a very rough diagram).
- The course can also set a minimum number of posts that each user has to review for each activity, say 3 or 4. This will not only serve the purpose of evaluation but also increase learning, since learners will be reading their peers' posts, without taking away their freedom to choose which posts to review. They would get some credit for each post they review (the quota check is also sketched below).
The above workflow ensures that all posts get peer reviewed, while learners keep the freedom to choose which posts to read from among those that have not yet been fully evaluated.
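To make the listing rules concrete, here is a minimal Python sketch of the filtering, ordering, and per-user quota described above. All names (`Post`, `MAX_REVIEWS`, `MIN_REVIEWS_PER_USER`, `dashboard_posts`, `has_met_quota`) are hypothetical, not an existing API:

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class Post:
    author: str
    activity: str
    url: str
    submitted_at: datetime
    review_count: int = 0

MAX_REVIEWS = 5           # course setting: reviews after which a post leaves the dashboard
MIN_REVIEWS_PER_USER = 3  # course setting: reviews each learner must contribute per activity

def dashboard_posts(posts):
    """Posts still needing reviews: least-reviewed first, newest first within a tie."""
    pending = [p for p in posts if p.review_count < MAX_REVIEWS]
    return sorted(pending,
                  key=lambda p: (p.review_count, -p.submitted_at.timestamp()))

def has_met_quota(user, activity, reviews):
    """reviews: (reviewer, activity) pairs. True once the per-activity quota is met."""
    done = sum(1 for reviewer, act in reviews
               if reviewer == user and act == activity)
    return done >= MIN_REVIEWS_PER_USER
```

Since both thresholds are plain course settings, the same logic covers a course requiring anywhere from 2 to 5 reviews per post.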
Because we don't have a clear way to tie a user account to a natural person, self-assignment allows the creation of sock-puppets for grading one's own work or favorably marking friends. At the same time it allows smaller self-organized cohorts to work within a larger class. (My own experience in MOOCs suggests the latter is helpful and should be encouraged.)
I actually think all the submissions should be visible, with those needing review being highlighted. I don't want to prevent a motivated student from reading everything (not that he is likely to review everything).
It would be important to record the time of the review (since remote content could change after review).
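A minimal sketch of what a stored review record might carry to support this; the field names are assumptions, not an existing schema:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class Review:
    post_url: str
    reviewer: str
    comments: str
    # Recorded at review time, so a post edited after this moment can be
    # flagged as "changed since review".
    reviewed_at: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc))
```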
I think some sort of peer review is important to the Academic Volunteers International scheme. It also suggests there might be multiple types of reviewers, possibly categories like: current student, student who has completed the course, tutor, community volunteer.
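If reviewer categories were modelled explicitly, it could be as simple as the following sketch (category names taken from the list above; the enum itself is hypothetical):

```python
from enum import Enum

class ReviewerType(Enum):
    CURRENT_STUDENT = "current student"
    COMPLETED_STUDENT = "student who has completed the course"
    TUTOR = "tutor"
    COMMUNITY_VOLUNTEER = "community volunteer"
```

A review record like the one sketched earlier could then carry a `reviewer_type` field, so reviews from tutors and peers remain distinguishable.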