Rubrics for Peer Evaluation


Great start. The screen mockups really help.

Peer Evaluations need more guidance.

  • Including a simple rubric with 3-4 criteria and 3-4 descriptions of completion should be accommodated, even if the instructor can elect not to use it (see the sketch after this list).
  • The instructor should be able to define the rubric. Having some pre-made rubrics would demonstrate good practice.
  • The evaluation form could/should be integrated into the response - not a separate screen.
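
To illustrate what is being asked for (purely a sketch - the field names are hypothetical, not the project's actual data model), a simple customisable rubric could look something like this:

    # Illustrative sketch only: one way a customisable rubric with a handful of
    # criteria, each carrying 3-4 completion-level descriptors, could be stored.
    RUBRIC = {
        "title": "Learning reflection",
        "criteria": [
            {
                "name": "Clarity of context",
                "levels": [
                    {"score": 0, "descriptor": "Context is missing or unclear"},
                    {"score": 1, "descriptor": "Context is stated but incomplete"},
                    {"score": 2, "descriptor": "Context is clear and complete"},
                ],
            },
            {
                "name": "Depth of reflection",
                "levels": [
                    {"score": 0, "descriptor": "No evidence of reflection"},
                    {"score": 1, "descriptor": "Describes events without analysis"},
                    {"score": 2, "descriptor": "Analyses outcomes and draws lessons"},
                ],
            },
        ],
    }

    def max_score(rubric):
        """Total marks available if the top level of every criterion is awarded."""
        return sum(max(lvl["score"] for lvl in c["levels"]) for c in rubric["criteria"])

An instructor-defined rubric would simply replace the criteria, pre-made rubrics could be shipped as a library of such structures, and the evaluation form could render one criterion at a time inline with the response rather than on a separate screen.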

Peer Evaluation is a learning opportunity for the evaluator and the instructor as well as the learner whose work is being reviewed.


Peer groups - some courses use peer groups or teams, either learner-formed or assigned, and the evaluations then take place within the peer group. In that case it is more likely that the identity of the evaluator would be important and visible.

Vtaylor (talk) 23:24, 20 May 2014

Thank you for your views on rubrics. It is true that Peer Evaluation needs more guidance. I will include these suggestions in the first prototype version.

  • Although Peer Groups would promote learning and collaboration, they may not be ideal for evaluation. Learners within a peer group may tend to give everyone the same kind of grades; for example, the members of one particular group may all be friends and give each other very high grades.
Akash Agarwal (talk) 19:47, 21 May 2014
 

I have had excellent experiences with peer evaluation and believe that if the task to be marked in this way is designed with the peer evaluation in mind, you can have great outcomes. I think that personal reflective assignments are very important and relevant to learning, and there is no problem with using peer evaluation so long as the reflection is structured more clearly. The reflection task example used to guide this project could, I think, be improved: rather than one blanket open-ended question, it needs to be broken down into smaller pieces, with a rubric and marks to be allocated for each by the peer evaluator. For example, we have used the CARL framework for some time (and I cannot put my finger on the academic reference for this right at the moment), which runs roughly as follows:

  • Context: what happened?
  • Action: what did you do?
  • Reflection: what happened next? Were you effective? Did you get the result you intended? How did you handle any unexpected outcomes?
  • Learning: insight into your strengths and weaknesses and plans for action emerging from this learning experience.

As you can imagine, each of the questions above has the potential to have a sliding scale of marks assigned to it via a rubric, which provides guidance and reduces the need for subjective judgement on the success of the task/learning.
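
For instance (a hypothetical illustration only - the questions and scales below are examples, not prescribed values), the marks might break down like this:

    # Hypothetical illustration: each CARL question carries its own small scale,
    # so the evaluator picks a level per question instead of making one overall
    # subjective judgement about the whole reflection.
    carl_questions = {
        "Context: what happened?": 3,               # maximum marks per question
        "Action: what did you do?": 3,
        "Reflection: were you effective?": 3,
        "Learning: what will you take forward?": 3,
    }

    # Levels awarded by one (hypothetical) peer evaluator
    awarded = {
        "Context: what happened?": 3,
        "Action: what did you do?": 2,
        "Reflection: were you effective?": 2,
        "Learning: what will you take forward?": 3,
    }

    total, maximum = sum(awarded.values()), sum(carl_questions.values())
    print(f"Peer mark: {total}/{maximum}")  # Peer mark: 10/12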

In summary - I think the design of the task is very important, and it can make or break peer evaluation. In designing tools for it, I think we need to have model assignments and model criteria/rubrics that go with them.

On a slightly different tack, relating to the distribution of marks: I like to have a mix of criteria/marks for the outcome AND for the process. Sometimes students work hard but still don't get a good outcome; sometimes students wing it with a pretty loose process but know enough to get a good outcome. If we want to help students establish good lifelong learning skills and self-directed learning skills, then I think it is useful to have a series of questions/criteria/marks focusing on the PROCESS of the task rather than just on the outcome. For example, you can ask students to submit information about the "how" of their process, e.g.: Did you allow yourself sufficient time to undertake this task? How many sources of information did you consider? If you got stuck, did you seek help, and if so from whom? What feedback did you get from your co-learners and how did you incorporate it into your work? How does this learning relate to your prior experience at work or in your personal life? Extra marks can be allocated in the rubrics by peer evaluators relatively simply.

In regards to variation in marks, the kinds of methods discussed seem reasonable. I am very keen on the "rate the feedback" mechanism, however. One common frustration from students who feel hard done by in peer marking is "I feel like they did not read my submission properly; they said I didn't address X but it was right there on page Y". It would be great if those receiving feedback could rate the feedback, and if a summary/aggregate of that was sent back to the evaluator. It might make these students allocate a little more time and care to their evaluations. If we have evaluators who are regularly getting low scores, could we remove them from the pool of assessors? Similarly, if a student had more than one unfair piece of feedback, could they request via the system to have an additional evaluator assigned instead?
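
A minimal sketch of the "rate the feedback" idea (thresholds and function names are assumptions for illustration only) might aggregate the ratings an evaluator receives and only remove them from the pool once a consistent pattern emerges:

    # Sketch only: summarise the ratings recipients give an evaluator's feedback
    # and apply an illustrative removal rule. Thresholds are assumptions.
    from statistics import mean

    def evaluator_summary(ratings):
        """ratings: 1-5 scores that recipients gave this evaluator's feedback."""
        return {"count": len(ratings), "average": round(mean(ratings), 2)}

    def should_remove_from_pool(ratings, min_ratings=5, cutoff=2.0):
        """Drop an evaluator only after enough ratings consistently fall below the cutoff."""
        return len(ratings) >= min_ratings and mean(ratings) < cutoff

    ratings_for_evaluator = [1, 2, 2, 1, 3, 2]
    print(evaluator_summary(ratings_for_evaluator))       # {'count': 6, 'average': 1.83}
    print(should_remove_from_pool(ratings_for_evaluator)) # True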

Slambert (talk) 19:23, 2 June 2014

Thanks a lot for taking the time to share your valuable experiences and thoughts. These will help in taking the project a step further in the right direction.

Regarding the 'rate the feedback' mechanism, we do plan to have a system which tries to judge the credibility of the students with regard to the accuracy of their evaluations, in addition to letting them provide feedback. Not only do we plan to let learners give feedback on the evaluations and flag any they think are incorrect or improperly done, we are also thinking of having a karma system where the credibility of the evaluators is measured over time, and the weighting of evaluations could then be distributed on that basis. For example, an evaluation by someone who evaluates the same person every time would be given less importance than one by someone who tends to evaluate a wider section of the learners. This system would give learners an extra incentive to do the peer evaluation task the right way, and also provide a mechanism to detect learners who try to take advantage of the system.
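
A rough sketch of how such a weighting could work (the names and the formula below are illustrative assumptions, not the final design):

    # Illustrative sketch only: weight each evaluation by the evaluator's karma
    # and down-weight evaluators who repeatedly mark the same learner.
    def repeat_penalty(evaluator, learner, history):
        """history: list of (evaluator_id, learner_id) pairs from past assignments."""
        previous = sum(1 for e, l in history if e == evaluator and l == learner)
        return 1.0 / (1 + previous)   # halve the weight after one repeat, and so on

    def weighted_grade(scores, karma, learner, history):
        """scores: dict evaluator_id -> score for this learner's submission.
        karma: dict evaluator_id -> credibility in [0, 1] built up over time."""
        weights = {e: karma.get(e, 0.5) * repeat_penalty(e, learner, history)
                   for e in scores}
        total_weight = sum(weights.values())
        if not total_weight:
            return None
        return sum(weights[e] * s for e, s in scores.items()) / total_weight

    # Example: evaluator "a" has marked "learner1" before, so their score counts less.
    history = [("a", "learner1")]
    karma = {"a": 0.9, "b": 0.6}
    print(weighted_grade({"a": 9, "b": 5}, karma, "learner1", history))  # ≈ 6.71, not the plain mean of 7.0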

Akash Agarwal (talk) 20:25, 2 June 2014
 

Hi Sarah,

Appreciate your feedback.

Output guidelines like "Context: what happened? Action: what did you do? Reflection: what happened next? Were you effective? Did you get the result you intended? How did you handle any unexpected outcomes?" are extremely valuable to guide and support learners with preparing learning reflections, but I don't think they're useful as evaluation criteria.

It would be difficult for a peer to reliably evaluate a learner's response to "did you get the result you intended?"

You're right - the design of the task is critically important - but that's something the course developer / designer is responsible for and difficult to integrate into the peer evaluation technology engine. If academics design unreliable evaluation criteria, the peer evaluation outputs will be unreliable.

Agreed - the PROCESS of the task is more important than the outcome -- but harder to evaluate from a peer evaluation perspective. I think we need to be realistic and not expect a peer evaluation tool to evaluate the process. The peer evaluators will see the outputs of the process -- not the process itself. Candidly, we can only ask peers to evaluate what they can realistically observe.

As Akash has indicated, he is planning to incorporate an "appeal" feature where the learner can flag evaluations which they believe are not fair or accurate. I think it would be useful to incorporate a text field for the learner to state why the evaluation is not a fair and accurate representation of their work.

The design of the underlying mathematical model to deal with all the dimensions of reliability, excluding problematic scores and so on, is a complex challenge - particularly when dealing with small cohort enrolments, because inferential models would not be reliable in this context.

My own feeling is that the design needs to be incremental and realistic. The GSoC project is working to a tight deadline - getting the basic functionality of the peer evaluation technology working is more important than refining the mathematical model as a first step. If learners are able to flag questionable evaluations, that's sufficient for the first iteration in my view. Perfecting the mathematical model is the next incremental step.
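
For illustration only, a minimal first-iteration aggregation along these lines might simply exclude flagged evaluations and average the rest (function and field names are hypothetical):

    # Sketch of a first-iteration aggregation: drop flagged evaluations, average the rest.
    def first_iteration_grade(evaluations):
        """evaluations: list of dicts like {"score": 7, "flagged": False}."""
        usable = [e["score"] for e in evaluations if not e["flagged"]]
        if not usable:
            return None  # everything flagged: route to instructor review
        return sum(usable) / len(usable)

    evals = [{"score": 8, "flagged": False},
             {"score": 2, "flagged": True},
             {"score": 7, "flagged": False}]
    print(first_iteration_grade(evals))  # 7.5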

Mackiwg (talk) 23:32, 3 June 2014
 

Hi Valerie,

We have developed prototype rubric examples which accommodate multiple, customisable criteria with corresponding descriptions for the different grade levels.

Would you be in a position to provide additional examples with corresponding rubrics to assist with the design? That would help us tremendously.

Agreed - where possible integrating the criterion descriptors and requirements for each grade level in the response form would be ideal.

Appreciate the feedback - thanks!

Mackiwg (talk) 23:09, 3 June 2014