Rubrics for Peer Evaluation

Fragment of a discussion from Talk:Peer Evaluation

Hi Sarah,

Appreciate your feedback.

Output guidelines like "What happened? Action: what did you do? Reflection: what happened next? Were you effective? Did you get the result you intended? How did you handle any unexpected outcomes?" are extremely valuable for guiding and supporting learners in preparing learning reflections, but I don't think they're useful as evaluation criteria.

It would be difficult for a peer to reliably evaluate a learner's response to "did you get the result you intended?"

You're right - the design of the task is critically important - but that's the responsibility of the course developer / designer and difficult to integrate into the peer evaluation engine itself. If academics design unreliable evaluation criteria, the peer evaluation outputs will be unreliable.

Agreed - the PROCESS of the task is more important than the outcome -- but harder to evaluate from a peer evaluation perspective. I think we need to be realistic and not expect a peer evaluation tool to evaluate the process. The peer evaluators will see the outputs of the process -- not the process itself. Candidly, we can only ask peers to evaluate what they can realistically observe.

As Akash has indicated, he is planning to incorporate an "appeal" feature where the learner can flag evaluations which they believe are not fair or accurate. I think it would be useful to incorporate a text field for the learner to state why the evaluation is not a fair and accurate representation of their work.
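
To make the idea concrete - and this is only a sketch, not a specification of what Akash is building - the appeal could be stored as a simple record alongside the evaluation it disputes. All field names here are hypothetical:

 from dataclasses import dataclass
 from datetime import datetime

 @dataclass
 class Appeal:
     """Hypothetical record of a learner's appeal against a peer evaluation."""
     evaluation_id: str        # the evaluation being disputed
     learner_id: str           # the learner lodging the appeal
     reason: str               # free-text field: why the evaluation is not fair or accurate
     submitted_at: datetime    # when the appeal was lodged
     resolved: bool = False    # set once a moderator or re-evaluation settles it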

The design of the underlying mathematical model to deal with all the dimensions of reliability, excluding problematic scores and so on, is a complex challenge - particularly with small cohort enrolments, where inferential models would not be reliable.
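
By way of illustration only - assuming scores on a simple numeric scale, and not presuming what the final model should be - even a basic robust aggregation like the one below (median plus flagging of scores that deviate sharply) shows why small cohorts are tricky: with only two or three evaluators there is no statistical basis for excluding anything.

 import statistics

 def aggregate_peer_scores(scores, flag_threshold=1.5):
     """Combine peer scores robustly and flag possible outliers.

     Uses the median rather than the mean so a single problematic score
     has limited influence, and flags scores that sit far from the median
     for review. With very small cohorts (fewer than three evaluators)
     there is too little data to exclude anything, so all scores are kept
     and the result is simply marked as low confidence.
     """
     if len(scores) < 3:
         return {"score": statistics.median(scores) if scores else None,
                 "flagged": [],
                 "low_confidence": True}
     centre = statistics.median(scores)
     flagged = [s for s in scores if abs(s - centre) > flag_threshold]
     return {"score": centre, "flagged": flagged, "low_confidence": False}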

My own feeling is that the design needs to be incremental and realistic. The GSoC project is working to a tight deadline - getting the basic functionality of the peer evaluation technology working is more important than refining the mathematical model as a first step. If learners are able to flag questionable evaluations, that's sufficient for the first iteration in my view. Perfecting the mathematical model is the next incremental step.

Mackiwg (talk) 23:32, 3 June 2014