Comments from Brian

Fragment of a discussion from Talk:Peer Evaluation

Thanks for posting the link to that paper!

As a technician, I see several elements in it that need consideration:

  • having students review submissions that have already been reviewed by "experts" (ground truth), which is a variation on Mika's comment about a library of sample works
  • partitioning reviewers by native language in an attempt to remove that bias
  • recording "time spent grading" a submission is challenging in a distributed environment like the OERu courses that have been offered to date
    • (Their "sweet spot" of 20 minutes spent grading an assignment sounds like a significant time commitment for our mOOC assignments.)
  • if karma is used, it may be necessary to factor in the marks an evaluator has received, not just those they have given (and had commented on); see the first sketch after this list
  • a large discrepancy in scores might signal the need to assign additional reviewers to a particular submission
  • how to present scores in a meaningful way, especially if different weights are being applied, some evaluations are discarded, etc., in an environment where individual evaluations are open (the second sketch below touches on this and on the previous point)
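
Purely to make the karma point concrete, here is a rough sketch (not taken from the paper) of one way an evaluator's karma could factor in both the marks they have received on their own submissions and how closely the marks they give track the consensus. The 0-1 mark scale, the 50/50 weighting, and the function name are my own assumptions:

    # Hypothetical karma score combining quality as an author and agreement as a grader.
    from statistics import mean

    def karma(received_marks, given_marks, consensus_marks, w_received=0.5, w_given=0.5):
        """received_marks: marks this evaluator earned on their own work (0-1 scale).
        given_marks / consensus_marks: parallel lists of the marks this evaluator
        gave and the consensus marks for the same submissions."""
        quality_as_author = mean(received_marks) if received_marks else 0.5
        if given_marks:
            # Agreement with consensus: 1.0 means the evaluator matched consensus exactly.
            agreement = 1.0 - mean(abs(g - c) for g, c in zip(given_marks, consensus_marks))
        else:
            agreement = 0.5  # neutral prior for evaluators with no grading history yet
        return w_received * quality_as_author + w_given * agreement

The neutral 0.5 prior just avoids penalising evaluators who have not graded anything yet; any real scheme would need to be tuned against our own data.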
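And a minimal sketch of the last two points: flagging a submission for an extra reviewer when its scores spread too widely, and reporting a karma-weighted mean alongside the open individual marks so the final figure stays explainable. The spread threshold and the weighting by karma are invented here for illustration, not something the paper prescribes:

    # Hypothetical discrepancy check and karma-weighted aggregation (0-1 mark scale).
    from statistics import pstdev, mean

    def needs_extra_reviewer(scores, max_spread=0.25):
        """Flag a submission whose reviewer scores disagree more than max_spread."""
        return len(scores) >= 2 and pstdev(scores) > max_spread

    def weighted_score(scores, karmas):
        """Weight each evaluation by its evaluator's karma; assumes scores is non-empty.
        Falls back to a plain mean if all karmas are zero."""
        total = sum(karmas)
        if total == 0:
            return mean(scores)
        return sum(s * k for s, k in zip(scores, karmas)) / total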
JimTittsler (talk) 19:21, 19 May 2014