
Understanding the pedagogical goals of "peer evaluation" in the OERu would help me assess the possible technical solutions.

  1. Are educators interested in the scores students give to each other? In the comments?
  2. Is the goal to drive engagement, by almost forcing students to read and respond to other students?
    1. And if so, do they expect the student to revise a particular submission prior to the end of the course?
  3. How should evaluations done by people outside the student cohort be weighted?
    1. If this is part of the Academic Volunteers International initiative, how is the "community" valued?
    2. If a former student of a particular course reviews current work, how is their opinion valued? (And does karma carry over?)
  4. Is participating as an evaluator ever a requirement for completing the course? (certificate of completion? being granted credit by a partner institution?)
  5. Is it useful to be able to report instances of suspected plagiarism?
JimTittsler (talk) 17:53, 19 May 2014

A few thoughts relating to the pedagogical aspects raised by Jim above:

  1. Generally speaking, I think the learner (evaluee) is more interested in the scores and comments than the educator, for formative and learning-support reasons. For courses operating at scale there may be too many ratings for educators to consider meaningfully. I do think educators would be interested in aggregated results, e.g. the number of ratings submitted, average ratings, etc. This data would also be of interest to the learner group, so we need to think about how aggregated results are reported. A "live" feed of aggregated stats would be a nice feature. I can also imagine scenarios where educators would be interested in individual scores:
    1. Dealing with student appeals where the learner disagrees with the ratings, in cases where the evaluation contributes to the final achievement score.
    2. Cases where the system flags ratings deemed to be questionable.
  2. From my perspective, the goal is to offer a range of options in OERu courses to improve the learning experience, taking into account that OERu learners participate for a range of different reasons. I don't think that learners who are popping in out of self-interest should be "forced" to participate - peer evaluation should be an "opt-in" component of assessment and certification. For example, learners interested in receiving certification of participation could be required to participate as evaluators.
  3. Ideally, I would like to see a system which can accommodate evaluations done by individuals outside of the current course. Perhaps the system needs to distinguish "assigned raters" - those whom the system assigns to the evaluation - from "non-assigned raters"; we can then decide at a later stage how we implement or recognise these ratings.
  4. I would recommend that the system support an option whereby participating as an evaluator is a requirement for certain forms of credentialing. There may also be courses which do not require participation as an evaluator.
  5. I think it would be valuable to be able to flag instances of suspected plagiarism -- however, we need to think carefully about privacy rights, particularly in cases where the alleged plagiarism is not validated. If we incorporate this kind of flagging, my gut feeling is that it needs to be confidential.
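The aggregation and weighting ideas in points 1 and 3 above could be sketched roughly as follows. This is a minimal illustration only, not part of any OERu specification: the `Rating` record, the `assigned` flag, and the half-weight for non-assigned (community or alumni) raters are all hypothetical choices made for the example.

```python
from dataclasses import dataclass

@dataclass
class Rating:
    evaluee: str
    rater: str
    score: float     # e.g. on a 1-5 scale (assumed scale)
    assigned: bool   # True if the system assigned this rater to the evaluation

# Hypothetical weighting: assigned raters count fully,
# non-assigned raters (community volunteers, former students) count half.
WEIGHTS = {True: 1.0, False: 0.5}

def aggregate(ratings, evaluee):
    """Return (number of ratings, weighted average score) for one evaluee."""
    rs = [r for r in ratings if r.evaluee == evaluee]
    if not rs:
        return 0, None
    total_weight = sum(WEIGHTS[r.assigned] for r in rs)
    weighted_sum = sum(WEIGHTS[r.assigned] * r.score for r in rs)
    return len(rs), weighted_sum / total_weight

ratings = [
    Rating("alice", "bob", 4.0, assigned=True),     # assigned peer rater
    Rating("alice", "carol", 2.0, assigned=False),  # community volunteer
]
count, avg = aggregate(ratings, "alice")
```

The same aggregates (count, weighted average) could feed a "live" stats display for learners and educators, while keeping individual scores visible only where appeals or flagged ratings make that necessary.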
Mackiwg (talk) 09:56, 20 May 2014