Thread: Feedback on user interface mockups (1)

Akash, thanks for your effort in preparing the interface mockups - this is extremely valuable for educators (like myself) to think systematically about the design of the peer evaluation system.

Mockup for submitting / registering an assignment


 * 1) In the OERu nomenclature, it is better to use the term "E-learning Activity" (rather than "assignment"). The term "assignment" is typically reserved for summative assessment, that is, the assignments learners are required to submit for formal academic credit.
 * 2) We need to think about the behaviour in cases where a user does not have a current WikiEducator session - for example, displaying a message and login link telling the user they must log in.
 * 3) Drawing on my experience from the OCL4Ed courses, a major challenge with inexperienced blog users is that they register the wrong URL (for example, providing the link to the homepage rather than the individual blog post, or providing the link to the editor's view rather than the published blog post). The design could "automate" some checks to minimise these errors, for example:
 * 4) Requiring users to add a unique tag to the post - if the tag is identified, it could pre-populate the URL.
 * 5) The URL field could filter for known errors (e.g. a link to the homepage, or the edit view of a known blog platform).
 * 6) Would it be useful / appropriate to incorporate a feature for the user to "check" the link before ticking the "Opt in for evaluation" button?
 * 7) I assume the "Agree to Peer Evaluation" link will document the terms of service and conditions for the peer evaluation system. We need to cover a number of legal aspects here.
 * 8) Do we need to cater for an option for learners to "opt out" of peer evaluation at any time? My gut feeling is that this should be an option, as in the case of providing user autonomy to unsubscribe from an email list.
 * 9) I think it's advisable to incorporate a self-evaluation option in the system. If a submission is flagged as a self-evaluation item, would it be appropriate to trigger the self-evaluation items immediately after the user has opted in to evaluation?
 * 10) This mockup is a good example of a mix of activities. For example, we would not expect a value judgement on a personal learning reflection, so the evaluation items would be restricted to yes / no validations. Conversely, a number of activities in the OCL4Ed example would present opportunities for qualitative value judgements. I will make a selection of OCL4Ed activities and develop rubrics which we can use for prototyping.
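A minimal sketch of how the automated URL checks suggested in points 3-5 might work. The function name, regular expressions, and warning messages are all illustrative assumptions, not a tested rule set - real blog platforms would need their own patterns:

```python
import re

# Hypothetical heuristics for common blog-URL registration errors.
# The patterns are illustrative, not exhaustive.
KNOWN_ERROR_PATTERNS = [
    (re.compile(r"^https?://[^/]+/?$"),
     "This looks like a blog homepage, not an individual post"),
    (re.compile(r"wp-admin|/edit(/|$)|action=edit"),
     "This looks like an editor view, not the published post"),
]

def check_submission_url(url: str) -> list[str]:
    """Return a list of warnings for a submitted blog-post URL."""
    warnings = []
    if not url.startswith(("http://", "https://")):
        warnings.append("Not a valid http(s) URL")
    for pattern, message in KNOWN_ERROR_PATTERNS:
        if pattern.search(url):
            warnings.append(message)
    return warnings
```

A "check link" button (point 6) could simply run these heuristics and display any warnings before the learner ticks "Opt in for evaluation".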

Screen listing assigned evaluations


 * 1) Would it be useful to include the due date and time for the evaluations -- with automated conversion to the local time zone? For example: "Please evaluate the following submissions for the 2nd learning reflection by 15 June 2014 at ."
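The time-zone conversion could work roughly as sketched below: due dates are stored in UTC and rendered in the learner's local zone at display time. The date, time, and the idea that the zone comes from the learner's profile or browser are illustrative assumptions:

```python
from datetime import datetime, timezone
from zoneinfo import ZoneInfo  # Python 3.9+

# Illustrative: store the due date in UTC, convert on display.
due_utc = datetime(2014, 6, 15, 12, 0, tzinfo=timezone.utc)

def format_due_date(due: datetime, user_tz: str) -> str:
    """Render a UTC due date in the user's local time zone,
    e.g. for display in the assigned-evaluations list."""
    local = due.astimezone(ZoneInfo(user_tz))
    return local.strftime("%d %B %Y at %H:%M %Z")

# e.g. format_due_date(due_utc, "Pacific/Auckland")
```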

Evaluation submission screen


 * 1) This is a perennial debate among educators in the wiki community. Users with limited experience in browsing online content frequently get lost (e.g. they forget that the browser has a back button ;-)). Some learning designers would recommend that the link to the post being evaluated open in a new window or tab so that the evaluation form remains open. Something for us to think about - I'm not sure what the best solution is.
 * 2) Regarding the validation item ("Is the content of the post in response to the activity concerned?"):
 * 3) * I would consider adding an optional comment field for the submitter to indicate the reason why it is not related, e.g. an incorrect URL.
 * 4) * If the user answers "No" to the first validation question, the remaining items are redundant. Should we consider a behaviour where the remaining evaluation items only appear after the post is validated as a response to the activity?
 * 5) In cases supported by a rubric, we need to provide a link to the evaluation rubric which specifies the criteria.
 * 6) To improve completion of all the fields for a "valid record", I would suggest making the evaluation items required fields (comments optional). If not, we will need to think about what constitutes a valid evaluation.
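The gating behaviour (hide the remaining items until the post is validated) and the required-field rule could be modelled along these lines. The field names ("rating", "strengths", "suggestions") are hypothetical placeholders for whatever evaluation items we settle on:

```python
def visible_items(responses: dict) -> list[str]:
    """Show only the validation question until the post is confirmed
    as a response to the activity; then reveal the remaining items."""
    items = ["is_response_to_activity"]
    if responses.get("is_response_to_activity") == "yes":
        # Hypothetical evaluation items; comments stay optional.
        items += ["rating", "strengths", "suggestions"]
    return items

def is_valid_evaluation(responses: dict) -> bool:
    """An evaluation is valid when every visible item is answered."""
    return all(responses.get(item) for item in visible_items(responses))
```

Under this sketch, a "No" on the validation question still counts as a complete evaluation, which matches the idea that the remaining items become redundant.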

Do we need mockup screens for:


 * 1) The learner's view of the ratings and comments from peers
 * 2) * An issue which will require considerable debate and discussion is the transparency of ratings. That is: should the evaluee see the identity of the evaluator? Should evaluations remain anonymous, with the user provided only aggregated scores? Etc. I think the best strategy is to prototype and get feedback from real learners using an online survey.
 * 3) * Do we provide the capability for the evaluee to flag "spam" comments? This has been a big problem with some of the xMOOC experiences.

This is progressing well. The mockups have certainly helped us think about prospective issues, and hopefully we can make a few decisions before delving into the code - or at least design the code engine in a way that is flexible enough to tweak as we gain a better understanding of peer evaluation in the OERu context.