Feedback on user interface mockups


Akash, thanks for your effort in preparing the interface mockups - this is extremely valuable in helping educators (like myself) think systematically about the design of the peer evaluation system.

Mockup for submitting / registering an assignment

  1. In the OERu nomenclature, it is better to use the concept "E-learning Activity" (rather than "assignment"). The term "assignment" is typically used for summative assessment, that is, the work learners are required to submit for formal academic credit.
  2. We need to think about the behaviour in cases where a user does not have a current WikiEducator session - for example, displaying a message and a login link telling the user that they must log in.
  3. Drawing on my experience from the OCL4Ed courses, a major challenge with inexperienced blog users is that they register the wrong URL (for example, providing the link to the blog homepage rather than the individual post, or the link to the editor's view rather than the published post). The design could "automate" some checks to minimise these errors, for example:
    1. Requiring users to add a unique tag to the post - if the tag is identified, it could pre-populate the url.
    2. The URL field could filter for known errors (e.g. a link to the homepage or to the edit view of known blog platforms).
    3. Would it be useful / appropriate to incorporate some feature for the user to "check" the link before ticking the "Opt in for evaluation" button?
  4. I assume the agree to Peer Evaluation link will document the "terms of service and conditions" for the peer evaluation system. We need to cover a number of legal aspects here.
  5. Do we need to cater for an option for learners to "opt out" of peer evaluation at any time? My gut feeling is that this should be an option, as with the user autonomy to unsubscribe from an email list.
  6. I think that it's advisable for us to incorporate a self-evaluation option in the system. If a submission is flagged as a self-evaluation item, would it be appropriate to trigger the self-evaluation items immediately after the user has opted in to the evaluation?
  7. This mockup is a good example of a mix of activities. For example, we would not expect a value judgement on a personal learning reflection, so the evaluation items would be restricted to yes / no validations. Conversely, a number of activities in the OCL4Ed example would present opportunities for qualitative value judgements. I will make a selection of OCL4Ed activities and develop rubrics which we can use for prototyping.
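The automated URL checks suggested in point 3 could start from simple heuristics. A minimal sketch in Python - the function name, the editor-view markers, and the warning texts are all illustrative assumptions, not part of any existing OERu codebase:

```python
from urllib.parse import urlparse

def check_blog_post_url(url):
    """Heuristic checks for common blog-URL registration errors.

    Returns a list of warning strings; an empty list means no
    known problem was detected.
    """
    warnings = []
    parsed = urlparse(url)

    # A bare homepage usually has an empty or root path.
    if parsed.path in ("", "/"):
        warnings.append("This looks like a blog homepage, not an individual post.")

    # Links to the editor view rather than the published post.
    editor_markers = ("wp-admin", "post.php", "action=edit", "/edit")
    if any(marker in url for marker in editor_markers):
        warnings.append("This looks like an editor view, not the published post.")

    if parsed.scheme not in ("http", "https"):
        warnings.append("The URL should start with http:// or https://.")

    return warnings
```

Any warnings could be shown to the learner before the "Opt in for evaluation" button becomes active, catching the homepage and edit-view mistakes mentioned above.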

Screen listing assigned evaluations

  1. Would it be useful to include the due date and time for the evaluations -- with automated conversion to the local time zone? For example: "Please evaluate the following submissions for the 2nd learning reflection by 15 June 2014 at <with widget for local time zone>."
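The local time zone conversion could work roughly as follows - a sketch assuming the server stores deadlines in UTC and knows each learner's IANA time zone name (e.g. from a profile setting); the function name and display format are illustrative:

```python
from datetime import datetime, timezone
from zoneinfo import ZoneInfo  # standard library since Python 3.9

def format_due_date(due_utc, user_tz_name):
    """Convert a UTC deadline to the learner's local time zone for display."""
    local = due_utc.astimezone(ZoneInfo(user_tz_name))
    return local.strftime("%d %B %Y at %H:%M (%Z)")

# Example: a deadline of 15 June 2014, 12:00 UTC shown to a learner
# in New Zealand comes out as the next day, local standard time.
due = datetime(2014, 6, 15, 12, 0, tzinfo=timezone.utc)
print(format_due_date(due, "Pacific/Auckland"))  # 16 June 2014 at 00:00 (NZST)
```

Storing a single UTC deadline and converting only at display time avoids ambiguity when learners span many time zones.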

Evaluation submission screen

  1. This is a proverbial debate among educators in the wiki community. Users with limited experience in browsing online content frequently get lost (e.g. they forget that the browser has a back button ;-)) Some learning designers would recommend that the link to the post being evaluated should open in a new window or tab so that the evaluation form remains open. Something for us to think about - I'm not sure what the best solution is.
  2. Regarding the validation item (Is the content of the post in response to the activity concerned):
    • I would consider adding an optional comment field for the evaluator to indicate the reason why it is not related, e.g. an incorrect URL.
    • If the user answers No to the first validation question, the remaining items are redundant. Should we consider a behaviour where the remaining evaluation items only appear after the post is validated as a response to the activity?
  3. In cases which are supported with a rubric, we need to provide a link to the evaluation rubric which specifies the criteria.
  4. To improve completion of all the fields for a "valid record", I would suggest making the evaluation items required fields (with comments optional). If not -- we will need to think about what constitutes a valid evaluation.

Do we need Mockup screens for:

  1. The learner's view of the ratings and comments from peers
    • An issue which will require considerable debate and discussion is the transparency of ratings. That is: should the evaluee see the identity of the evaluator? Should submissions remain anonymous, with the user receiving only aggregated scores? And so on. I think the best strategy is to prototype and get feedback from real learners using an online survey.
    • Do we provide the capability for the evaluee to flag "spam" comments? This has been a big problem with some of the xMOOC experiences.

This is progressing well. The mockups have certainly helped us think about prospective issues and hopefully we can get a few decisions before delving into the code - or at least designing the code engine in a way that is flexible enough to tweak as we gain a better understanding of peer evaluation in the OERu context.

Mackiwg (talk) 11:11, 20 May 2014

Some thoughts about Wayne's feedback...

  • Provide the option of a "Review-by date" - good idea to have some time structure. It helps to see several evaluations at one time for comparison.
  • Learner's view - The learner should see each complete evaluation. Perhaps the identity of the evaluator should be at the discretion of the evaluator or the instructor.
  • Some courses provide links to all learners' submissions and the cumulative score they received. Good opportunity to view what is considered "best."

Evaluation assignment - jumping ahead to who gets which submissions to evaluate. This is a really interesting problem: how to ensure everyone gets "good" evaluations. One of my courses made everyone do a practice evaluation before they gave you real ones to do.

Vtaylor (talk) 22:51, 20 May 2014

Thank you for your thoughts. The evaluation assignment is a really interesting problem. One way to do it is random assignment, which is straightforward and solves most of the challenges in MOOCs. An alternative approach is a karma system: instead of assigning evaluations, learners are free to evaluate whomever they want. An algorithm would then award, say, more "karma points" for the first few evaluations they complete (a simple mockup could show submissions to evaluate selected on this basis). They would also get more points for evaluating submissions which have the fewest evaluations so far. There could be a mechanism whereby their karma points are reduced if they keep evaluating the same person across different activities, suggesting that they are evaluating their friends. This approach would need serious thought, but it would help not only in judging the credibility of the evaluations but also in calibrating them and making them more accurate. Further, if these points could be carried forward across courses it would open up more opportunities.
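The karma incentives described above could be scored along these lines - a minimal sketch in which every point value, bonus threshold, and penalty is an illustrative assumption, not a proposed final design:

```python
def karma_for_evaluation(evaluator, submission, history, eval_counts):
    """Compute karma points awarded for one evaluation.

    evaluator   -- evaluator's user id
    submission  -- (author_id, activity_id) pair being evaluated
    history     -- list of (evaluator_id, author_id) pairs already recorded
    eval_counts -- dict mapping submission -> number of evaluations so far
    """
    author, _activity = submission
    points = 10  # base award (illustrative value)

    # Bonus for an evaluator's first few evaluations, to get them started.
    done = sum(1 for e, _a in history if e == evaluator)
    if done < 3:
        points += 5

    # Bonus for picking submissions that have no evaluations yet.
    if eval_counts.get(submission, 0) == 0:
        points += 5

    # Penalty for repeatedly evaluating the same person (possible friends).
    repeats = sum(1 for e, a in history if e == evaluator and a == author)
    points -= 4 * repeats

    return max(points, 0)
```

The "select a submission to evaluate" screen could then rank candidate submissions by the points they would earn, steering evaluators toward under-evaluated work while discouraging friend-to-friend evaluation.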

Akash Agarwal (talk) 02:23, 21 May 2014

Thank you for the detailed feedback. Some of my personal thoughts about your comments and questions, in the same order:

Mockup for submitting / registering an assignment:

  1. I will keep this in mind.
  2. Yes, or we could redirect them to the login page and then back once they are logged in.
  3. Pre-populating the URLs would be possible if unique tags are used for each user and activity. A simple way to avoid wrong URLs could be to show the content of the given URL and ask the user whether it is correct after they check the opt-in for evaluation button.
  4. Yes, I had thought of it as a page that would mainly explain what peer evaluation is, including its requirements and how it works. It will also need to contain the Terms of Service and the associated legal aspects, which I'm not very familiar with.
  5. I agree that there should be this option in case the learner later decides that they do not want to spend time on evaluation or do not want to be judged.
  6. I did not include self-evaluation in the UI mockups. We could ask learners to evaluate their own activity in addition to evaluating their peers, and then use it both for the grade and to improve the "karma system".
  7. Some prototype activities would be very useful for the project. There will be a lot to learn and iterate on from using them in some activities of the OCL4Ed course.
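On the idea in point 3 of showing the content of the given URL for confirmation: one lightweight starting point is to extract just the post title from the fetched page for a quick preview. A sketch using the standard-library HTML parser (the fetching step is omitted; class and function names are illustrative):

```python
from html.parser import HTMLParser

class TitleExtractor(HTMLParser):
    """Pull the <title> text out of an HTML document for a quick preview."""

    def __init__(self):
        super().__init__()
        self._in_title = False
        self.title = ""

    def handle_starttag(self, tag, attrs):
        if tag == "title":
            self._in_title = True

    def handle_endtag(self, tag):
        if tag == "title":
            self._in_title = False

    def handle_data(self, data):
        if self._in_title:
            self.title += data

def extract_title(html):
    """Return the stripped <title> text of an HTML page, or '' if absent."""
    parser = TitleExtractor()
    parser.feed(html)
    return parser.title.strip()
```

Showing the extracted title next to a "Is this your post?" prompt would let learners spot a wrong URL (e.g. a homepage title instead of their post title) before opting in.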

Screen listing assigned evaluations: Yes, I think displaying the due dates and times in the user's time zone would help learners in completing the evaluations in time.

Evaluation submission screen:

  1. I think we could simply open the link in a new tab and perhaps display a pop-up message so that they do not get lost.
  2. If the user checks No, the remaining items should not appear and instead we could show a comment box asking the reason.
  3. We could provide a link, or if it is short enough we could display it along with the questions itself.
  4. I agree.

Do we need Mockup screens for:

  • Yes, we need to give some thought to the transparency of ratings. I am not sure whether we should show the evaluator's identity, but we do need to show the detailed evaluation in order to get feedback on it and for the evaluee to flag it in case they disagree with it. I agree that prototyping and then asking real learners would be the best strategy.
  • Yes, I agree that we should provide it.

In my opinion, the best people to answer and debate some of these and other aspects of peer evaluation would be OERu educators and course moderators/supervisors who have experience with MOOCs or plan to conduct some in the future. Responses from more OERu educators on these aspects would be extremely valuable to the project.

Akash Agarwal (talk) 01:38, 21 May 2014