|Thread title||Replies||Last modified|
|End of project review||1||09:35, 20 August 2014|
There is a lot of functionality here!
- needs more explanation of the overall ideas behind peer evaluation and the functionality provided
- start with some simpler examples first
- explain the actual coding notation: some lines look like comments but are actually control parameters; clarify the difference between questions and grading criteria
- code readability: where spacing matters, how blank lines are treated, and how actual comments are written
- the roll-up of levels is complex and needs further explanation
Is there more information about the results of the peer evaluations for the instructor?
Thanks for your feedback.
- I will add some simple examples, including screenshots of how the evaluation forms generated from those rubrics will look.
- I agree that the current rubric format makes it quite complicated for an instructor to write the wiki code.
- Also, supporting three different kinds of rubrics, each fully customizable, is a lot of functionality and warrants more detail. I will try to add a detailed explanation.
- I'll provide clear explanations of the coding notation used. For the next iteration of the tool, I will also create a special page that lets one generate the rubric code through a web-based interface of interactive forms.
Currently there is no interface for accessing the detailed results of peer evaluation in a consolidated manner, such as a dashboard. However, the *viewevaluation* tag allows one to view all the information related to individual evaluations.