WikiEducator talk:Quality Assurance and Review
Hi All,
In this thread, referring to "multiparty collaboration", http://www.wikieducator.org/WikiEducator:Quality_Assurance_and_Review/Featured_collaboration#Criteria
it says
- B) Multi-party collaboration measured by
- i) more than one author contributing to the learning resource.
- ii) editing and discussion history toward meeting the featured learning resource criteria
Referring to Point (i) more than one author contributing to the learning resource, how are you planning to assess this?
- Is it collaboration on the wiki, or off-wiki too?
- Is it collaboration in the discussion group, which then spurs a project team into joining up and developing the resource?
I'm making these points, because I see collaboration taking many forms, and NOT only on the wiki.
Randy Fisher 17:47, 17 February 2009 (UTC)
I have completed all five of the featured works types; some contribution / review would be most appreciated...
Do we need people to indicate the type of volunteer they want to be?
I am involved with this WE QA initiative and we had a request for volunteers that was quickly and completely subscribed. Now that we are two weeks into it, I have posted a couple of questions and none of the volunteers have engaged in discussion or added content. So maybe when we make a call for volunteers we should ask for volunteers of a certain type. I suggest the following:
- Contributor - someone who is active in moving the project forward
- Reviewer - someone who watches and reads the project and occasionally adds suggestions, makes comments, etc...
- Approver - someone who has some credibility in the project's subject area and endorses the project deliverables
So now when we make a call for volunteers we can be more specific and people can indicate the type of volunteer they want to be. As an example, with this QA project we could make the following call for volunteers:
Hi Peter
Wow -- it's inspiring to work with dedicated WikiEducators. I know that one or two of the volunteers in the group are offshore at the moment, and may not have had a chance to respond yet.
I do like the concept of categories of volunteers. Earlier in the WE project we started formulating the concept of roles in the community. Unfortunately this has not received much attention, due to workloads etc. I think we should try to combine these QA and development roles with those ideas.
The basic concept was to recognize that there are many different roles in our community and alternative paths for collaboration. The idea was to:
- describe the roles
- develop a series of User templates where WikiEducators could identify their preferred roles
- to develop support materials showing people how to use these templates and provide ideas for contributing to our community
- the templates would automatically contain categories to help people find each other, for example a category page for all the learning designers in the community.
Perhaps we need to think about revitalizing the Roles concept and getting the work done.
Wayne,
Give me a few days and I will extend these role descriptions to include this concept of level of engagement. I believe it is a good idea: if someone says they want to be a multimedia developer within WE, they also define how involved they want to be. So you could be a multimedia developer who is a full contributor to one WE project and a multimedia reviewer for another. This way we can set a project's expectations among all the members as to how much each member wants to be involved...
Before assuring quality, one has to define quality. In a static environment, for example a printed book, quality is only as good as the final product.
With a new concept like WE, quality will improve automatically, as the final product will not be static.
From one point of view it may be more important to protect the original work and allow further changes and contributions to enhance and/or facilitate knowledge.
This unique feature of adding and changing may be the key to the success of WE. For some students to understand the process and see how certain portions were added is a natural way of learning, like in a documentary.
A certain way of presenting a subject may be easily understood by some and not others. Therefore for me quality will be as good as the archiving process in a medium such as WE.
Nadia,
I agree with all that you say here. I think the quality we are looking for is the depth of learning or the facilitation of knowledge acquisition. And how do we protect the original work yet also allow it to be enhanced? I believe it is important to recognize that different groups communicate and learn differently. I believe that different groups would also see quality differently... We need to have our quality initiative set up in such a way that it allows creative freedom and encourages contextual learning and a contextual definition of quality. There are aspects of the MediaWiki software that protect the versions of work. I also believe that the idea of having a featured resource and a featured reuse is a good place to start. I believe this discussion thread speaks to the dissonance between making no assumptions about what quality should look like in WE and the requirement of having a quality story within WE.
I believe if we have a number of good (non-restraining) context neutral featured items (learning resource, reuse material, institution and project) we would be starting QA within WE... How the community takes over from there would be an interesting thing to participate in...
Peter, Nadia
I think that we have the principles for the development of our QA and review process well defined, and we will see to it that we adhere to the principles. It's our QA measure <smile>.
In addition to the thoughts expressed here, I think WE should also have a mechanism for some kind of optional peer review. This can be enabled with technologies like [Revisions]. Therefore -- without the need to lock down pages -- users can opt for the latest peer-reviewed version or the latest unreviewed edits. Some courses will use this feature, others won't. I suspect that we might consider materials of different "levels" or phases in the QA system, for example:
- Draft phase - single author. Does not want QA feedback, so no templates like "needs citation" are placed on the page.
- Draft phase - collaborative authors. Does not want evaluative feedback, so no templates like "needs citation" are placed on the page
- 2nd draft phase - Opening up the resource for community QA input and feedback which then gives "permission" for all sorts of QA related templates to be inserted on the pages.
- Optional nomination for featured resource and featured reuse and corresponding community processes which take a decision on this
- Peer reviewed content -- where some trusted group expresses a value judgment.
This will need a lot of work, thinking and refinements -- but gives some idea of a typology within our QA and review thinking.
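As a very rough sketch, the phase typology above could be recorded as plain data so that tools (or editors) could check which phases permit QA templates on a page. This is an illustrative Python example; the phase names and the rule about which phases allow QA templates are assumptions for the sketch, not agreed WikiEducator policy:

```python
# Illustrative sketch of the QA phase typology above.  The phase names and
# the rule that only later phases permit QA templates (e.g. "needs citation")
# are assumptions for this example, not agreed WikiEducator policy.

QA_PHASES = [
    # (phase, collaborative authorship?, QA templates allowed on the page?)
    ("draft-single-author", False, False),
    ("draft-collaborative", True, False),
    ("second-draft-community-qa", True, True),
    ("featured-nomination", True, True),
    ("peer-reviewed", True, True),
]

def qa_templates_allowed(phase):
    """True if QA feedback templates may be placed on a page in this phase."""
    for name, _collaborative, allowed in QA_PHASES:
        if name == phase:
            return allowed
    raise ValueError("unknown phase: %r" % phase)
```

The point of the data-driven form is that the typology itself stays editable by the community while any supporting tooling just reads the table.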
Do we need to use inspirational symbols, templates, slogans, which are content related and culture-inclusive to heighten the value awareness and quality conviction?
Yes, I think that symbols, templates, etc... that are content related and culture-inclusive are a very important aspect of quality within OER. I believe that some of the principles and practices identified within the maturity model should "evaluate" these contextual elements.
Thank you Peter for your engaging response. Please provide the link for "maturity model", if available, or elucidate the concept a bit. You may also check out my posts in the COLLaGE Community regarding culture. I look forward to your observations.
Hi Peter,
Do you have any examples to illustrate how this might work? I'm trying to get my head around the differences between localisation versus culture-inclusivity as a criterion for quality.
Education is contextually bounded and culture is a determinant for educationally relevant materials. I'm not sure that it's practicable to develop indicators or criteria for culture-inclusive materials that are universal. For example -- the visual design of websites in South East Asia is very different from Western web design. Design elements that are considered good practice in, for example, India would be considered poor design in the UK.
In this regard WE is really a pioneering project. In the case of an encyclopedia article, authors are working to develop an objective article on some concept or topic, adhering to principles like NPOV. When it comes to culture -- I'm not sure that it's possible to implement the cultural equivalent of NPOV -- if you know what I mean ...:-)
Let's say, for instance, that Uganda develops some philosophy materials on the meaning of truth; I would imagine that Asian philosophy might have a different take on the topic. I do think that there is tremendous value for indigenous communities to express their own world views in their respective educational materials --- especially for other learners around the world to learn about different cultures.
mmmm -- I'm not sure how we would integrate this into a QA framework.
Hi,
Please expand NPOV. It would help newbies to think faster..
Thank you in advance.
Dr.Ramakrishnan
Wayne,
I agree. I do not think you could find universally held indicators that are also culturally sensitive when defining quality. This is why a maturity model is so effective: it is subscriptive, not prescriptive. It is not how we measure (or indicate) quality, it is that we are measuring (or indicating) quality. This is maturity. I believe this discussion would fall into the Development Process category of the e-Learning Maturity Model. There are two processes that I believe relate directly to this discussion of culture and context; they are:
Development: Processes surrounding the creation and maintenance of e-learning resources
- D2. Course development, design and delivery are guided by e-learning procedures and standards
- D3. An explicit plan links e-learning technology, pedagogy and content used in courses
As you can see, neither D2 nor D3 describes how something should be done; they just describe that it should be done. And the extent to which something is done is how maturity is determined. So as the context / culture changes, so do the procedures, standards, technology, pedagogy and content. From here we would have to get into the dimensions, but I believe that is another discussion... I hope my view of maturity models applied to culture / context made sense...
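As a toy illustration of "measuring that, not how": one could rate each practice (such as D2 and D3) on an adequacy scale and then aggregate. The Python sketch below assumes a 0-3 adequacy scale and rates a process by its weakest practice; both are simplifying assumptions for illustration, not the eMM's actual assessment method:

```python
# Toy sketch: rate practices (e.g. D2, D3) for adequacy, then aggregate.
# The 0-3 scale and the min() aggregation rule are assumptions made for
# this illustration, not the eMM's real methodology.

ADEQUACY = {
    0: "not adequate",
    1: "partially adequate",
    2: "largely adequate",
    3: "fully adequate",
}

def process_capability(practice_ratings):
    """Conservative aggregate: a process is only as mature as its
    weakest practice."""
    return min(practice_ratings.values())

ratings = {"D2": 2, "D3": 1}   # made-up ratings for illustration
label = ADEQUACY[process_capability(ratings)]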
Peter,
I'm clear about capability models and how they work. That said, I'm not sure from a pedagogical point of view that you could have:
- D3a. Course materials are designed for culture-inclusivity
My reason is that education is culturally bounded and by definition cannot be inclusive of all cultures.
So perhaps the capability process is:
- D3b. Localisation procedures are guided by the principles of culture-sensitive adaptation
I have no problem with the capability model as a component of our total QA model. This will be an impressive addition to WikiEducator's suite of solutions.
We can measure the capability of our community or sub-sets of projects. But I'm not sure how you measure the capability of content -- if you see what I mean. What I'm thinking here is that in addition to the capability maturity model, we need a mechanism and supporting processes to say that Article X meets the quality standard -- almost like an ISO quality standard, however we choose to define it.
I was about to start adding to / editing the Featured Learning Resource page and I was thinking we should add some academic rigor to this QA work by providing as many references as we can. I believe this is appropriate, as Wayne suggests that transparency is important. I also think referencing a lot of the great research about educational quality, wisdom of the masses, history flow, etc. would add credibility when we are telling the world about our processes -- in particular, to the academics who ask about quality. I was thinking we should create a QA references page, where each reference has an anchor, so we can link to it directly from the Featured criteria page... What do others think about the importance of referencing in this QA initiative?
Absolutely!!! I'm all for rigour in our approaches --- let's cite away.
We must practice what we preach!!
User:Kruhly has been working on implementing citation templates for using the Harvard method of citation in WE.
We need to follow up on:
- How far Rob got with the citation work -- I seem to recall that there was some server setting that needed attention. I think this was posted on the Techie discussion list -- I'll look into this when I get today's work up to date.
- Developing tutorials on how to reference in WikiEd -- perhaps this should be combined with work on a style guide.
This session looks interesting and relevant to this initiative: Assessing the Quality of K-12 Online Content. If you don't have time to view it, the two requested readings include some good references. I've also added a references subsection on this project's main page: http://wikieducator.org/WikiEducator:Quality_Assurance_and_Review#References. If you also have some good references, please add them. I believe it would be good for us all to be reading from the same pages as we further develop the WE QA approach.
I'd rather the Featured Teaching Resource be renamed to either:
- Featured Learning Resource
- Featured Educational Resource
I believe this is more reflective of the self directed nature of WE and OER in general... What do others think?
To expedite this initiative, should we simply set a quality bar by describing the quality criteria of a featured article and a featured reuse? We would then have a level of quality for contributors to aspire to, as we would have accompanying featured_article_criteria (http://en.wikipedia.org/wiki/Wikipedia:Featured_article_criteria) and featured_reuse_criteria. We would have the anchor of these two criteria to drive this initiative without getting bogged down with analysis and the process of discussing quality... Refer to this Google Group discussion thread if you want more on this idea: http://groups.google.com/group/wikieducator/browse_thread/thread/8196f708f9373881
Hi Peter, I think this is an excellent idea.
It's a starting point for us to think about the many facets of quality. What I also like about this approach is that it would be optional -- a project is nominated and then evaluated by the community. In this way we avoid alienating newbies and as you say give the community something to aspire to.
In addition to the featured teaching and reuse resources, I think we should also add categories for the featured institution -- i.e. a reward and incentive for institutions who adopt WE in a substantive way -- and also for innovative projects that contribute to the sustainable development of OERs.
I've started a couple of sub-pages for each of these categories so that we can work on criteria, processes and supporting tools for the featured candidates.
How about we start by asking for nominations for featured works and featured reused works? I don't think we should set out with a quality grading at this point (if ever), but rather look to spotlight examples of work and include interviews with the key people involved (what they are trying to achieve, the challenges they faced, the successes, what they like about using WikiEducator, what they don't like). I don't think we need to follow Wikipedia's featured article model based on some level of quality. I think it would be more constructive at this stage to take nominations -- perhaps based on interest, innovation, numbers of people involved, numbers of people who second the nomination, etc. If we used this approach to simply build a diverse range of case studies, then I think a measurement tool for quality might emerge from that.
Hi Leigh --
I like the idea of an "under the spotlight" and/or case study feature which we should include as a prominent section on the front page. Good thinking. This is a great way, not only to feature projects --- but to get the human interest angle behind the scenes.
That said, as an education community we need to tackle the question of quality and how we will manage these processes without alienating those who are starting out in WE. The most frequent question I get when I speak around the world about WE is "What about quality?" At the moment, the best answer I can give is referrals to other wiki projects.
I would take tremendous pride in telling the world that we are doing a, b and c around quality processes. That the processes WE developed were developed openly and transparently. That we have learned from the benefit of hindsight from other projects like WP, WV and WB.
I'm not in any way suggesting that this is going to be easy -- but it is a nut we need to crack.
I'm hoping that we can use one of your courses as a test case for the processes we envisage -- this way we can iron out many of the challenges we're going to face.
What are your concerns about optional and self selected quality grading? This would help us in finding the right solutions.
Cheers
WE subscribes to four core values that guide our work. We believe:
- In the social inclusion and participation of all people in our networked society (Access to ICTs is a fundamental right of knowledge citizens - not an excuse for using old technologies).
- In the freedoms of all educators to teach with the technologies and contents of their choice, hence our commitment to Free/Libre and Open Source technology tools and free content.
- That educational content is unique - and by working together we can improve the technologies we use as well as the reusability of digital learning resources.
- In a forward-looking disposition working together to find appropriate and sustainable solutions for e-learning futures.
I'm wondering whether a commitment to developing high-quality OER should be specified as a core community value -- or is this subsumed in the uniqueness of educational learning materials?
I'd be keen to hear what the community thinks.
Do we have the right assumptions and guiding principles for developing a QA and review policy?
This is a new thread inviting the community to provide feedback on the assumptions and guiding principles we're using to inform the development of a QA and review policy for WE.
Cheers
It might be interesting to tie the quality measure/ratings to each article's history page rather than the article page itself... then it would be obvious what changes had taken place since which reviews.
Hi Jim,
I like the notion of having some measure of quality based on the number of "eyes" -- measured by unique editors -- that have contributed to an article page. You're right: from a technical perspective this is a measure of the history page rather than the article itself.
I've not looked at the Flagged Revisions extension in detail yet --- so not sure how this might relate to the history page itself.
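The "many eyes" measure could be sketched as a count of distinct named contributors in a page's revision history. In practice the revision list would come from MediaWiki's API (action=query, prop=revisions, rvprop=user); the fetch is omitted here and the sample data is made up:

```python
# Sketch of the "many eyes" measure: count distinct named contributors in a
# page's revision history.  In MediaWiki API output, anonymous (IP) edits
# carry an "anon" flag; this sketch skips them.

def unique_editors(revisions):
    """Count distinct usernames across revisions, excluding anonymous edits."""
    return len({rev["user"] for rev in revisions if "anon" not in rev})

sample = [                      # made-up revision data for illustration
    {"user": "Alice"},
    {"user": "Bob"},
    {"user": "Alice"},
    {"user": "203.0.113.5", "anon": ""},   # anonymous edit, excluded
]
eyes = unique_editors(sample)
```

Whether such a count is taken from the full history or only from revisions since the last review would tie in with Jim's suggestion of attaching the measure to the history page.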
Cheers
A fair bit of research ties quality back to a healthy workplace. Without a healthy working environment how do people produce quality? I believe this discussion also needs to include the concept of a healthy workplace within a collaborative wiki environment.
Hi Peter,
mmmm -- I like the concept of a "healthy" wiki environment. Over and above our community values and principles already stated -- have you given any thought to the characteristics of a healthy wiki environment?
What are the indicators of a healthy wiki environment? It would be interesting to see people's thoughts on this.
Cheers
Wayne,
I agree, it would be good to know what a healthy wiki environment is, particularly in the context of how it ties to people creating high-quality OER content. What are the characteristics of a healthy wiki environment? One where the materials are of the highest, exemplary quality. If the resources aren't exemplary and CC-BY-SA, why would people reuse them? As an educator I seek out exemplary OER; otherwise I create it myself. To a certain degree I would think the health of OER is measured by its reuse. If it isn't being reused, maybe it is a quality issue.
I think this question should be put out to the google group...
Peter
Thanks Peter,
I now understand what you mean by a healthy wiki environment. Makes good sense to me.
I'll post something on the main list inviting people to comment. Given the substantive nature of these discussions, which will ultimately lead to the development of a consensus policy, I think we should try to encourage members of the community list to post their thoughts in this forum, so we have a good record in one place dealing with the QA developments.
We can post regular updates on the main list with links to the specific questions.
What do you think?
Peter,
Your connection between capability maturity models and wikis is serendipitous. I'm planning to launch a big project in the Pacific region in our new budget year drawing on the eLearning Maturity Model. This model is available under a CC-BY-SA license.
- Do you think this model could be adapted/refined for the wiki environment?
- If so -- what sort of refinements would you envisage based on your research on quality in Wikis?
Thanks for the link to your research on quality in wikis :-)
Cheers.
Yes, I do think the eMM could be applied to wikis. I do think we should review the whole model to see how it could be used. I believe the eMM project is focused upon the maturity of existing geographically based institutions; the OER-based wiki is somewhat different, so I believe a review would be good... I think the refinements would be aligned with what the Wikimedia projects have been doing with their quality initiatives, where some of the quality assessment is automated based on the reputation of the author, frequency of edits, revisions, etc. I believe what we will come up with is something of a hybrid of maturity models, the eMM, Wikimedia efforts, history flow, information quality, education quality, etc... I think the team who takes this on would have to do a fair bit of background reading so we can create a lively and creative discussion.
Be Well...
Agreed --
We need to refine and build a hybrid maturity model that suits our needs and takes into account the unique characteristics of a self-organizing system. WE also need to take into account the differences between WMF projects and our initiative. At a crude level, developing an encyclopedia article is different from developing teaching materials that are intended for use in both formal and informal educational settings.
See my comments earlier about WE being an "organisation".
Peter, I appreciate your inputs and expertise -- I have a sense that WE may become a world leader in maturity models for open OER authoring environments <smile>
The E-Learning Maturity Model (eMM) provides a means by which institutions can assess and compare their capability to sustainably develop, deploy and support e-learning.
I think WikiEducator needs to think about its similarities to, and differences from, "traditional" institutions that are moving into the eLearning arena; I think the latter are more the focus of the eMM. This does not mean that we can't take all that is good from the eMM and refine it for WikiEducator.
My $0.02
Peter
Just had a thought - regarding the starring of degrees of completion, of a given educational document...
In much the same way that there are degrees of accomplishment in the WikiEd certification process, could there not be something similar regarding the denotation of quality, or degrees of completeness? In the WikiEd / L4C world, users are clamoring for their certification -- amazing, isn't it? Could we not do something in this way, so that folks have a similar source of pride in completing, and a sense of being a member of our community?
Just a thought...
- Randy
Hi Randy,
Yes I do think that there are distinctive phases or degrees of completion. I suspect that these will correspond with the generic phases of the learning design process, for example:
- Design plan or blueprint for development completed, which could include, for example:
- A brief analysis of the intended target audience
- Teaching objectives specified
- Planned structure of the resource completed
- ContentInfobox inserted on the main page
- Request or invitation for feedback on the resource posted
- Resource completed and ready for QA review
- All content completed in accordance with the design plan above (Content Design)
- Navigation templates inserted (Visual Design)
- Images, figures and graphs uploaded (Visual Design)
- Editorial comments and suggestions from the community incorporated into the resource
- QA review completed
- Content validity / reliability reviewed
- Educational Design (i.e. Visual elements, pedagogy etc reviewed)
This is just a tentative example / suggestion to show how we could link generic phases with concrete outputs.
I agree --- the pride in completing a resource (or requesting certification for wiki skills attained) is a powerful motivator and also a vehicle for individuals to be recognized in the community.
Appreciate the inputs ... thanks!
Randy,
I completely agree with the degrees of completion. This is why I suggested using a maturity model: it provides a comprehensive set of criteria (or practices), and how well they are met shows maturity. What I find most appealing in a maturity model is how it is more subscriptive than prescriptive. It isn't how you do something, it is more that you are doing something. An example taken from the eMM is "L1. Learning objectives guide the design and implementation of courses", where how available the objectives are determines the level of maturity... So in the end we can review a lesson, module or whole course and give it a maturity level depending on how well it implements the practices. Again, it doesn't say how to implement, it just says that you should if you want to achieve a certain level of maturity. Therefore any lesson, module or course (regardless of where it is within its development) already has a level of maturity...
I believe the eMM2 is a good starting point, though I also believe it would need a complete review because there are differences between wiki-based OER and eLearning emerging from a "traditional" institution.
All good...
Peter
Hi Peter & Randy
Yeah -- we said in our guiding principles that quality is an elusive and complex topic <smile>.
What I like about maturity models is:
- the underpinning logic that an organization or community's quality achievements are closely linked with the capability of the organization/community to implement quality processes;
- the notion of maturation, so for example WE can compare how our community is maturing in comparison to previous years or in comparison with other OER communities;
- maturity models can be used to visualize capability in core processes (those that enable organizations/institutions to scale up operations so that quality is promoted across the project rather than in pockets of innovation). This visualization will help the community determine where we need to expend effort on becoming "more mature"
Specifically, I think eMM is an excellent starting point. Apart from its solid foundations -- the model and its documentation are available under a CC-BY-SA license which means we can adapt, refine and reuse the model without restriction.
Two thoughts:
- In many respects, WE is an organization (albeit a social self-organizing system). So while WE is not an organization in the physical sense, I do think WE can assess its own capability and maturity as an entity or community. Of course the eMM would need to be refined and adapted to take into account the unique features of a social network and self-organizing system.
- We need to think carefully about the distinction between a capability maturity model and the actual processes of QA review. The processes and procedures we adopt for QA and review will mature over time. The maturity model is the vehicle for us to assess the maturation of our processes over time -- in other words, the maturity model doesn't express a value judgment on the quality of the materials.
In your view, how long should we take in developing a draft QA and review policy?
What is a reasonable time for our community to respond and engage in the discussions?
I believe it would take about four months to produce a draft. I also believe a review policy should be replaced by a maturity model. To make the four-month deadline, I believe a small group of people (3-5) would have to be engaged every day or two, giving at least an hour or more for each engagement. I also believe the small group would have to set aside time to author the draft and to review a comprehensive set of research regarding quality in education, open efforts, maturity models and information quality.