Session 10
Contents
Finalizing Your Unit
Introduction
You have worked hard to write your first self-learning material unit. At this stage it is in first-draft form, and it should undergo several stages of polishing to make it a useful and effective unit. There are various ways of doing so, including editing, readability tests, and developmental testing. We shall discuss all of these here.
Readability Tests
While writing the unit, it is useful to assess the difficulty level of your writing to know whether you are addressing the lesson to the right target group. A simple way of gauging the difficulty level is to run a readability test using one of a variety of formulae. Some of these are discussed below:
Gunning Fog Index: Though the statistical analysis of a piece of text using the Gunning Fog Index or Modified Gunning Fog Index is often criticized, it is a good instrument to start with for gauging the tone and style of your writing. The Modified Fog Index helps us to measure the ‘reading age’ of your writing. This means that if the index score for a piece of writing is 16, we can conclude that a person aged sixteen or above can understand the text easily. The Modified Fog Index can be calculated as follows:
- Count exactly 100 words from a paragraph of your text.
- Underline the words that have 3 or more syllables (a syllable contains one of the vowel sounds a, e, i, o, u).
- Count the underlined words (A).
- Count the number of sentences.
- Work out the average number of words per sentence, rounded to the nearest whole number (B).
- Add A and B. (C).
- Multiply C by 4 (D).
- Divide D by 10 (E).
- Add 5 to E; the result is the Fog Index.
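The steps above can be sketched in Python. This is only a sketch: the syllable counter is an assumption of mine that approximates syllables as groups of consecutive vowels, which is crude but adequate for a rough index, and the function names are my own.

```python
import re

def count_syllables(word):
    # Assumption: approximate syllables as groups of consecutive vowels.
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def modified_fog_index(sample):
    """Apply the steps above to a roughly 100-word sample of text."""
    words = re.findall(r"[A-Za-z']+", sample)
    sentences = [s for s in re.split(r"[.!?]+", sample) if s.strip()]
    a = sum(1 for w in words if count_syllables(w) >= 3)  # A: words of 3+ syllables
    b = round(len(words) / len(sentences))                # B: avg words per sentence, rounded
    c = a + b                                             # C = A + B
    d = c * 4                                             # D = 4C
    e = d / 10                                            # E = D / 10
    return e + 5                                          # Fog Index = E + 5
```

In practice you would pass in a sample of exactly 100 words, as the steps require; the function tolerates any sample size.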
Applying the Fog Index formula to a portion of the text of this document, I found that the reading age of the document is 9 years!
Flesch Reading Ease: Another, better-known formula for calculating reading age is the Flesch Reading Ease score, which is calculated as follows:
RE = 206.835 - 0.846w - 1.015s, where w = average number of syllables per 100 words; and s = average number of words per sentence.
The higher the RE, the easier the text. Harley (1994) gives the following table as a standard (see Table 8).
RE Value | Description of Style | Required Reading Skill
---|---|---
90-100 | Very Easy | 5th Grade
80-90 | Easy | 6th Grade
70-80 | Fairly Easy | 7th Grade
60-70 | Standard | 8-9th Grade
50-60 | Fairly Difficult | 10-12th Grade
30-50 | Difficult | 13-16th Grade
0-30 | Very Difficult | College Graduation
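The Flesch formula can be computed in the same way. As a hedged sketch, this uses the published constant 206.835 and the same crude vowel-group syllable approximation as before; the function name is my own.

```python
import re

def count_syllables(word):
    # Assumption: approximate syllables as groups of consecutive vowels.
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def flesch_reading_ease(text):
    words = re.findall(r"[A-Za-z']+", text)
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    w = 100 * sum(count_syllables(x) for x in words) / len(words)  # syllables per 100 words
    s = len(words) / len(sentences)                                # avg words per sentence
    return 206.835 - 0.846 * w - 1.015 * s
```

A higher return value means easier text, which you can then look up in the table above.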
Cloze Test: Yet another simple test of readability is the Cloze test. In the Cloze test, every 5th or 7th word of a sample text is omitted, and the text is given to members of the target group, who read it and fill in the missing words. Readability is scored on the proportion of correctly predicted words: a score of 60 per cent or more is considered satisfactory for comprehension, 40-60 per cent indicates partial comprehension, and less than 40 per cent indicates inadequate comprehension.
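Preparing and scoring a Cloze passage can be sketched as follows. The function names are my own; the 60 and 40 per cent thresholds are those given above.

```python
def make_cloze(text, n=5):
    """Blank out every n-th word; return the gapped text and the answer key."""
    words = text.split()
    answers = []
    for i in range(n - 1, len(words), n):
        answers.append(words[i])
        words[i] = "_____"
    return " ".join(words), answers

def cloze_score(answers, responses):
    """Percentage of gaps filled with the exact missing word."""
    hits = sum(1 for a, r in zip(answers, responses) if a.lower() == r.lower())
    return 100 * hits / len(answers)

def interpret(score):
    if score >= 60:
        return "satisfactory comprehension"
    if score >= 40:
        return "partial comprehension"
    return "inadequate comprehension"
```

Note that exact-match scoring is an assumption; in practice, scorers sometimes accept close synonyms as well.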
Rowntree’s Complexity Quotient: Derek Rowntree (1996) suggests the following complexity quotient calculation for determining the readability of a text:
- Count the number of complete sentences you have on a page (A)
- Count the number of “long” words (three or more syllables) (B)
- Divide B by A to get the complexity quotient.
If the score exceeds 3, your prose is more difficult than that of most novelists. On one page of this document, the complexity quotient is 7.3. The different readability tests vary in their results, but together they give us a fair idea of our writing style in English.
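Rowntree's quotient is the simplest of these to automate. A minimal sketch, again assuming the crude vowel-group syllable counter from the earlier examples:

```python
import re

def count_syllables(word):
    # Assumption: approximate syllables as groups of consecutive vowels.
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def complexity_quotient(page_text):
    """Rowntree's quotient: long words (B) divided by complete sentences (A)."""
    sentences = [s for s in re.split(r"[.!?]+", page_text) if s.strip()]  # A
    words = re.findall(r"[A-Za-z']+", page_text)
    long_words = sum(1 for w in words if count_syllables(w) >= 3)         # B
    return long_words / len(sentences)                                    # B / A
```

A result above 3 would, by the rule of thumb above, suggest the page is harder than most novels.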
Developmental Testing
Developmental testing is the process of trying out the material you have developed. Normally, this is done at two points: first, at the end of writing, and second, before the launch of the course / programme. The results of developmental testing help us to improve the quality of the learning materials. Depending on the timing, there are two types of developmental testing:
Face-to-face Tryout: Here you need to find 4-5 learners who represent the target group for whom you have written the unit, and ask them to read it and give you feedback on its weaknesses and difficult areas. It is important to record the views of these learners and revise the unit to make it more useful. This exercise is easy to conduct and cost-effective. The learners may be given the unit in your presence or may be asked to work through it under real conditions. Participation should be voluntary to elicit useful results, and the purpose should be clearly explained to the participants. In dual-mode institutions, this is much easier to do than in open universities.
Field Trial: This is a more elaborate process, undertaken after the final material is ready to go to press. Before the material is printed in large quantities, a group of 25-30 volunteer learners is identified and supplied with the learning materials, along with a comprehensive questionnaire to complete after going through the material. In this case the students study under their normal learning conditions and should be asked to take their time to respond without any pressure. The questionnaire may include the following items:
- Relevance of the objectives and contents
- Clarity of objectives and contents
- Adequacy of discussions
- Availability of personal guidance
- Relation of theory to practice
- Use of activities / practical skills
- Appropriateness of SAQs
- Usefulness of feedback to SAQs
- Language tone and style
- Design and format of the material.
Editing
Usually your lesson should go through three different types of editing before it is finalized for printing: content, format, and language editing. The role of the content editor is to ensure that no information presented in the unit is wrong. He/She identifies gaps in the unit, deletes irrelevant content, and adds relevant content to help the learner understand it. The format editor is an instructional designer who plays the role of a surrogate learner and edits the unit to make it more useful (read: learnable) by adding or editing appropriate learning devices. The role of the language editor is to ensure that the presentation is readable and free of grammatical errors.
“Good editing and design can work wonders to sharpen text, and bring out basic design structure that the author has managed to obscure, but if the basic material is hopelessly wrong for its purpose, no amount of skilled interpretation and re-organization will rectify it. The author’s responsibility is thus the heaviest one, and too often authors do not rise to it, but sink ingloriously beneath it. If we can find out why authors fail, we are on the right path to being able to do something to help them to succeed. If we can do that, it should make the work of those who produce their text more rewarding, the users will employ their time more profitably and the business will be great deal more economic than it is at present” (Orna, 1985).
Orna (1985) lists four main reasons why authors mess up their writing:
- Authors can be so immersed in the subject that they cannot empathize with learners who know less. They assume a lot, leading to logical jumps in their writing.
- Other authors have gaps and confusion in their own understanding of the subject, and when they attempt to write, this becomes obvious: their lack of authority is signaled through confused and illogical sentence construction.
- Lack of understanding of the target group and objectives also becomes a hindrance to good writing.
- Lack of professional skills in writing. Such authors, though professionals themselves, attach little importance to writing and communicating their knowledge at lower levels (thinking it to be a low-level activity!).
Thus, Orna (1985) places the heaviest responsibility on the author, saying: “if the author does not get it right, no one else can put it right”.