As discussed in the paper, the success or failure of a competency-based assessment depends on the quality of the assessment methods employed and on the methods of monitoring student progress and recording competence achieved. There are three fundamental questions which must be answered before valid and reliable assessments can be developed.
- What forms or types of evidence should be sought?
- How much evidence is required?
- Where should the evidence be provided? (Or who has the responsibility for making the judgements?)
Answers to these questions determine the shape and size of the assessment.
The assessment strategies outlined in this paper take different approaches to these questions. They answer the three fundamental questions differently, base their judgements about competence on different forms and amounts of evidence, and adopt different procedures for obtaining that evidence. They also rest on different views of competent performance and face different practical problems in obtaining the evidence required.
Assessment based on samples of performance
One frequently used strategy is assessment based on samples of performance. With this strategy the major evidence of competence is derived from samples of performance on specially arranged assessment events such as practical tests, exercises and simulations. These are generally supported by what is referred to as "supplementary evidence", derived from written and oral questions and multiple-choice tests. The practical assessments are designed to measure the technical or performance aspects of competence, while the written and oral tests are usually designed to measure underpinning knowledge and understanding.
This approach to assessment requires that criteria on which judgements about competence are based are worked out for each assessment event. Assessments based on these criteria are generally made on a pass/fail basis. Learners are generally assessed individually when they are ready and judged as 'competent' or 'not competent'. Records of progressive achievement are kept as individuals re-attempt assessment events (multiple attempts are usually allowed) until competence is achieved.
A common challenge with this assessment strategy is a tendency to fragment the assessment of competence and to assess many separate elements at the expense of more holistic assessment. This can be overcome by drawing a number of elements together into integrated tasks and assessments.
Another challenge can arise from the complex and time-consuming recording of learner progress required to keep track of competence achieved and not achieved, and of the number of successful and unsuccessful attempts, for all learners and all elements of competence. This is particularly challenging if record keeping is manual. The implementation of learning management systems (LMSs) or computer-managed learning and assessment has made this process more efficient and cost-effective while providing adequate records.
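The record keeping described above can be sketched as a simple data structure. This is a minimal illustration only; the class and field names are assumptions for the sketch, not part of any particular LMS.

```python
from dataclasses import dataclass, field

@dataclass
class ElementRecord:
    """Hypothetical record of one learner's attempts at one element of competence."""
    element: str
    attempts: int = 0
    achieved: bool = False

    def record_attempt(self, passed: bool) -> None:
        # Multiple attempts are allowed; competence is recorded once achieved.
        self.attempts += 1
        if passed:
            self.achieved = True

@dataclass
class LearnerRecord:
    """Hypothetical per-learner record across all elements of competence."""
    learner: str
    elements: dict = field(default_factory=dict)

    def record(self, element: str, passed: bool) -> None:
        rec = self.elements.setdefault(element, ElementRecord(element))
        rec.record_attempt(passed)

    def outstanding(self) -> list:
        # Elements not yet judged 'competent'.
        return [name for name, rec in self.elements.items() if not rec.achieved]
```

For example, a trainee who fails an element once and then passes it would show two attempts and no outstanding elements for that unit.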
Performance evidence from natural observation in the workplace
A second strategy for assessment of competence is one based on performance evidence from natural observation in the workplace, where the major source of evidence is observation of natural or routine performance on the job. Assessment is carried out by supervising tradespersons each time a trainee does work in one of the competency areas as part of the 'normal daily work routine', and records of each experience are maintained. This approach appears to have validity because the performance being assessed is real workplace performance under real conditions and is related to the units and elements of competence set out in the statement of standards. Assessment is holistic as it is based on real work, which generally requires the integration of skills, knowledge and attitudes.
To be effective this approach requires multiple observations of natural or routine work performance of all elements of competence. It usually involves multiple assessors to ensure quality control, tracking and recording of competence achieved. The extent to which this strategy is reliable, fair to all trainees and generally practicable and cost-effective depends on:
- adequate quality control to ensure consistency across assessors;
- the quality and range of workplace experience and training that can be provided by tradespeople;
- sensible decisions about the number of observations of performance required to make a reliable judgement about competence as the combination of circumstances in range statements and performance criteria can lead to a multiplication of assessment events.
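The multiplication of assessment events mentioned in the last point can be illustrated with a small calculation. The figures below are hypothetical, chosen only to show how quickly the totals grow.

```python
# Hypothetical figures for illustration only.
elements = 12       # elements of competence in a unit
conditions = 3      # range-statement conditions each element must be observed under
observations = 2    # observations per condition needed for a reliable judgement

total_events = elements * conditions * observations
print(total_events)  # 72 observation events for a single trainee
```

Even with modest numbers per element, a single unit can require dozens of recorded observations per trainee, which is why sensible limits on the number of required observations matter for practicability.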
Evidence from prior achievements or learning
A third strategy of assessment is based on evidence from prior achievements, where judgements about competence rest on recognising, assessing and accrediting prior achievements and learning. This is often achieved by learners providing a portfolio of evidence documenting their achievements. The portfolio may be supported by evidence from supplementary sources such as interviews, oral and written questions and simulations. This approach is more likely to be used to assess higher-level competence standards, such as those in management or supervisor training, where natural observation of performance in the workplace may not be a viable option.
To ensure that the required standards are reached, steps need to be taken so that the process of evidence collection and documentation is more than a file-keeping exercise. To enhance reliability, candidates can be trained in documentation and reflection techniques, and assessors need expertise in facilitating and assessing these techniques.