PNG FODE Coursewriters Manual
- Create a manual to guide FODE coursewriters in writing courses for secondary students
- Write simple guidelines collaboratively for coursewriters
- Use the guidelines in writing FODE courses
The Current Situation and the Need for a Coursewriters' Manual
FODE coursewriter performance has been an issue over the years because writers lack clear guidelines on how to write lessons for distance students. Most of FODE's course materials are old and outdated, and they are not user-friendly: they look formal and read like textbooks.
Because of this situation, the idea of producing a coursewriters' manual was conceived.
Nationally, education sector performance has been an issue among citizens. Almost every month, taxpayers have challenged DoE to demonstrate value for money in the provision of education services. The relevance of DoE and its mandates has been questioned in a country of increasing school population and emerging change. Similarly, with regard to programs and projects, the growing number of problem projects and the often unreported or unsatisfactory performance of completed projects emphasise the need for systematic M&E. Available evidence highlights that a significant percentage of DoE divisions and PDoEs have no systematic M&E, and there has been no departmental policy on research. Development programs and projects have been implemented in the past with no review or evaluation of whether they achieved, or failed to fully achieve, their envisaged development objectives. It is widely accepted that timely M&E and the use of reliable evaluative knowledge or action research can help DoE to improve policy and program/project design, increase returns on investments and speed up the implementation of ongoing projects.
The department is conscious and mindful of the fact that, at present, a high proportion of M&E resources is devoted to monitoring the physical and financial implementation of programs/projects, while little attention is devoted to assessing results in terms of the sustainability of projects, the delivery of services, the quality of education outcomes, or the achievements of the education reforms in various parts of the country. Monitoring and evaluation systems in the past have been implementation-biased and have tended to be disbanded on termination of the programs/projects. They have been data rich but information poor. M&E is, in many cases, donor-driven. Misperceptions of, and a lack of local demand for, M&E are further problems that inhibit the practice of M&E and the conduct of research. These issues need to be addressed immediately.
With an increased focus on the sector-wide approach (SWAp), the need to achieve results from the various interventions of donor agencies and other stakeholders has become extremely important. This pressure to demonstrate results has led to the introduction of a results-based management approach this year in almost all DoE divisions and the provinces. The need for planned and systematic M&E at the national DoE and the PDoEs is therefore timely. This becomes even more crucial as the department moves into the SWAp in 2008. The formal adoption of an M&E and research policy, and its implementation, would create an enabling environment to continuously track progress, review performance and fine-tune policies in order to realize the vision and goals of the National Education Plan (NEP). Furthermore, the creation of a suitable policy environment for M&E and research completes the tools package necessary for systematic M&E and links performance monitoring and evaluation to policy through the Policy, Planning and Research Division of DoE.
Objectives of the M&E and Research (MER) Policy
The MER Policy is intended to achieve the following objectives:
a. Promote the correct understanding of MER and create an M&E culture among DoE’s and PDoEs’ managers and staff to encourage them to 'manage for results'.
b. Promote the practice of M&E through catalysing the generation of necessary human and organizational capacities, tools and methodologies.
c. Enable the learning of lessons from past experiences to identify the kinds of policies, programmes, projects and delivery systems most likely to succeed and the factors most likely to contribute to that success.
d. Improve the design of development policies and programmes through effective integration of evaluation or research findings into the policy formulation, reforms, planning and budgeting processes.
e. Establish accountability, transparency and good governance.
Fundamental Principles of MER Policy
The MER policy is based on the following fundamental principles:
M&E are practical assessments serving practical ends, while research is undertaken for the advancement of knowledge and will be used to enhance and deepen M&E findings.
M&E should be seen primarily as an instrument of accountability and lesson learning. All DoE divisions and PDoEs should be encouraged to use results-based M&E for all their programs/projects and activities.
All types of evaluations (ex-post, impact and mid-term), which serve different purposes and are conducted at different phases of the project cycle, need to be encouraged. In order to have a wider perspective of development, the department accords special attention to the evaluation of the NEP, DoE's and PDoEs' performance, programs and projects, policies and thematic areas.
The M&E findings and lessons should be linked to the policy formulation, reforms, planning and budgeting processes.
M&E and reporting should be made mandatory for all DoE divisions and, under certain circumstances, for the PDoEs. Adequate provision should be made for them up-front.
The use of performance indicators and of a results-based framework or logical framework approach should be made mandatory for all policy, programme or project preparation initiatives, thereby making it possible to subsequently monitor and evaluate them meaningfully.
The department, as a learning organization, should learn from M&E findings and communicate and share information with other stakeholders. Findings of M&E on educational policies and programmes should be readily available to the public and the media. DoE should maintain this information on an intranet or the DoE website for this purpose.
Civil society organizations (CSOs), the private sector and academia should be encouraged to undertake M&E or conduct research in partnership with DoE.
The national DoE and PDoEs are responsible for ensuring that results-based M&E is well understood and put into practice by all stakeholders at their level.
It is the department's policy that all institutions working in the education sector embed M&E into their development management practices.
Scope of M&E
1. All DoE policies, programs and projects should be monitored on a regular basis. It may not be advisable to evaluate all development programs, for both practical and financial reasons. However, DoE should carry out at least two to four evaluation or research studies every two to three years, depending on the availability of funds. Further, the Research and Evaluation Committee (REC), with the assistance of the Monitoring, Evaluation and Research (MER) Team, would identify areas for evaluation on a rolling annual or biennial plan.
The REC should screen and select suitable projects for evaluation, for final endorsement by the TMT. This will enable the PPR, through the RDA, to track current evaluation studies. The selection committee should use the following criteria when selecting programs or projects for evaluation.
1. Policy relevance (e.g. poverty reduction).
2. National importance and the scale of funding.
3. The innovative value and replicability of the project or programme.
4. Public interest and the nature of the problem.
Implementation of National Evaluation Policy
Implementation of the National Evaluation Policy is the responsibility of all ministries and agencies involved in national development work. The Ministry of Policy Development and Implementation (MPDI) shall regularly provide the necessary assistance, guidelines, training and refresher courses to implement the National Evaluation Policy more effectively. The Central Performance Evaluation Unit (CPEU) of the MPDI will serve as the focal point for implementing the National Evaluation Policy.
Each Sectoral Ministry, when initiating independent evaluations within its own area of responsibility, should, in consultation with MPDI, involve the private sector, universities and CSOs. The respective Sectoral Ministry, in consultation with the Central Performance Evaluation Unit (CPEU) of the Ministry of Policy Development and Implementation and other relevant stakeholders, should develop the terms of reference (TOR) for such evaluations. The CPEU, on the other hand, is responsible for more comprehensive and strategically important evaluations of a thematic nature.
MPDI in close collaboration with professional evaluation associations should develop and promote evaluation culture, standards, guidelines, methodologies and best practices; monitor and develop evaluation capacity; sensitise policymakers; and facilitate the dissemination of evaluation findings.
The evaluations initiated by the Sectoral Ministries would tend to provide learning experiences, while those conducted by the Central Performance Evaluation Unit of the MPDI would be more oriented towards accountability and policy influence. Some compromise is needed between accountability and lesson learning, and it may be necessary to maintain a balance between the two.
Dissemination of Evaluation Findings
All Sectoral Ministries should forward reports of evaluations to the Central Performance Evaluation Unit of the MPDI. This will enable evaluation findings to be synthesized and linked to the Evaluation Information System (EIS) of the National Operations Room (NOR) of the Ministry of Policy Development and Implementation, and the Economic Policy Committee (EPC), to ensure integration of evaluation findings into the policy, planning, budgeting and reform processes. Evaluation information should be made accessible to the general public. The Sectoral Ministries should, after the completion of the evaluation dissemination workshop, prepare an action plan, which should define specific follow-up actions, timeframes and responsibilities, and ensure that the findings are integrated. Copies of such action plans and reports on their progress should be submitted to the MPDI.
The project proponents and the national planning authorities should ensure the incorporation of evaluation findings in the formulation of new projects and programmes. The project submission formats and related procedures should be suitably modified to internalise evaluation findings into the planning, budgeting, and policy formulation processes.
Guidelines, Methodologies, Standards and Ethics
Evaluation should examine the relevance, efficiency, effectiveness, impact and sustainability of policy or programme initiatives. Evaluation methodology should look into the financial, economic, social, environmental, gender, institutional and sustainability aspects. The use of financial and economic cost-benefit analysis to assess the return on investment needs to be encouraged. Moreover, the evaluation methodology should integrate social and environmental concerns. Beneficiary assessment should form an integral part of evaluating social programmes. Concerns for evaluation should be integrated at the time of planning and formulation of the project. Use of Logical Framework Analysis (LFA) with well-defined performance indicators at the time of project preparation is mandatory for projects of over US$10 million. Projects of less than US$10 million should also be encouraged to use the LFA approach whenever possible. As evaluations are practical investigations rather than scientific research studies, simple, cost-effective and less time-consuming participatory rapid appraisal methodologies may be used where possible.
It is also necessary to develop local evaluation methodologies, guidelines, standards, ethics and practices in line with accepted international standards. The Ministry of Policy Development and Implementation in collaboration with the Sri Lanka Evaluation Association and other CSOs should undertake this task.
Capacity Building and Partnerships
The availability of adequately skilled and competent human resources in evaluation is essential. Government recognises the need to build a professional cadre of evaluators and accords high priority to capacity building efforts. Universities and public sector training institutions should be encouraged to run evaluation modules as part of their normal programmes. The government would also encourage joint evaluations and regional networking to share knowledge on evaluation techniques and methodologies.
Sectoral ministries should strengthen their capacity for performance evaluation, ex-post evaluation and impact evaluations in their area of responsibility. The Ministry of Policy Development and Implementation must provide central direction for evaluation and should (a) upgrade the Central Performance Evaluation Unit as a centre of excellence to provide leadership, guidance and support to the practice of evaluation; (b) use evaluation findings where appropriate in decision making; (c) set standards, ethics and best practices; and (d) monitor evaluation capacity in the public sector.
The Central Evaluation Unit of MPDI jointly with professional civil society evaluation organizations will assist sectoral Ministries to build evaluation capacity, to develop standards and methodologies and to upgrade the capacity of their staff. As part of the efforts to build a local evaluation consultancy industry, the Sectoral Ministries may outsource evaluation work to private sector and civil society organizations (CSOs). Government will encourage such collaboration and partnership with NGOs and CSOs to introduce participatory evaluations in the public sector.
Many donor-funded post-evaluations have been conducted by donors themselves without much in-country participation. Such a unilateral approach, though it helps to ensure the objectivity of evaluation, neither assists in the development of in-country capacities nor helps to link evaluation to the overall planning process. The Government will encourage donors to strengthen in-country evaluation capacity. Moreover, all evaluation missions on foreign-funded projects, and all independent evaluations, should have links with the Central Performance Evaluation Unit to ensure central coordination on evaluation. A documentation centre would be in place at the CPEU to provide access to all evaluation reports.
Consultants and Contracting
The Sectoral Ministries shall select locally qualified, competent and experienced professional firms or individuals whenever possible. The government is committed to promoting domestic capacity in evaluation. Joint ventures between domestic evaluation professionals and foreign consultants should also be encouraged to transfer knowledge and skills on evaluation methodologies, techniques and practices.
It is necessary to have sufficient financial resources for conducting evaluations of an acceptable quality. Ministries and Provincial Councils should make the necessary provision in their annual budget estimates for the conduct of evaluations. In addition to financial support under the consolidated funds of the government, it is also necessary to have built-in funds under foreign-aided projects for the conduct of evaluations. The government needs to provide regular funding for post-evaluations, which generally cannot be built into foreign-funded projects. Similarly, financing arrangements should be made for institutional, policy and thematic evaluations.
The Ministry of Policy Development and Implementation will monitor this policy to ensure that it meets its intended objectives. The Secretary, Ministry of Policy Development and Implementation, in close consultation with professional CSOs such as the Sri Lanka Evaluation Association, the Chamber of Commerce and the Organization of Professional Associations, will monitor the implementation of the policy on an annual basis. A modality, which would, inter alia, reflect the creation of an evaluation culture in the public sector, would be developed for this purpose by the MPDI in consultation with the stakeholders.