Theory-based evaluation
What is it?
Theory-based evaluation has similarities to the LogFrame approach but allows a much more in-depth understanding of the workings of a program or activity—the "program theory" or "program logic." In particular, it need not assume simple linear cause-and-effect relationships. For example, the success of a government program to improve literacy levels by increasing the number of teachers might depend on a large number of factors. These include, among others, the availability of classrooms and textbooks; the likely reactions of parents, school principals, and schoolchildren; the skills and morale of teachers; the districts in which the extra teachers are to be located; and the reliability of government funding. By mapping out the determining or causal factors judged important for success, and how they might interact, evaluators can decide which steps should be monitored as the program develops, to see how well they are in fact borne out. This allows the critical success factors to be identified. Where the data show these factors have not been achieved, a reasonable conclusion is that the program is less likely to achieve its objectives.
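The mapping described above can be sketched in code. The following is an illustrative model only, not a standard tool: the literacy example's program theory is represented as a graph of factors and their dependencies, with hypothetical factor names and invented monitoring results, and a simple check flags when a failed assumption puts the program objective at risk.

```python
# Illustrative sketch of a program theory as a dependency graph.
# Factor names and monitoring results are hypothetical.

# Each factor maps to the factors it depends on.
program_theory = {
    "improved_literacy": ["effective_teaching"],
    "effective_teaching": ["teachers_deployed", "classrooms_available",
                           "textbooks_available", "teacher_morale"],
    "teachers_deployed": ["government_funding"],
    "classrooms_available": ["government_funding"],
    "textbooks_available": ["government_funding"],
    "teacher_morale": [],
    "government_funding": [],
}

# Monitoring data so far: which assumptions have been borne out.
observed = {
    "government_funding": True,
    "teachers_deployed": True,
    "classrooms_available": False,   # e.g. construction delayed
    "textbooks_available": True,
    "teacher_morale": True,
}

def at_risk(factor, theory, observed):
    """A factor is at risk if monitoring shows it failed, or if any
    factor it depends on is itself at risk."""
    if observed.get(factor) is False:
        return True
    return any(at_risk(dep, theory, observed)
               for dep in theory.get(factor, []))

# Which monitored assumptions have not held?
unmet = [f for f, ok in observed.items() if ok is False]
print("Unmet factors:", unmet)
print("Objective at risk:", at_risk("improved_literacy",
                                    program_theory, observed))
```

Here the failure of a single determining factor (classroom availability) propagates up the causal chain, signaling that the program is less likely to meet its objective—mirroring the reasoning the method applies informally.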
What can we use it for?
- Mapping the design of complex activities.
- Improving planning and management.
- Providing early feedback about what is or is not working, and why.
- Allowing early correction of problems as soon as they emerge.
- Assisting identification of unintended side effects of the program.
- Helping to prioritize which issues to investigate in greater depth, perhaps using more focused data collection or more sophisticated M&E techniques.
- Providing a basis for assessing the likely impacts of programs.
What are the limitations?
- Can easily become overly complex if the scale of activities is large or if an exhaustive list of factors and assumptions is assembled.
- Stakeholders might disagree about which determining factors they judge important, which can be time-consuming to address.
What does it cost?
Medium; cost depends on the depth of analysis and especially the depth of data collection undertaken to investigate the workings of the program.
What skills are needed?
A minimum of 3–5 days' training for facilitators.
How much time does it take?
Can vary greatly, depending on the depth of the analysis, the duration of the program or activity, and the depth of the M&E work undertaken.