Popis: |
John Mayne rounds out the discussion of the gap between the demand for and the supply of evaluation information by suggesting that impact evaluations often ask the wrong questions. In a world where interventions are increasingly complex and multi-faceted, involve many different actors, and take a long time to produce observable outcomes and impacts, evaluations that attempt to attribute results to any one set of factors are unlikely to yield meaningful findings. Such an approach, as suggested in other chapters, may result in simplistic management toward milestones rather than adaptive management that improves program design and implementation (and ultimately impact). It would be more productive, he argues, for commissioners of evaluations to recognize the inherent difficulty of securing answers to essentially useless questions about impact and to ask instead: Did the intervention contribute to the observed impacts? How and why did it make that contribution? What other causal factors were at play, and what was their relative importance? Are the results achieved sustainable? Will the intervention work elsewhere, and can it be scaled up? What lessons were learned? What is the likely future impact of the intervention? Only by asking meaningful evaluation questions such as these can commissioners expect to get meaningful answers. You tend to get what you ask for.