Keywords
DIST, climate change, evaluation, outcomes, transdisciplinary
Start Date
1-7-2008 12:00 AM
Abstract
This paper continues a series of reflections on the challenges of developing and deploying decision and information support tools (DIST) for environmental management. Our focus is on the additional challenge being posed by funders: evaluating outcomes, that is, changes in the world beyond the research institute. This is a significant raising of the bar for DIST, particularly when many still struggle to overcome the simpler hurdle of being used at all. The paper reflects on the challenge of evaluating outcomes, placing it in the context of conventional analyses of DIST performance (for example, validation, software testing and utility assessment). Several particular challenges are identified: the intangibility of many outcomes; the difficulty of receiving credit for post-project outcomes; how to disentangle cause and effect for the changes that occur; how to decide the relative importance of outcomes; and, finally, a failure to recognise that, despite best endeavours, research may still be ignored. The paper then presents a simple evaluation process conducted as part of a transdisciplinary research project that uses model-based indicators to communicate the consequences of climate change to land managers and to generate discussion of likely adaptations to management. The evaluations found that the process provided new information and raised awareness of the specific research being carried out. The project was also successful in changing views on climate change for a majority of attendees, particularly where existing levels of knowledge were limited. Yet despite the relative success of the evaluation process and the useful lessons learned, designing an evaluation process and interpreting its findings remain a serious challenge. The authors conclude by questioning whether outcome evaluation is a vital requirement for successful DIST development or whether it generates expectations that cannot be met.
Raising the Bar – Is evaluating the outcomes of decision and information support tools a bridge too far?