Title

Propagating Data Uncertainty and Variability into Flow Predictions in Ungauged Basins

Presenter/Author Information

Andrew D. Gronewold
Ibrahim M. Alameddine

Keywords

rainfall-runoff models, ungauged basins, IHACRES, data variability, Bayesian

Start Date

July 1, 2010, 12:00 AM

Abstract

Explicitly acknowledging uncertainty and variability in model-based hydrological forecasts is a challenging task. Many basins are ungauged, undergoing rapid land-use change, or located in regions expected to experience significant climate change. These factors, in addition to uncertainty in monitoring data and model structure, collectively contribute to discrepancies between model predictions and observations. Few hydrological modeling studies, however, routinely quantify data uncertainty, and fewer still compare model forecasts to observations while accounting for the intrinsic uncertainty in the model itself. To bridge this research gap, we test a series of rainfall-runoff models within gauged and ungauged basins in eastern North Carolina (USA). In the model calibration phase, we propagate data uncertainty into model forecasts within a Bayesian framework. We then assess model suitability by examining the distribution of Bayesian posterior p-values (defined as the model-derived probability of a flow measurement as extreme as, or more extreme than, the one observed). Evaluating model performance in this way helps identify potential sources of model bias and error, and clearly demonstrates the magnitude of those errors relative to the other potential sources of variability and uncertainty in the model forecast.
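
As an illustration of the diagnostic described above, the following is a minimal sketch (in Python with NumPy, not the authors' code) of how Bayesian posterior p-values could be computed once flows have been replicated under each posterior parameter draw from a calibrated model. The function name, array shapes, and the synthetic lognormal flows are illustrative assumptions.

    import numpy as np

    def posterior_p_values(flow_rep, flow_obs):
        """Estimate Bayesian posterior p-values for observed flows.

        flow_rep : array of shape (n_draws, n_obs); flows simulated from the
            model under each posterior parameter draw (e.g., MCMC output).
        flow_obs : array of shape (n_obs,); observed flow measurements.

        Returns, per observation, the fraction of replicated flows as extreme
        as or more extreme than the observed value, i.e., an estimate of
        Pr(Q_rep >= Q_obs | data).
        """
        return (flow_rep >= flow_obs[np.newaxis, :]).mean(axis=0)

    # Hypothetical example with synthetic lognormal flows; a real application
    # would use gauged flows and posterior predictive draws from the
    # rainfall-runoff model.
    rng = np.random.default_rng(42)
    flow_obs = rng.lognormal(mean=2.0, sigma=0.5, size=100)
    flow_rep = rng.lognormal(mean=2.0, sigma=0.5, size=(4000, 100))
    print(posterior_p_values(flow_rep, flow_obs)[:5])

Under a well-specified model the resulting p-values should be roughly uniform on (0, 1); a distribution piled up near 0 or 1 indicates that forecasts systematically under- or over-predict the observed flows relative to their stated uncertainty.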
