Diogo Costa and 5 more

This work advances the incorporation and cross-model deployment of multi-biogeochemistry and ecological simulations in existing process-based hydro-modelling tools. It aims to transform the current practice of water quality modelling from an isolated research effort into a more integrated and collaborative activity across science communities. Our approach, which we call “Open Water Quality” (OpenWQ), enables existing hydrological, hydrodynamic, and groundwater models to extend their capabilities to water quality simulations, which can be set up to examine a variety of water-related pollution problems. OpenWQ’s objective is to provide a flexible biogeochemical model representation that can be used to test different modelling hypotheses in a multi-disciplinary, co-creative process. In this paper, we introduce the general approach used in OpenWQ and detail aspects of its architecture that enable its coupling with existing models. This integration enables water quality models to benefit from advances made by hydrologic- and hydrodynamic-focused groups, strengthening collaboration between the hydrological, biogeochemistry, and soil science communities. We also detail innovative aspects of OpenWQ’s modules that enable biogeochemistry lab-like capabilities, where modellers can define the pollution problem(s) of interest, choose the appropriate complexity of the biogeochemistry routines, and test different modelling hypotheses. In a companion paper, we demonstrate how OpenWQ has been coupled to two hydrological models, the “Structure for Unifying Multiple Modelling Alternatives” (SUMMA) and the “Cold Regions Hydrological Model” (CRHM), showcasing the innovative aspects of OpenWQ, the flexibility of its couplers and internal spatiotemporal data structures, and the versatile eco-modelling lab capabilities that can be used to study different pollution problems.
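To convey the coupling pattern in concrete terms, the sketch below shows how a host model's time loop might hand its states to a water quality coupler at each step, with the coupler applying user-defined biogeochemical source/sink terms. All class and method names here are hypothetical, invented purely for illustration; they are not OpenWQ's actual API.

```python
# Purely illustrative: the names below are hypothetical and do NOT
# reflect OpenWQ's real interface. They only sketch the pattern the
# paper describes: the host model drives hydrology/transport, while
# the coupler handles biogeochemical source/sink terms.
import numpy as np

class HypotheticalWQCoupler:
    def __init__(self, n_cells, species):
        self.species = species
        # Internal spatial store: one concentration field per species
        # (initial concentrations in arbitrary units, for illustration)
        self.conc = {s: np.ones(n_cells) for s in species}

    def advance(self, dt, temperature, moisture):
        """Called by the host model each time step: apply a user-defined
        biogeochemical reaction (here a simple first-order decay whose
        rate depends on the host model's temperature and moisture states)."""
        rate = 0.01 * moisture * (1.0 + 0.05 * (temperature - 20.0))
        for s in self.species:
            self.conc[s] *= np.exp(-rate * dt)

# Host-model time loop: hydrology computes states and fluxes, then
# passes them to the coupler, which updates the chemistry.
wq = HypotheticalWQCoupler(n_cells=100, species=["NO3", "P"])
for step in range(24):
    wq.advance(dt=1.0, temperature=15.0, moisture=0.3)
```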
Characterizing climate change impacts on water resources typically relies on Global Climate Model (GCM) outputs that are bias-corrected using observational datasets. In this process, two pivotal decisions are (i) the Bias Correction Method (BCM) and (ii) how to handle the historically observed time series, which can be used as a continuous whole (i.e., without dividing it into sub-periods) or partitioned into monthly, seasonal (e.g., three-month), or any other temporal stratification (TS). Here, we examine how the interplay between the choice of BCM, the choice of TS, and the raw GCM seasonality may affect historical portrayals and projected changes. To this end, we use outputs from 29 CMIP6 GCMs under the Shared Socioeconomic Pathway 5–8.5 scenario, using seven BCMs and three TSs (entire period, seasonal, and monthly). The results show that the effectiveness of BCMs in removing biases can vary depending on the TS and the climate indices analyzed. Further, the choice of BCM and TS may yield different projected change signals and seasonality (especially for precipitation), even for climate models with low bias and a reasonable representation of precipitation seasonality during a reference period. Because some BCMs may be computationally expensive, we recommend using the linear scaling method as a diagnostic tool to assess how the choice of TS may affect the projected precipitation seasonality of a specific GCM. More generally, the results presented here unveil trade-offs in the way BCMs are applied, regardless of the climate regime, and urge the hydroclimate community to implement these techniques carefully.
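As a concrete illustration of the recommended diagnostic, the sketch below applies standard multiplicative linear scaling to precipitation under two temporal stratifications (entire period versus monthly). It is a minimal sketch assuming daily pandas Series with a DatetimeIndex; the function name and synthetic data are illustrative, not the study's implementation.

```python
import numpy as np
import pandas as pd

def linear_scaling_precip(obs, sim_hist, sim_fut, stratify_monthly=True):
    """Multiplicative linear scaling for precipitation.

    obs, sim_hist, sim_fut: daily pandas Series with a DatetimeIndex.
    With monthly stratification, one correction factor is computed per
    calendar month; otherwise a single factor covers the entire period.
    """
    if stratify_monthly:
        # One factor per calendar month: mean(obs) / mean(sim) in each month
        factors = (obs.groupby(obs.index.month).mean()
                   / sim_hist.groupby(sim_hist.index.month).mean())
        return sim_fut * sim_fut.index.month.map(factors).to_numpy()
    # Single factor over the whole record
    return sim_fut * (obs.mean() / sim_hist.mean())

# Toy demonstration with synthetic daily precipitation
idx = pd.date_range("1981-01-01", "2010-12-31", freq="D")
rng = np.random.default_rng(1)
obs = pd.Series(rng.gamma(2.0, 2.0, len(idx)), index=idx)
sim = pd.Series(rng.gamma(2.0, 2.5, len(idx)), index=idx)  # wet-biased model
corrected = linear_scaling_precip(obs, sim, sim, stratify_monthly=True)
```

Comparing the seasonal cycle of `corrected` under the two stratifications gives a quick, cheap indication of how sensitive a given GCM's projected precipitation seasonality is to the TS choice.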
Despite the proliferation of computer-based research on hydrology and water resources, such research is typically poorly reproducible: data and computer code are often incompletely available, and workflow processes are rarely documented. This leads to a lack of transparency and efficiency, because existing code can neither be quality controlled nor re-used. Given the commonalities between existing process-based hydrological models in terms of their required input data and preprocessing steps, open sharing of code can lead to large efficiency gains for the modeling community. Here we present a model configuration workflow that provides full reproducibility of the resulting model instantiations in a way that separates the model-agnostic preprocessing of specific datasets from the model-specific requirements that models impose on their input files. We use this workflow to create large-domain (global, continental) and local configurations of the Structure for Unifying Multiple Modeling Alternatives (SUMMA) hydrologic model connected to the mizuRoute routing model. These examples show how a relatively complex model setup over a large domain can be organized in a reproducible and structured way that has the potential to accelerate advances in hydrologic modeling for the community as a whole. We provide a tentative blueprint of how community modeling initiatives can be built on top of workflows such as this. We term our workflow the “Community Workflows to Advance Reproducibility in Hydrologic Modeling” (CWARHM; pronounced “swarm”).
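The separation the workflow draws between model-agnostic and model-specific steps can be sketched as an ordered pipeline in which each step reads from and writes to well-defined locations on disk, so any step can be re-run, audited, or swapped independently. The directory layout and function names below are hypothetical illustrations of the pattern, not CWARHM's actual code.

```python
from pathlib import Path

# Each step is a small function with a documented input/output folder,
# leaving an auditable artifact on disk (the key reproducibility idea).
def subset_forcing(root: Path):
    """Model-agnostic: clip the raw forcing dataset to the study domain."""
    (root / "forcing" / "1_subset").mkdir(parents=True, exist_ok=True)

def remap_forcing(root: Path):
    """Model-agnostic: areally average the forcing to model elements."""
    (root / "forcing" / "2_remapped").mkdir(parents=True, exist_ok=True)

def make_summa_inputs(root: Path):
    """Model-specific: rename variables and reshape files into the exact
    formats SUMMA expects; only this step knows about SUMMA."""
    (root / "settings" / "summa").mkdir(parents=True, exist_ok=True)

PIPELINE = [subset_forcing, remap_forcing, make_summa_inputs]

if __name__ == "__main__":
    root = Path("./domain_example")
    for step in PIPELINE:
        step(root)
```

Because only the final step is model-specific, the upstream artifacts can be shared across models, which is where the community-wide efficiency gains come from.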

Lina Stein and 4 more

Hydroclimatic flood generating processes, such as excess rain, short rain, long rain, snowmelt, and rain-on-snow, underpin our understanding of flood behaviour. Knowledge about flood generating processes helps to improve modelling decisions, flood frequency analysis, and estimates of climate change impacts on floods. Yet, little is known about how climate and catchment attributes influence the distribution of flood generating processes. With this study, we offer a comprehensive and structured approach to close this knowledge gap. We employ a large-sample approach (671 catchments in the conterminous United States) and test attribute influence on flood processes with two complementary approaches: first, a data-based approach that compares attribute probability distributions of different flood processes, and second, a random forest model combined with an interpretable machine learning approach (accumulated local effects). This machine learning technique is new to hydrology, and it overcomes a significant obstacle in many statistical methods: the confounding effect of correlated catchment attributes. As expected, we find climate attributes (fraction of snow, aridity, precipitation seasonality, and mean precipitation) to be most influential on flood process distribution. However, attribute influence varies both with process and climate type. We also find that flood processes can be predicted for ungauged catchments with relatively high accuracy (R2 between 0.45 and 0.9). The implication of these findings is that flood processes should be taken into account in future climate change impact studies, as impacts will vary between processes.
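To make the interpretation technique concrete, here is a minimal, self-contained sketch of first-order accumulated local effects (ALE) computed manually for one feature of a fitted scikit-learn random forest. The synthetic data, function name, and bin scheme are illustrative assumptions, not the study's code or catchment attributes.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

def accumulated_local_effects(model, X, feature, n_bins=20):
    """First-order ALE for one numeric feature.

    Within each quantile bin of the feature, predict with the feature
    pinned to the bin's lower and upper edge for the SAME rows, average
    the prediction differences (the 'local effect'), then accumulate
    across bins. Unlike partial dependence, only rows that actually fall
    in a bin contribute, which limits distortion from correlated features.
    """
    x = X[:, feature]
    edges = np.unique(np.quantile(x, np.linspace(0, 1, n_bins + 1)))
    effects = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        in_bin = (x >= lo) & (x <= hi)
        if not in_bin.any():
            effects.append(0.0)
            continue
        X_lo, X_hi = X[in_bin].copy(), X[in_bin].copy()
        X_lo[:, feature], X_hi[:, feature] = lo, hi
        effects.append(np.mean(model.predict(X_hi) - model.predict(X_lo)))
    ale = np.cumsum(effects)
    return edges, ale - ale.mean()  # center the curve around zero

# Toy usage with synthetic "catchment attributes"
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))  # e.g., aridity, snow fraction, ...
y = X[:, 0] ** 2 + 0.5 * X[:, 1] + rng.normal(scale=0.1, size=500)
rf = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)
edges, ale = accumulated_local_effects(rf, X, feature=0)
```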

Raymond Spiteri and 3 more

The next generation of Earth System models promises unprecedented predictive power through the application of improved physical representations, data collection, and high-performance computing. A key component of the accuracy, efficiency, and robustness of Earth System simulations is the time integration of the differential equations describing the physical processes. Many existing Earth System models are simulated using low-order, constant-stepsize time-integration methods with no error control, leaving them prone to being inaccurate, inefficient, or requiring an infeasible amount of manual tweaking when run over multiple heterogeneous domains or scales. We have implemented the variable-stepsize, variable-order differential equation solver SUNDIALS as the time integrator within the Structure for Unifying Multiple Modelling Alternatives (SUMMA) model framework. The model equations in SUMMA were modified and augmented to express conservation of mass and enthalpy. Water and energy balance errors were tracked and kept below a strict tolerance. The resulting SUMMA-SUNDIALS software was successfully run in a fully automated fashion to simulate hydrological processes on the North American continent, sub-divided into over 500,000 catchments. We compared the performance of SUMMA-SUNDIALS with a version (called SUMMA-BE) that used the backward Euler method with a fixed stepsize as the time-integration method. We find that SUMMA-BE required two orders of magnitude more CPU time to produce solutions of comparable accuracy to SUMMA-SUNDIALS. Solutions obtained with SUMMA-BE in a similar or shorter amount of CPU time than SUMMA-SUNDIALS often contained large discrepancies. We conclude that sufficient accuracy, efficiency, and robustness of next-generation Earth System model simulations can realistically only be obtained through the use of adaptive solvers. Furthermore, we suggest that simulations produced with low-order, constant-stepsize solvers deserve more scrutiny in terms of their accuracy.
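The flavour of the comparison can be reproduced on a small stiff test problem: an adaptive, error-controlled BDF integrator (the family of methods behind SUNDIALS' CVODE) versus a fixed-stepsize backward Euler scheme. The SciPy sketch below is a stand-in under simplifying assumptions; SUMMA's coupled mass and enthalpy equations are far more complex than this scalar ODE.

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import fsolve

# Stiff test problem, a toy stand-in for stiff land-model balance
# equations: y' = -50 * (y - cos(t))
def f(t, y):
    return -50.0 * (y - np.cos(t))

# Adaptive, variable-stepsize, error-controlled solver (BDF)
sol = solve_ivp(f, (0.0, 10.0), [0.0], method="BDF", rtol=1e-6, atol=1e-8)

# Fixed-stepsize backward Euler: solve y_{n+1} = y_n + h*f(t_{n+1}, y_{n+1})
# implicitly at every step, with no error control.
def backward_euler(f, t_span, y0, h):
    ts = np.arange(t_span[0], t_span[1] + h, h)
    ys = [y0]
    for t in ts[1:]:
        g = lambda y: y - ys[-1] - h * f(t, y)
        ys.append(fsolve(g, ys[-1])[0])
    return ts, np.array(ys)

ts, ys = backward_euler(f, (0.0, 10.0), 0.0, h=0.1)

# The adaptive solver picks its own steps to meet the tolerance;
# first-order backward Euler needs a much smaller h for similar accuracy.
print(len(sol.t), "adaptive steps vs", len(ts), "fixed steps")
```

Shrinking `h` until backward Euler matches the BDF solution illustrates, in miniature, the two-orders-of-magnitude CPU-time gap the abstract reports.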