A stochastic horizontal subgrid-scale mixing scheme is evaluated in ensemble simulations of a tropical oceanic deep convection case using a horizontal grid spacing (Δh) of 3 km. The stochastic scheme, which perturbs the horizontal mixing coefficient according to a prescribed spatiotemporal autocorrelation scale, generally increases mesoscale organization and convective intensity relative to a non-stochastic control simulation. Perturbations applied at relatively short autocorrelation scales induce systematic differences relative to the control, whereas perturbations applied at relatively long scales yield more variable outcomes. A simulation with mixing enhanced by a constant factor of 4 significantly increases mesoscale organization and convective intensity, while turning off horizontal subgrid-scale mixing decreases both. Total rainfall is modulated by a combination of mesoscale organization, areal coverage of convection, and convective intensity. The stochastic simulations tend to behave more like the constant enhanced-mixing simulation because enhanced mixing has a greater impact than reduced mixing. The impacts of stochastic mixing are shown to be robust by comparing the stochastic mixing ensembles with a non-stochastic mixing ensemble that has grid-scale noise added to the initial thermodynamic field. Compared to radar observations and a higher-resolution Δh = 1 km simulation, stochastic mixing appears to degrade simulation performance. These results imply that stochastic mixing produces non-negligible impacts on convective system properties and evolution but does not lead to an improved representation of convective cloud characteristics in the case studied here.
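
As a minimal sketch of the kind of perturbation scheme described above, the following Python snippet generates a mean-one multiplicative perturbation field for a horizontal mixing coefficient, assuming an AR(1) process in time and Gaussian smoothing in space to impose the autocorrelation scales. The function name, parameter values (tau, l_corr, sigma), and the log-normal multiplier are illustrative assumptions, not the scheme used in the study.

```python
# Sketch: spatiotemporally correlated multiplicative perturbation of a
# horizontal mixing coefficient K_h. Assumed design: AR(1) in time,
# Gaussian spatial filtering, mean-one log-normal multiplier.
import numpy as np
from scipy.ndimage import gaussian_filter

def perturbed_mixing(K_h, nx, ny, nsteps, dt, dx,
                     tau=600.0, l_corr=9.0e3, sigma=0.5, seed=0):
    """Yield K_h multiplied by a correlated random factor each time step."""
    rng = np.random.default_rng(seed)
    psi = np.zeros((ny, nx))                  # standardized noise field
    alpha = np.exp(-dt / tau)                 # AR(1) temporal correlation
    for _ in range(nsteps):
        white = rng.standard_normal((ny, nx))
        smooth = gaussian_filter(white, sigma=l_corr / dx, mode="wrap")
        smooth /= smooth.std()                # restore unit variance
        psi = alpha * psi + np.sqrt(1.0 - alpha**2) * smooth
        yield K_h * np.exp(sigma * psi - 0.5 * sigma**2)  # mean-one factor

# Example: 3-km grid, 10-minute decorrelation time, 9-km spatial scale.
for K in perturbed_mixing(K_h=100.0, nx=128, ny=128, nsteps=3,
                          dt=10.0, dx=3.0e3):
    print(K.mean())
```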

Kara Diane Lamb et al.

Representing cloud microphysical processes in large-scale atmospheric models is challenging because many processes depend on the details of the droplet size distribution (DSD, the spectrum of droplets with different sizes in a cloud). While full or partial statistical moments of droplet size distributions are the typical basis set used in bulk models, prognostic moments are limited in their ability to represent microphysical processes across the range of conditions experienced in the atmosphere. Microphysical parameterizations employing prognostic moments are known to suffer from structural uncertainty in their representations of inherently higher-dimensional cloud processes, which limits model fidelity and leads to forecasting errors. Here we investigate how data-driven reduced-order modeling can be used to learn predictors for microphysical process rates in bulk microphysics schemes in an unsupervised manner from higher-dimensional bin distributions, by simultaneously learning lower-dimensional representations of droplet size distributions and predicting the evolution of the microphysical state of the system. Droplet collision-coalescence, the main process for generating warm rain, is estimated to have an intrinsic dimension of 3. This intrinsic dimension provides a lower limit on the number of degrees of freedom needed to accurately represent collision-coalescence in models. We demonstrate how deep-learning-based reduced-order modeling can be used to discover intrinsic coordinates describing the microphysical state of the system, in which process rates such as collision-coalescence are globally linearized. These implicitly learned representations of the DSD retain more information about the DSD than typical moment-based representations.
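
The following PyTorch sketch illustrates the kind of reduced-order model described above: an autoencoder compresses a binned DSD into a low-dimensional latent state, and a single linear operator advances that state in time, so the learned dynamics are globally linear in the intrinsic coordinates. The layer sizes, the 3-D latent dimension, and the loss weighting are assumptions for illustration; the paper's architecture may differ.

```python
# Sketch: autoencoder with globally linear latent dynamics (Koopman-style
# reduced-order model) for binned droplet size distributions.
import torch
import torch.nn as nn

class LinearLatentROM(nn.Module):
    def __init__(self, n_bins=32, latent_dim=3):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(n_bins, 64), nn.ReLU(),
                                     nn.Linear(64, latent_dim))
        self.decoder = nn.Sequential(nn.Linear(latent_dim, 64), nn.ReLU(),
                                     nn.Linear(64, n_bins))
        # One global linear operator advances the latent state one step.
        self.K = nn.Linear(latent_dim, latent_dim, bias=False)

    def forward(self, x_t):
        z_t = self.encoder(x_t)
        return self.decoder(z_t), self.decoder(self.K(z_t))

def loss_fn(model, x_t, x_next):
    recon, pred = model(x_t)
    return (nn.functional.mse_loss(recon, x_t)       # reconstruct the DSD
            + nn.functional.mse_loss(pred, x_next))  # predict its evolution

# One illustrative training step on random stand-in data (8 samples, 32 bins).
model = LinearLatentROM()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x_t, x_next = torch.rand(8, 32), torch.rand(8, 32)
loss = loss_fn(model, x_t, x_next)
loss.backward()
opt.step()
print(float(loss))
```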
Stratocumulus clouds, a key component of global climate, are sensitive to aerosol properties. Aerosol-cloud-precipitation interactions in these clouds influence their closed-to-open cell dynamical transition and hence cloud cover and radiative forcing. This study uses large-eddy simulations with Lagrangian super-particle and bin microphysics schemes to investigate the impacts of aerosol scavenging and physical processing by clouds on drizzle initiation and the cellular transition process. The simulation using Lagrangian microphysics with explicit representation of cloud-borne aerosol and scavenging shows significant aerosol processing that impacts precipitation generation and consequently the closed-to-open cell transition. Sensitivity simulations using the bin scheme and their comparison with the Lagrangian microphysics simulation suggest that reduced aerosol concentration due to scavenging is a primary microphysical catalyst for enhanced precipitation in the Lagrangian scheme. However, changes in the aerosol distribution shape through processing also contribute appreciably to the differences in precipitation rate. Thus, both aerosol scavenging and processing drive earlier rain formation and the transition to open cells in the simulation with Lagrangian microphysics. This study also highlights a shortcoming of Eulerian bin microphysics: numerical diffusion produces smaller mean drop radii and cloud water mixing ratios. The initially larger mean radius and cloud mixing ratios in the Lagrangian scheme induce faster rain development compared to the bin scheme. This positive feedback in turn accelerates aerosol removal and further rain production in the Lagrangian scheme, consequently reducing cloud droplet number while increasing mean size and droplet spectral width.
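
A minimal sketch of the aerosol bookkeeping that Lagrangian super-particle microphysics enables, as discussed above: each super-droplet carries its dissolved aerosol mass, so collision-coalescence accumulates solute into fewer, larger particles (processing), and complete evaporation releases the processed aerosol back to the environment. The dataclass fields, the equal-multiplicity merge rule, and all numbers are illustrative assumptions, not the schemes used in the study.

```python
# Sketch: super-droplet aerosol processing/scavenging bookkeeping.
from dataclasses import dataclass

@dataclass
class SuperDroplet:
    multiplicity: float   # number of real droplets represented
    water_mass: float     # kg of water per real droplet
    aerosol_mass: float   # kg of dissolved aerosol per real droplet

def coalesce(a: SuperDroplet, b: SuperDroplet) -> SuperDroplet:
    """Merge b into a (equal multiplicities assumed for simplicity).
    Water and solute add, so the surviving droplet carries the combined
    aerosol mass -- the 'processing' that reshapes the aerosol size
    distribution when the drop later evaporates."""
    assert a.multiplicity == b.multiplicity
    return SuperDroplet(a.multiplicity,
                        a.water_mass + b.water_mass,
                        a.aerosol_mass + b.aerosol_mass)

def evaporate(d: SuperDroplet) -> float:
    """Complete evaporation releases one processed aerosol particle per
    real droplet; returns the total aerosol mass returned to the air."""
    return d.multiplicity * d.aerosol_mass

drop = coalesce(SuperDroplet(1e8, 1e-12, 1e-18),
                SuperDroplet(1e8, 2e-12, 5e-18))
print(drop.aerosol_mass, evaporate(drop))
```

If the merged drop instead precipitates to the surface, its aerosol mass is removed from the system entirely, which is the scavenging pathway that reduces aerosol concentration in the abstract above.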

Adele Igel et al.

Warm-rain collision-coalescence has been persistently difficult to parameterize in bulk microphysics schemes. Here we use a flexible bulk microphysics scheme with bin scheme process parameterizations, called AMP, to investigate reasons for the difficulty. AMP is configured in a variety of ways to mimic bulk schemes and is compared to simulations with the bin scheme upon which AMP is built. We find that the biggest limitation in traditional bulk schemes is the use of separate cloud and rain categories. When the drop size distribution is instead represented by a continuous distribution with or without an explicit functional form, the simulation of cloud-to-rain conversion is substantially improved. We find that the use of an assumed double-mode gamma distribution and the choice of predicted distribution moments do somewhat influence the ability of AMP to simulate rain production, but much less than using a single liquid category compared to separate cloud and rain categories. Traditional two-category configurations of AMP are always too slow to produce rain because they struggle to capture the emergence of the rain mode. Single-category configurations may produce rain either too slowly or too quickly, with overly slow production more likely for initially narrow droplet size distributions. However, the average error magnitude is much smaller using a single category than two categories. Optimal moment combinations for the single-category approach appear to be linked more to the information content they provide for constraining the size distributions than to their correlation with collision-coalescence rates.
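
To make the moment-based representation concrete, the following sketch recovers the parameters of a gamma DSD, n(D) = N0 D^mu exp(-lam D), from three prognostic moments. The choice of M0, M1, M2 is an illustrative assumption (it admits a closed-form inversion); AMP explores other moment combinations.

```python
# Sketch: fit gamma-distribution parameters from three DSD moments.
# For a gamma DSD, M_k = N0 * Gamma(mu+k+1) / lam**(mu+k+1), so
# M1**2/(M0*M2) = (mu+1)/(mu+2) and M1/M0 = (mu+1)/lam.
import numpy as np
from scipy.special import gamma as G

def fit_gamma(M0, M1, M2):
    ratio = M1**2 / (M0 * M2)                 # = (mu+1)/(mu+2)
    mu = (2.0 * ratio - 1.0) / (1.0 - ratio)
    lam = (mu + 1.0) * M0 / M1
    N0 = M0 * lam**(mu + 1.0) / G(mu + 1.0)
    return N0, mu, lam

# Verify the round trip: moments of a known gamma DSD on a bin grid.
N0_true, mu_true, lam_true = 1.0e8, 2.0, 5.0e4    # arbitrary test values
D = np.linspace(1e-6, 5e-4, 4000)                 # drop diameter grid [m]
n = N0_true * D**mu_true * np.exp(-lam_true * D)
dD = D[1] - D[0]
M = [np.sum(n * D**k) * dD for k in range(3)]
print(fit_gamma(*M))   # approximately (1e8, 2.0, 5e4)
```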

Andrew Gettelman et al.

Clouds are one of the most critical yet uncertain aspects of weather and climate prediction. The complex nature of sub-grid scale cloud processes makes traceable simulation of clouds across scales difficult (or impossible). Often models and measurements are used to develop empirical relationships for large-scale models to be computationally efficient. Machine learning provides another potential tool to improve our empirical parameterizations of clouds. To explore these opportunities, we replace the warm rain formation process in a General Circulation Model (GCM) with a detailed treatment from a bin microphysical model that causes a 400% slowdown in the GCM. We analyze the changes in climate that result from the use of the bin microphysical calculation and find improvements in the rain onset and frequency of light rain compared to detailed models and observations. We also find a resulting change in the cloud feedback response of the model to warming, which will significantly impact the climate sensitivity. We then emulate this process with an emulator consisting of multiple neural networks that predict whether specific tendencies will be nonzero and the magnitude of the nonzero tendencies. We describe the risks of over-fitting, extrapolation, and linearization of a non-linear problem using perfect-model experiments with and without the emulator, and we show that the emulators recover the solutions in almost all respects while regaining nearly all of the lost speed, yielding simulations that perform like the detailed model at roughly the computational cost of the control simulation.
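
A minimal sketch of the two-stage emulator design described above: one network classifies whether a microphysical tendency is nonzero, another regresses its magnitude, and their product is the emulated tendency. The layer sizes, input feature count, and gating threshold are assumptions for illustration.

```python
# Sketch: classify-then-regress neural emulator for process tendencies.
import torch
import torch.nn as nn

class TendencyEmulator(nn.Module):
    def __init__(self, n_features=8, hidden=64):
        super().__init__()
        def mlp():
            return nn.Sequential(nn.Linear(n_features, hidden), nn.ReLU(),
                                 nn.Linear(hidden, 1))
        self.classifier = mlp()   # logit of P(tendency != 0)
        self.regressor = mlp()    # magnitude of the nonzero tendency

    def forward(self, x):
        active = torch.sigmoid(self.classifier(x)) > 0.5  # hard gate
        return self.regressor(x) * active                 # zero if inactive

model = TendencyEmulator()
x = torch.rand(4, 8)   # stand-in grid-column features
print(model(x).squeeze(-1))
```

In this design, the classifier would be trained with a binary cross-entropy loss on all samples and the regressor with a regression loss on only the nonzero samples; the hard gate shown here is an inference-time simplification.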