Jacqueline M. Nugent

and 2 more

The cold point tropopause, the temperature minimum in the tropical upper troposphere–lower stratosphere (UTLS) region, significantly impacts Earth’s climate by controlling the amount of water vapor entering the lower stratosphere. Understanding which mechanisms are most important in setting the cold point temperature and height may help us better predict how it will change in a warmer future climate. The goal of this analysis is to evaluate two mechanisms that may influence the cold point – cold point-overshooting convection and the radiative lofting of thin cirrus near the cold point – by comparing 30-day global storm-resolving model (GSRM) simulations from the winter phase of the DYAMOND initiative to satellite observations. GSRMs have explicit deep convection and sufficiently fine grid spacing to simulate convective overshoots and UTLS cirrus, making them a promising tool for this purpose. We find that the GSRMs reproduce the observed distribution of cold point-overshooting convection but do not simulate enough cirrus capable of radiative lofting near the cold point. Both the models and the observations show a strong relationship between regions of frequent cold point overshoots and colder cold points, suggesting that cold point-overshooting convection has a notable influence on the mean cold point. However, we find little evidence that the radiative lofting of cold point cirrus substantially influences the cold point. Cold point-overshooting convection alone cannot explain all variations in the cold point across GSRMs or regions; future studies using longer GSRM simulations that account for longer-term UTLS processes are needed to fully understand what sets the cold point.

W. Andre Perkins

and 3 more

We present a machine-learning-based emulator of a microphysics scheme for condensation and precipitation processes (Zhao-Carr) used operationally in a global atmospheric forecast model (FV3GFS). Our tailored emulator architecture achieves high skill (≥94%) in predicting condensate and precipitation amounts and maintains low global-average bias (≤4%) over 1 year of continuous simulation when replacing the Fortran scheme. The stability and success of this emulator stem from key design decisions. By separating the emulation of condensation and precipitation processes, we can better enforce physical priors – such as mass conservation, the locality of condensation, and the downward fall of precipitation through the column – using specific network architectures. An activity classifier for condensation imitates the discrete-continuous nature of the Fortran microphysics outputs (i.e., tendencies are identically zero where the scheme is inactive, and condensate is zero where clouds are fully evaporated). A temperature-scaled conditional loss function ensures accurate condensate adjustments across a high dynamic range of cloud types (e.g., cold, low-condensate cirrus clouds or warm, condensate-rich clouds). Despite excellent overall performance, the emulator exhibits some deficiencies in the uppermost model levels, leading to biases in the stratosphere. The emulator also has short episodic skill dropouts in isolated grid columns and is computationally slower than the original Fortran scheme. Nonetheless, our challenges and strategies should be applicable to the emulation of other microphysical schemes. More broadly, our work demonstrates that with suitable, physically motivated architectural choices, ML techniques can accurately emulate complex human-designed parameterizations of fast physical processes central to weather and climate models.
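The discrete-continuous output strategy described above can be illustrated with a minimal sketch. Everything here is hypothetical scaffolding: the closed-form `activity_classifier` and `condensate_regressor` stand in for trained neural networks, and the weighting in `temperature_scaled_loss` is an illustrative choice, not the paper's actual loss function.

```python
import numpy as np

# Hypothetical stand-ins for the emulator's two heads: in the real emulator
# these are trained neural networks, not closed-form functions.
def activity_classifier(temperature, humidity):
    """Probability that the condensation scheme is 'active' in each cell."""
    return 1.0 / (1.0 + np.exp(-(humidity - 0.8) * 20.0))

def condensate_regressor(temperature, humidity):
    """Continuous condensate-tendency prediction for active cells."""
    return np.maximum(humidity - 0.8, 0.0) * 1e-3

def emulate_condensation(temperature, humidity, threshold=0.5):
    # Discrete-continuous behavior: tendencies are identically zero
    # wherever the classifier deems the scheme inactive.
    active = activity_classifier(temperature, humidity) > threshold
    return np.where(active, condensate_regressor(temperature, humidity), 0.0)

def temperature_scaled_loss(pred, truth, temperature, t_ref=250.0):
    # Illustrative temperature scaling: upweight errors in cold cells so
    # low-condensate cirrus are not swamped by condensate-rich warm clouds.
    weights = np.exp((t_ref - temperature) / 30.0)
    return np.mean(weights * (pred - truth) ** 2)

T = np.full(5, 250.0)                        # temperature (K)
q = np.array([0.1, 0.5, 0.79, 0.85, 0.95])   # relative humidity
tend = emulate_condensation(T, q)            # exactly zero in inactive cells
```

The point of the gate is that it produces exact zeros where the scheme is inactive, matching the Fortran outputs, whereas a single regression network would emit small nonzero values everywhere.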
The use of machine learning (ML) for the online correction of coarse-resolution atmospheric models has proven effective in reducing biases in near-surface temperature and precipitation rate. However, this often introduces biases in the upper atmosphere, and improvements are not always reliable across ML-corrective models trained with different random seeds. Furthermore, ML corrections can feed back on the baseline physics of the atmospheric model and produce profiles that are outside the distribution of samples used in training, leading to low confidence in the predicted corrections. This study introduces the use of a novelty detector to mask the predicted corrections when the atmospheric state is deemed out-of-sample. The novelty detector is trained on profiles of temperature and specific humidity in a semi-supervised fashion using samples from the coarsened reference fine-resolution simulation. Offline, the novelty detector flags more columns as out-of-sample in simulations that are known, by simple metrics like mean bias, to drift further from the reference simulation. Without novelty detection, corrective ML leads to undesirably large climate biases for some ML random seeds but not others. Novelty detection deems about 21% of columns to be novelties in year-long simulations. The spread in the root mean square error (RMSE) of time-mean spatial patterns of surface temperature and precipitation rate across a random seed ensemble is sharply reduced when using novelty detection. In particular, the random seed with the worst RMSE is improved by up to 60% (depending on the variable) while the best seed maintains its low RMSE.
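The masking step itself can be sketched in a few lines. This is a hedged illustration, not the study's code: `mask_novel_columns`, the score convention (higher means more out-of-sample), and the `cutoff` value are all assumptions for the sketch.

```python
import numpy as np

def mask_novel_columns(corrections, novelty_scores, cutoff):
    """Zero the ML corrections in columns flagged as out-of-sample.

    corrections:    (n_columns, n_levels) predicted corrective tendencies
    novelty_scores: (n_columns,) detector scores; higher = more novel
    cutoff:         score above which a column is deemed a novelty
    """
    is_novel = novelty_scores > cutoff
    masked = corrections.copy()
    masked[is_novel, :] = 0.0  # fall back to the baseline model physics
    return masked, is_novel

corrections = np.ones((4, 3))
scores = np.array([0.1, 0.9, 0.2, 0.95])
masked, novel = mask_novel_columns(corrections, scores, cutoff=0.5)
```

Masking to zero (rather than clipping or scaling) means the model simply runs its unmodified baseline physics in any column the detector distrusts.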

Brian Henn

and 7 more

Coarse-grid weather and climate models rely heavily on parameterizations of cloud fields, and coarse-grained cloud fields from a fine-grid reference model are a natural target for a machine-learned parameterization. We machine-learn the coarsened fine-grid cloud properties as a function of the coarse-grid model state in each grid cell of NOAA’s FV3GFS global atmosphere model with 200 km grid spacing, trained using a 3 km fine-grid reference simulation with a modified version of FV3GFS. The ML outputs are the coarsened fine-grid fractional cloud cover and liquid and ice cloud condensate mixing ratios, and the inputs are the coarse-grid temperature, pressure, relative humidity, and ice cloud condensate. The predicted fields are skillful and unbiased, but somewhat under-dispersed, resulting in too many partially cloudy model columns. When the predicted fields are applied diagnostically (offline) in FV3GFS’s radiation scheme, they lead to small biases in global-mean top-of-atmosphere (TOA) and surface radiative fluxes. An unbiased global-mean TOA net radiative flux is obtained by setting to zero any predicted cloud with a grid-cell-mean cloud fraction below a threshold of 6.5%; this does not significantly degrade the ML prediction of cloud properties. The diagnostic, ML-derived radiative fluxes are far more accurate than those obtained with the existing cloud parameterization in the nudged coarse-grid model, as they leverage the accuracy of the fine-grid reference simulation’s cloud properties.
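The thresholding step that removes the excess partially cloudy columns is simple to state in code. A minimal sketch under assumed array shapes: `zero_small_clouds` is a hypothetical name, and 0.065 corresponds to the 6.5% threshold quoted above.

```python
import numpy as np

def zero_small_clouds(cloud_fraction, q_liquid, q_ice, threshold=0.065):
    """Zero predicted cloud wherever grid-cell-mean fraction < threshold."""
    keep = cloud_fraction >= threshold
    return (np.where(keep, cloud_fraction, 0.0),
            np.where(keep, q_liquid, 0.0),
            np.where(keep, q_ice, 0.0))

cf = np.array([0.02, 0.065, 0.50])   # grid-cell-mean cloud fraction
ql = np.array([1e-5, 2e-5, 3e-5])    # liquid condensate (kg/kg)
qi = np.array([1e-6, 2e-6, 3e-6])    # ice condensate (kg/kg)
cf2, ql2, qi2 = zero_small_clouds(cf, ql, qi)
```

Zeroing the condensate together with the fraction keeps the cloud fields mutually consistent when they are passed to the radiation scheme.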

Ilai Guendelman

and 9 more

Recent advances have made it possible to integrate global storm-resolving models (GSRMs) out to timescales of several years. These short simulations are sufficient for studying the characteristics and statistics of short- and small-scale phenomena; however, it is unclear what we can learn from such integrations about the large-scale climate response to perturbations. To address this question, we use the response of X-SHiELD (a GSRM) to uniform SST warming and a CO$_2$ increase in a two-year integration and compare it to similar CMIP6 experiments. Specifically, we assess the statistical meaning of two years in one model lying outside the spread of another model or model ensemble. This is of particular interest because X-SHiELD shows a distinct global-mean precipitation response to uniform warming and a distinct northern hemisphere jet-shift response to an isolated CO$_2$ increase. We use the CMIP6 models to estimate the probability that two years in one model lie more than one standard deviation away from another model's (or ensemble's) mean, given the two models' means. For example, if two years in one model are more than one standard deviation away from the other model's mean, we find that the chance that the two models' means are within one standard deviation of each other is $\sim 25\%$. We find that for some large-scale metrics there is an important base-state dependence that, when taken into account, can qualitatively change the interpretation of the results. We note that a year-to-year comparison is physically meaningful because the simulations use prescribed sea surface temperatures.
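The flavor of this statistical question can be illustrated with a forward Monte Carlo sketch. This is not the paper's calculation (which conditions on the observed separation to make the $\sim 25\%$ statement); it simply assumes Gaussian interannual variability with standard deviation `sigma` and asks how often a two-year mean falls more than one standard deviation from another model's mean.

```python
import numpy as np

rng = np.random.default_rng(42)

def prob_two_year_mean_outside(mu_a, mu_b, sigma, n_draws=100_000):
    # A two-year mean of independent years has std sigma / sqrt(2).
    two_year_means = rng.normal(mu_a, sigma / np.sqrt(2), size=n_draws)
    return np.mean(np.abs(two_year_means - mu_b) > sigma)

# Even when the two models share the same true mean, a two-year average
# still lands more than one sigma from the other model's mean at times,
# so such an excursion alone does not demonstrate a distinct response.
p_same_mean = prob_two_year_mean_outside(0.0, 0.0, 1.0)
```

With identical true means the exceedance probability is nonzero (analytically $2\,[1 - \Phi(\sqrt{2})] \approx 0.16$ for this toy setup), which is the basic reason a two-year excursion must be interpreted probabilistically.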