Rainfall-runoff models tend to degrade in performance when applied to arid basins, compared with their performance in humid basins where precipitation is plentiful relative to actual evapotranspiration. Neural networks are not exempt. Although deep learning models such as LSTMs predict streamflow in arid regions more accurately than their conceptual counterparts, performance degradation is still apparent as the aridity index increases; that is, models perform worse in more arid basins. Physically, runoff generation in arid regions requires a critical mass of precipitation to overcome competing hydrologic processes (e.g., infiltration, interception, and evaporative losses) before overland flow is triggered. Conceptually, this occurs when suitable antecedent soil moisture conditions align with suitable atmospheric and land surface energy flux conditions. This alignment causes a spontaneous shift in hydrologic phase from initial abstraction to runoff. Runoff then persists until another spontaneous alignment of conditions shifts the phase from runoff back to abstraction. We present evidence that the cause of poor model performance under these scenarios is not model structure, but the inherent sensitivity of runoff to the spontaneous synchronization of soil, atmospheric, and land surface energy conditions. Both conceptual and deep learning models exhibit these non-reciprocal phase transitions dynamically [1], but fail to calibrate correctly to these conditions because the transitions recur infrequently in the hydrograph relative to their spontaneity. Deep learning models in particular contain sufficient dynamic complexity to represent this behavior well [2], but a rethinking of model training for representing these conditions may be necessary. Finally, we will test the sensitivity of model training/calibration under hydrologic phase shifts with respect to data disinformation in these regions [3].
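To make the phase-shift mechanism concrete, the sketch below implements a toy hysteretic bucket model in Python. It is not the calibrated conceptual or LSTM models discussed above: the synthetic forcing, thresholds, storage capacity, and release coefficient are all illustrative assumptions. The aridity index is computed in the Budyko convention (PET/P), consistent with the statement that larger values indicate more arid basins. The key feature is that the switch from abstraction to runoff and the switch back use different soil moisture thresholds, so the two transitions are not mirror images of each other, i.e., they are non-reciprocal.

```python
# Toy phase-switching bucket model: a minimal sketch, not the authors' model.
# All parameter values below are assumptions chosen for illustration only.
import numpy as np

rng = np.random.default_rng(0)
n_days = 365
precip = rng.gamma(shape=0.3, scale=8.0, size=n_days)            # sparse, bursty rainfall (mm/day)
pet = 5.0 + 2.0 * np.sin(2 * np.pi * np.arange(n_days) / 365.0)  # seasonal potential ET (mm/day)

# Budyko-style aridity index: PET/P > 1 indicates a water-limited (arid) basin.
aridity_index = pet.sum() / precip.sum()

soil = 20.0          # soil moisture store (mm); assumed initial state
capacity = 100.0     # assumed storage capacity (mm)
wet_thresh = 0.6     # assumed saturation fraction that triggers the runoff phase
dry_thresh = 0.4     # assumed fraction below which the basin reverts to abstraction
phase = "abstraction"
runoff = np.zeros(n_days)

for t in range(n_days):
    # Water balance: add rainfall, remove actual ET (limited by available water).
    soil = max(soil + precip[t] - min(pet[t], soil), 0.0)
    # Non-reciprocal (hysteretic) phase transition: different thresholds govern
    # the switch into and out of the runoff phase.
    if phase == "abstraction" and soil / capacity > wet_thresh:
        phase = "runoff"
    elif phase == "runoff" and soil / capacity < dry_thresh:
        phase = "abstraction"
    if phase == "runoff":
        runoff[t] = 0.3 * soil   # assumed linear-reservoir release
        soil -= runoff[t]
    soil = min(soil, capacity)

print(f"aridity index (PET/P): {aridity_index:.2f}")
print(f"days in runoff phase: {(runoff > 0).sum()} of {n_days}")
```

Because the runoff phase is entered only when the bursty forcing happens to push soil moisture past the upper threshold, runoff events in this sketch are rare relative to the length of the record, which illustrates why a calibration or training objective averaged over the full hydrograph gives such transitions little weight.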