Semantic Segmentation with Deep Convolutional Neural Networks for
Automated Dust Detection in GOES-R Satellite Imagery
Abstract
Airborne dust, including dust storms and weaker dust traces, can have
deleterious and hazardous effects on human health, agriculture, solar
power generation, and aviation. Although Earth-observing satellites are
extremely useful for monitoring dust in visible and infrared imagery,
dust is often difficult to identify visually in single-band imagery due
to its similarities to clouds, smoke, and underlying surfaces.
Furthermore, night-time dust detection is a particularly difficult
problem, since the radiative properties of dust mimic those of the
cooling underlying surface. False-color red-green-blue (RGB) composite
imagery, specifically the EUMETSAT Dust RGB, was designed to enhance
dust detection by combining single bands and band differences into a
single composite image. Even so, dust often remains difficult to
identify in night-time imagery, even for experts. We
developed a deep-learning U-Net image segmentation model to identify
airborne dust at night, leveraging six GOES-16 bands with a focus on
infrared and water vapor channels. The U-Net architecture is an
encoder-decoder convolutional neural network that does not require
large amounts of training data, localizes and contextualizes image data
for precise segmentation, and trains quickly while delivering
high-accuracy pixel-level prediction. This presentation highlights collection
of the training database, development of the model, and preliminary
model validation. With further model development, validation, and
testing in a real-time context, probability-based dust prediction could
alert weather forecasters, emergency managers, and citizens to the
location and extent of impending dust storms.
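As an illustration of how a Dust RGB composite combines single bands and band differences, the sketch below builds one from GOES-16 ABI brightness temperatures. The band choices, scaling ranges, and gamma values follow the commonly published GOES-R adaptation of the EUMETSAT recipe, but they are assumptions here, not the authors' exact configuration:

```python
import numpy as np

def _scale(x, lo, hi, gamma=1.0):
    """Clip to [lo, hi], rescale to [0, 1], then apply gamma correction."""
    y = np.clip((x - lo) / (hi - lo), 0.0, 1.0)
    return y ** (1.0 / gamma)

def dust_rgb(bt_c11, bt_c13, bt_c14, bt_c15):
    """Build a Dust RGB composite from GOES-16 ABI brightness
    temperatures (kelvin): C11 (8.4 um), C13 (10.3 um),
    C14 (11.2 um), C15 (12.3 um).

    Assumed component recipe (after the EUMETSAT Dust RGB as
    adapted for ABI):
      R = C15 - C13, scaled over -6.7 .. 2.6 K
      G = C14 - C11, scaled over -0.5 .. 20.0 K, gamma 2.5
      B = C13,       scaled over 261.2 .. 288.7 K
    """
    r = _scale(bt_c15 - bt_c13, -6.7, 2.6)
    g = _scale(bt_c14 - bt_c11, -0.5, 20.0, gamma=2.5)
    b = _scale(bt_c13, 261.2, 288.7)
    return np.stack([r, g, b], axis=-1)  # H x W x 3, values in [0, 1]

# Tiny synthetic scene of uniform brightness temperatures (illustrative
# values only).
bt = {k: np.full((2, 2), v) for k, v in
      [("c11", 285.0), ("c13", 290.0), ("c14", 289.0), ("c15", 287.0)]}
rgb = dust_rgb(bt["c11"], bt["c13"], bt["c14"], bt["c15"])
print(rgb.shape)  # (2, 2, 3)
```

In this formulation, the red component (the 12.3 minus 10.3 micron split-window difference) is the primary night-time dust signal, which is why dust typically appears magenta-to-pink in the composite.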