Fluorescence microscopy (FM) is an imaging technique with many important applications in the biomedical sciences. After FM images are acquired, segmentation is often the first step in their quantitative analysis. Although deep neural networks (DNNs) have become the state-of-the-art tools for image segmentation, their performance is known to collapse on natural images under certain corruptions or adversarial attacks. This poses serious risks to their deployment in real-world applications. Although various assays have been developed to benchmark the robustness of DNN models in semantic segmentation of natural images, such assays remain lacking for FM images. As a result, the robustness of DNN models in semantic segmentation of FM images remains to be characterized. In this study, we have developed an assay to address this deficiency. At the core of the assay is a method we have developed to synthesize realistic FM images with precisely controlled forms and levels of degradation. Using this assay, we examine the robustness of DNN models against corruptions and adversarial attacks on FM images. We find that models with good robustness on natural images may perform poorly on FM images. We also find new robustness properties of DNN models and new connections between their corruption robustness and adversarial robustness. Based on a comprehensive comparison of eight representative models, we make specific recommendations on which models to choose and how to design robust models for segmentation of FM images.