Spiking neural networks (SNNs) are emerging as a promising alternative to traditional artificial neural networks (ANNs), offering advantages such as lower power consumption and biological interpretability. Despite recent progress in training SNNs and in their performance on computer vision tasks, their robustness to corrupted images in real-world scenarios remains an open question. To address this problem, we adopt the CIFAR10-C and IMAGENET-C datasets from the ANN field as benchmarks and further propose novel methods to improve the corruption robustness of SNNs. Specifically, we propose a retina-like coding that simulates dynamic human visual perception, providing a foundation for extracting robust features from temporally varied input. In addition, we introduce a brain-inspired memory-based spiking neuron (MSN) that integrates memory units to learn robust features, along with a parallel version (pMSN) that facilitates parallel computing and achieves superior performance. Experimental results demonstrate that our method improves SNN recognition accuracy and robustness, achieving average accuracies of 87.04% on CIFAR10-C and 40.37% on IMAGENET-C, surpassing the state-of-the-art SNN method's 85.95% and 39.11%, respectively. These findings highlight the potential of our approach to enhance the robustness of SNNs in real-world scenarios. Our code will be released.
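As a rough intuition for how a memory unit might augment a spiking neuron, the sketch below shows a standard leaky integrate-and-fire (LIF) update extended with a decaying memory trace that feeds back into the membrane potential. The abstract does not specify the MSN's actual formulation, so the state variables, parameters (`tau`, `alpha`, `beta`, `v_th`), and update rule here are purely illustrative assumptions, not the paper's method.

```python
import numpy as np

def msn_step(v, m, x, tau=2.0, alpha=0.9, beta=0.5, v_th=1.0):
    """One step of a hypothetical memory-augmented LIF neuron.

    v    : membrane potential (array)
    m    : memory trace, a running average of past spikes (array)
    x    : input current at this time step (array)
    The memory trace feeds back into the membrane update, letting
    past activity influence current integration (illustrative only).
    """
    v = v + (x + beta * m - v) / tau       # leaky integration plus memory feedback
    s = (v >= v_th).astype(np.float32)     # emit a spike where threshold is crossed
    v = v * (1.0 - s)                      # hard reset of spiking neurons
    m = alpha * m + (1.0 - alpha) * s      # decay memory and mix in new spikes
    return v, m, s

# Toy usage: drive one neuron with a constant current for 10 steps.
v, m = np.zeros(1), np.zeros(1)
spike_count = 0
for _ in range(10):
    v, m, s = msn_step(v, m, np.array([1.5]))
    spike_count += int(s.sum())
```

In this sketch the memory trace plays a role loosely analogous to the paper's memory units: it carries information across time steps, which is one plausible route to features that remain stable under input corruption. A parallel variant (as pMSN suggests) would restructure such a recurrence so that all time steps can be computed without a sequential loop.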