Despite recent progress, the susceptibility of machine learning models to adversarial examples remains a challenge, which calls for rethinking the defense strategy. In this paper, we investigate the cause-effect link between adversarial examples and the out-of-distribution (OOD) problem. To that end, we propose Out2In, an OOD generalization method that is resilient not only to adversarial but also to natural distribution shifts. Guided by the intuition of mapping OOD inputs back to the in-distribution domain, Out2In leverages image-to-image translation to translate OOD inputs to the data distribution on which the model is trained and evaluated. First, we experimentally confirm that the adversarial examples problem is related to the broader OOD generalization problem. Then, through extensive experiments on three benchmark image datasets (MNIST, CIFAR10, and ImageNet), we show that Out2In consistently improves robustness to OOD adversarial inputs and outperforms state-of-the-art defenses by a significant margin, while fully preserving accuracy on benign (in-distribution) data. Furthermore, it generalizes to naturally OOD inputs such as darker or sharper images.
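To make the OOD-to-in-distribution mapping idea concrete, the following is a minimal conceptual sketch, not the paper's actual architecture or training procedure: a hypothetical image-to-image translation network (here called Translator) is applied as a preprocessing step, so a possibly shifted or adversarially perturbed input is first translated toward the training distribution and only then passed to the unchanged classifier. The names Translator, classifier, and defended_predict are illustrative assumptions.

```python
# Conceptual sketch (not the authors' implementation): an input-preprocessing
# defense that maps suspected OOD inputs back toward the training distribution
# with an image-to-image translation network before classification.
import torch
import torch.nn as nn


class Translator(nn.Module):
    """Hypothetical OOD-to-in-distribution translation network, e.g. a small
    encoder-decoder trained to reconstruct in-distribution images from
    shifted (adversarial, darker, sharper, ...) versions of them."""

    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 3, 3, padding=1), nn.Sigmoid(),  # map back to [0, 1]
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)


def defended_predict(classifier: nn.Module,
                     translator: Translator,
                     x: torch.Tensor) -> torch.Tensor:
    """Translate the (possibly OOD) input to the in-distribution domain,
    then classify the translated image with the unmodified classifier."""
    with torch.no_grad():
        x_in_dist = translator(x)        # OOD -> in-distribution mapping
        logits = classifier(x_in_dist)   # standard prediction on translated input
    return logits.argmax(dim=1)
```

Because the classifier itself is untouched and only sees translated inputs, a well-trained translator can leave accuracy on already in-distribution data unchanged while absorbing adversarial and natural distribution shifts.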