This paper explores sign language, a natural mode of communication for the deaf community that nonetheless remains challenging for hearing people to learn, creating communication barriers between the deaf and the hearing. This work addresses the issue by assessing, for the first time, the performance of the state-of-the-art convolutional model ConvNeXt on the task of sign language recognition. The experiments yield compelling results, with accuracies surpassing 99% and training times that rival those of advanced Vision Transformers (ViTs). The models are rigorously evaluated on the publicly available Sign-Language-MNIST dataset, an established benchmark for sign language research. The generalizability of ConvNeXt and ViT is further compared on the publicly available Indian Sign Language dataset, where ViT generalizes better by approximately 3%. The findings of this study contribute to the broader goal of improving communication for the deaf community, while also highlighting the capability of carefully constructed, lightweight convolutional models that have recently fallen out of favour.
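
To make the experimental setup concrete, the following is a minimal sketch (not the authors' released code) of how a pretrained ConvNeXt-Tiny from torchvision might be adapted for Sign-Language-MNIST. The model variant, preprocessing choices, and hyperparameters here are illustrative assumptions; the paper's actual configuration may differ.

```python
# A minimal sketch of adapting an ImageNet-pretrained ConvNeXt-Tiny for
# 24-class Sign-Language-MNIST classification. Assumes the 28x28 grayscale
# frames are resized to 224x224 and replicated to 3 channels so they match
# the backbone's expected input; these are illustrative choices, not the
# paper's confirmed configuration.
import torch.nn as nn
from torchvision import models, transforms

# Load an ImageNet-pretrained ConvNeXt-Tiny backbone.
model = models.convnext_tiny(weights=models.ConvNeXt_Tiny_Weights.DEFAULT)

# Sign-Language-MNIST covers 24 static letters (J and Z are excluded because
# signing them requires motion), so replace the 1000-way ImageNet head.
in_features = model.classifier[2].in_features
model.classifier[2] = nn.Linear(in_features, 24)

# Preprocessing: upsample the grayscale frames, expand to 3 channels, and
# normalize with the ImageNet statistics the backbone was trained on.
preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.Grayscale(num_output_channels=3),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])
```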