Edge-case, or long-tail, detection in computer vision presents a significant challenge due to inherent data imbalance and the scarcity of rare-event samples. Traditional convolutional neural networks (CNNs) often struggle to generalize to such infrequent instances. In this paper, we propose a novel Transformer-based framework that addresses the long-tail distribution problem by leveraging attention mechanisms to focus on subtle and rare patterns in images. Our approach incorporates a specialized loss function and data augmentation techniques to further enhance performance on edge cases. We evaluate our method on several benchmark datasets and demonstrate that it outperforms state-of-the-art models in detecting rare events, highlighting the potential of Transformers in tackling the long-tail problem in computer vision.
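The abstract does not specify the form of the specialized loss function. As a purely illustrative sketch, one common family of losses for long-tail recognition combines inverse-frequency class weighting with a focal-loss-style focusing term; the function name, weighting scheme, and `gamma` value below are assumptions, not the paper's actual method.

```python
import torch
import torch.nn.functional as F

def long_tail_focal_loss(logits, targets, class_counts, gamma=2.0):
    """Cross-entropy re-weighted toward rare classes and hard examples (illustrative sketch).

    logits:       (batch, num_classes) raw scores from a classifier or detector head
    targets:      (batch,) integer class labels
    class_counts: (num_classes,) per-class frequency in the training set
    gamma:        focusing parameter; larger values down-weight easy examples
    """
    # Inverse-frequency weights so tail classes contribute more to the loss
    weights = class_counts.sum() / (len(class_counts) * class_counts.float())

    # Per-sample cross-entropy and the model's probability for the true class
    ce = F.cross_entropy(logits, targets, reduction="none")
    p_t = torch.exp(-ce)

    # Apply the class weight of each sample's true label and the focal term
    w = weights[targets]
    return (w * (1.0 - p_t) ** gamma * ce).mean()

# Example usage with random data
logits = torch.randn(8, 10)
targets = torch.randint(0, 10, (8,))
class_counts = torch.randint(1, 1000, (10,))
loss = long_tail_focal_loss(logits, targets, class_counts)
```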