Although deep learning achieves state-of-the-art performance in tasks such as vulnerability detection and classification, it has drawbacks. A significant disadvantage of deep learning methods is their inexplicability. Many deep learning models, especially sequential models such as long short-term memory (LSTM) and gated recurrent unit (GRU) networks, operate as black boxes. The outputs of these black-box models cannot be interpreted by security analysts and software developers. This inexplicability hinders the adoption of deep learning models in enterprises. It also prevents knowledgeable system experts from identifying and removing spurious correlations that a model may have learned. Thus, explainable deep learning models are highly desirable for promoting acceptance in real-world systems. Another major drawback of deep learning is its susceptibility to adversarial machine learning. Cyber attackers can craft adversarial inputs to fool deep learning-based cyber-attack detectors. Prior research has shown that cyber-defenders can use explainable artificial intelligence to defend against such adversarial attacks. Therefore, adding explainability to deep learning models for cyber-security is highly desirable. This article proposes a method that enhances the explainability of vulnerability detection based on system-call sequence analysis. Our method can potentially pinpoint the precise system calls that triggered the vulnerability. This insight is valuable to security analysts, who can then evaluate the sequence of system calls more efficiently.
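To make the idea of pinpointing influential system calls concrete, the sketch below shows one possible attribution approach: gradient-based saliency over an LSTM classifier that consumes encoded system-call sequences. This is a minimal illustration under our own assumptions, not the method proposed in this article; the model architecture, hyperparameters, and the choice of gradient saliency (rather than, say, attention weights or SHAP) are placeholders for exposition.

```python
import torch
import torch.nn as nn

class SyscallLSTM(nn.Module):
    """Toy LSTM classifier over integer-encoded system-call sequences.
    Vocabulary size and layer widths are illustrative assumptions."""
    def __init__(self, vocab_size=400, embed_dim=64, hidden_dim=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.classifier = nn.Linear(hidden_dim, 2)  # benign vs. vulnerable

    def forward(self, syscall_ids):
        emb = self.embed(syscall_ids)            # (batch, seq_len, embed_dim)
        emb.retain_grad()                        # keep gradients for saliency
        out, _ = self.lstm(emb)
        logits = self.classifier(out[:, -1, :])  # classify from last hidden state
        return logits, emb

def syscall_saliency(model, syscall_ids):
    """Per-position importance scores for a single system-call sequence,
    taken as the gradient magnitude of the 'vulnerable' logit w.r.t. each
    embedded system call."""
    model.eval()
    logits, emb = model(syscall_ids)
    logits[0, 1].backward()                      # backprop the vulnerable-class logit
    return emb.grad[0].norm(dim=-1)              # one score per sequence position

# Usage with a toy, hypothetical encoded system-call trace.
model = SyscallLSTM()
sequence = torch.tensor([[3, 57, 57, 102, 21, 9]])
scores = syscall_saliency(model, sequence)
top = scores.argmax().item()
print(f"Most influential position: {top}, score: {scores[top].item():.3f}")
```

In a setting like this, positions with the highest scores would be the candidate system calls to surface to the analyst, reducing the portion of the trace that must be reviewed by hand.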