XAI (eXplainable AI) has become a pivotal research area with the advancement of deep learning (DL) technologies and applications. Post-hoc explanation methods interpret deep learning predictions by uncovering the significance of input features, and visualization tools built on these methods can contribute to a deeper understanding of AI model reasoning. In this paper, we survey a broad spectrum of post-hoc explanation methods and the visual analytics work based on them. First, we categorize the computational methods into four main types: perturbation-based, gradient-based, decomposition-based, and concept-based. While the first three attribute the model's output to specific regions of the input image, concept-based methods provide global explanations by mapping human-understandable concepts to high-level features. We then examine the methodologies, features, strengths, and limitations of each approach. Moreover, we review existing visualization work built on these computational methods. Finally, we discuss further research challenges and opportunities for XAI visualization with post-hoc explanations.