Convolutional neural networks (CNNs) have achieved great success in many computer vision tasks, and attention mechanisms have recently proven critical to boosting their performance. In this work, we investigate effective attention mechanisms and propose a novel network unit, which we call the "Graph Channel Attention" (GCA) block, that dynamically encourages communication across channels by explicitly modelling cross-channel interactions. The unit realizes graph channel attention in two steps: a position prior module first captures position information from different spatial locations, and a message exchange module then enables channels to exchange information. The proposed GCA block is efficient and can be easily plugged into existing deep neural networks. Extensive experiments show that the proposed method attains superior performance over other lightweight attention mechanisms on image classification, object detection, and instance segmentation. In particular, it achieves a 4.12% top-1 accuracy improvement on ImageNet classification over a ResNet50 backbone. Furthermore, detailed analyses show that the proposed method significantly reduces redundancy in features and learns more diverse feature representations.
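To make the two-step design concrete, the following is a minimal PyTorch sketch of such a block. The exact GCA architecture is not specified in this section, so the choice of pooling for the position prior, the descriptor-similarity attention used for message exchange, and all names and dimensions below are illustrative assumptions rather than the authors' implementation.

```python
# A hypothetical sketch of a two-step graph channel attention block:
# step 1 summarizes each channel's spatial layout into a descriptor,
# step 2 treats channels as graph nodes that exchange messages.
# All design choices here are assumptions, not the paper's method.
import torch
import torch.nn as nn


class GCABlockSketch(nn.Module):
    def __init__(self, channels: int, grid: int = 4):
        super().__init__()
        # Step 1 (assumed): a position prior that keeps a coarse spatial
        # layout, giving every channel a grid*grid positional descriptor.
        self.position_prior = nn.AdaptiveAvgPool2d(grid)
        d = grid * grid
        # Step 2 (assumed): learned projections for message exchange across
        # channels and for mapping messages to per-channel weights.
        self.proj = nn.Linear(d, d)
        self.to_weight = nn.Linear(d, 1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, _, _ = x.shape
        # (B, C, d): one positional descriptor per channel (graph node).
        desc = self.position_prior(x).flatten(2)
        # Dense channel graph: affinity from descriptor similarity.
        adj = torch.softmax(
            desc @ desc.transpose(1, 2) / desc.size(-1) ** 0.5, dim=-1
        )  # (B, C, C)
        # Propagate messages between channels over the affinity graph.
        msg = adj @ self.proj(desc)  # (B, C, d)
        # Collapse messages into per-channel attention weights.
        weights = torch.sigmoid(self.to_weight(msg)).view(b, c, 1, 1)
        return x * weights  # re-weight the input channels


if __name__ == "__main__":
    block = GCABlockSketch(channels=256)
    y = block(torch.randn(2, 256, 14, 14))
    print(y.shape)  # torch.Size([2, 256, 14, 14])
```

Because the block maps a (B, C, H, W) tensor to one of the same shape, it can be inserted after any convolutional stage of an existing backbone, which is consistent with the plug-in property claimed above.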