Graph Convolutional Networks (GCNs) draw inspiration from recent advances in computer vision: they stack layers of first-order filters followed by a nonlinear activation function to learn graph representations. Although GCNs have been shown to boost the performance of many network analysis tasks, they still face substantial challenges in learning from Heterogeneous Information Networks (HINs), where relations play a decisive role in knowledge reasoning. In addition, every entity in a HIN may have multi-aspect representations, and a filter learned with respect to one aspect is not guaranteed to behave similarly when applied to another. We address these challenges by proposing the Aspect-Aware Graph Attention Network (AGAT), a model that extends GCNs with learnable weights to incorporate entity and relational information. Instead of learning a single best representation for each entity, AGAT learns contextual entity representations, which are more useful in downstream tasks. Experiments on link prediction and semi-supervised classification verify the effectiveness of our algorithm.
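To make the idea of relation-aware attention over a heterogeneous graph concrete, the sketch below shows a minimal GAT-style layer with per-relation projection matrices and attention vectors. It is an illustrative approximation, not the authors' exact AGAT formulation: the function name, argument shapes, and the choice of LeakyReLU scoring and tanh activation are all assumptions made for this example.

```python
import numpy as np

def softmax(x):
    # Numerically stable softmax over a 1-D score vector.
    e = np.exp(x - np.max(x))
    return e / e.sum()

def relation_aware_attention(h, edges, W_rel, a_rel):
    """Hypothetical relation-aware attention layer (illustrative only).

    h      : (n, d) entity feature matrix
    edges  : iterable of (src, dst, rel) triples, including self-loops
    W_rel  : dict mapping relation -> (d, d_out) projection matrix
    a_rel  : dict mapping relation -> (2 * d_out,) attention vector
    Returns an (n, d_out) matrix of contextual entity representations.
    """
    n = h.shape[0]
    d_out = next(iter(W_rel.values())).shape[1]
    # Collect relation-projected messages for each destination node.
    messages = {i: [] for i in range(n)}
    for s, t, r in edges:
        hs = h[s] @ W_rel[r]  # projected source features
        ht = h[t] @ W_rel[r]  # projected target features
        # GAT-style attention score with a LeakyReLU nonlinearity.
        score = np.concatenate([ht, hs]) @ a_rel[r]
        score = score if score > 0 else 0.2 * score
        messages[t].append((score, hs))
    out = np.zeros((n, d_out))
    for t, msgs in messages.items():
        if not msgs:
            continue
        # Normalize scores per node so attention weights sum to 1.
        alphas = softmax(np.array([s for s, _ in msgs]))
        out[t] = sum(a * m for a, (_, m) in zip(alphas, msgs))
    return np.tanh(out)  # nonlinear activation after aggregation
```

Because attention weights are computed per relation and per neighborhood, the same entity receives a different aggregated representation depending on which relations connect it to its neighbors, which is the sense in which the representations are contextual.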