Hate speech targets social groups defined by attributes such as race and gender, and poses a significant threat to social harmony. Researchers are increasingly motivated to devise efficient techniques for improving automatic hate speech detection on social media platforms. However, current models are usually evaluated without considering hate speech targets, and they fail when the targets are unseen in the training data. In this study, we examine target (domain) shifts of hate speech and propose \emph{Tad}, an adaptation framework for neural models that adopts domain-aware networks to improve cross-domain hate speech detection. Tad features a hate knowledge lexicon infusion network, a domain-specific network, and a weighting network. We demonstrate that incorporating Tad improves the performance of leading neural models in hate speech detection when they are tested on unseen domains. Specifically, Tad yields improvements of up to $8.1\%$ and an average of $2.4\%$ in macro F1-score. Moreover, we identify data quality and quantity as vital factors for closing the performance gap between models tested on seen and unseen domains. Our results reveal that excessive knowledge infusion may degrade performance, as observed for the \textsl{Religion} domain. In addition, we find trade-offs in cross-domain hate speech detection; for example, a weighted loss for heavily imbalanced data generally improves performance.
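To make the three-component design concrete, the sketch below shows one way the lexicon infusion, domain-specific, and weighting networks could compose on top of a pooled encoder representation. This is a minimal illustrative assumption, not the paper's actual implementation: all module names (\texttt{TadAdapter}, \texttt{lexicon\_net}, \texttt{domain\_nets}, \texttt{weighting\_net}), dimensions, and the fusion scheme are hypothetical.

\begin{verbatim}
# Minimal sketch of a Tad-style adapter (assumed design, PyTorch).
import torch
import torch.nn as nn

class TadAdapter(nn.Module):
    """Hypothetical adapter: lexicon-infused features plus a mixture of
    domain-specific experts, combined by a learned weighting network."""

    def __init__(self, hidden: int = 768, n_domains: int = 4, n_classes: int = 2):
        super().__init__()
        # Hate knowledge lexicon infusion: projects a per-example lexicon
        # match signal (e.g., count of lexicon hits) into the hidden space.
        self.lexicon_net = nn.Sequential(nn.Linear(1, hidden), nn.ReLU())
        # Domain-specific network: one expert projection per known target
        # domain (e.g., race, gender, religion, ...).
        self.domain_nets = nn.ModuleList(
            [nn.Linear(hidden, hidden) for _ in range(n_domains)]
        )
        # Weighting network: scores each domain expert for a given input.
        self.weighting_net = nn.Linear(hidden, n_domains)
        self.classifier = nn.Linear(2 * hidden, n_classes)

    def forward(self, enc: torch.Tensor, lexicon_feats: torch.Tensor) -> torch.Tensor:
        # enc: (batch, hidden) pooled encoder output
        # lexicon_feats: (batch, 1) lexicon match signal
        lex = self.lexicon_net(lexicon_feats)
        experts = torch.stack([net(enc) for net in self.domain_nets], dim=1)
        weights = torch.softmax(self.weighting_net(enc), dim=-1)
        domain_repr = (weights.unsqueeze(-1) * experts).sum(dim=1)
        return self.classifier(torch.cat([domain_repr + enc, lex], dim=-1))

model = TadAdapter()
enc = torch.randn(8, 768)           # pooled outputs from any neural encoder
lexicon_feats = torch.randn(8, 1)   # per-example lexicon signal
logits = model(enc, lexicon_feats)  # (8, 2) hate / non-hate scores
\end{verbatim}

The soft weighting over domain experts is what would let an encoder generalize to unseen targets: at test time the weighting network interpolates among the domains seen during training rather than committing to a single one.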