Producing the large, extensively annotated datasets that object detection requires is tremendously costly. In this paper, we present a framework that effectively combines a GAN with knowledge distillation to train object detectors from lightly annotated data. Given a fully annotated source domain, a small annotated portion (20%) of the target domain, and a large unannotated target domain, the proposed method uses a GAN to generate synthetic target-domain images and creates pseudo-labels for them with a teacher model. Training an object detector with combined detection and distillation losses yielded an overall mean average precision (mAP) of 60.2% on the target domain, outperforming the baseline trained with only 20% of the annotations (51.8%) and closing the gap to the 100%-annotation baseline (62.8%). Source code is available upon request.
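The training objective described above can be sketched as follows; the exact form and the weighting term \(\lambda\) are assumptions, as the abstract does not specify them:

```latex
\mathcal{L}_{\text{total}}
  = \underbrace{\mathcal{L}_{\text{det}}\bigl(f_{\theta}(x),\, y\bigr)}_{\text{detection loss on annotated data}}
  + \lambda \,
    \underbrace{\mathcal{L}_{\text{distill}}\bigl(f_{\theta}(\tilde{x}),\, f_{T}(\tilde{x})\bigr)}_{\text{distillation loss on GAN-generated images}}
```

Here \(f_{\theta}\) is the student detector being trained, \(f_{T}\) is the teacher model that produces pseudo-labels, \((x, y)\) is an annotated image-label pair, and \(\tilde{x}\) is a synthetic target-domain image generated by the GAN.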