Autonomous rehabilitation support solutions, such as virtual coaches, should provide real-time feedback to improve motor functions and maintain patient engagement. However, collecting fully annotated datasets for real-time exercise assessment is time-consuming and costly, posing a barrier to evaluating proposed methods. In this work, we present a novel framework that learns a frame-level classifier for real-time video assessment of compensatory motions in stroke rehabilitation exercises using weakly annotated videos. We consider three approaches: 1) a baseline approach that trains a frame-level classifier on a source dataset, 2) a transfer learning approach that uses the target dataset's video-level labels together with parameters learned from the frame-level-labeled source dataset, and 3) a semi-supervised approach that leverages the target dataset's video-level labels and a small set of frame-level labels. The goal is to generalize to a weakly labeled target dataset containing new exercises and patients. To validate the approach, we use two datasets labeled for compensatory motions: TULE, an existing dataset of 15 post-stroke patients and three exercises with video- and frame-level labels, and SERE, a new dataset of 20 post-stroke patients and five exercises, created by the authors, with video-level labels and a small amount of frame-level labels. We show that a frame-level classifier trained on TULE does not generalize well to SERE (F1 = 72.87%), whereas our semi-supervised and transfer learning approaches achieve F1 = 78.93% and F1 = 80.47%, respectively. Thus, the proposed approach can simplify the customization of virtual coaches to new patients and exercises with low data annotation effort.
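To make the weak-supervision setup concrete, the sketch below shows one common way such a frame-level classifier can be trained from video-level labels plus an optional handful of frame labels. It is a minimal illustration, not the paper's exact method: the per-frame feature dimension, the MLP classifier, and the max-pooling aggregation of frame scores into a video-level prediction are all assumptions made here for clarity.

```python
# Minimal sketch (assumptions, not the paper's method): a frame-level classifier
# trained with weak video-level labels plus a small set of frame-level labels.
import torch
import torch.nn as nn

class FrameClassifier(nn.Module):
    """Scores each frame independently: (T, D) features -> (T,) compensation logits."""
    def __init__(self, feat_dim: int = 34, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(feat_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, frames: torch.Tensor) -> torch.Tensor:
        return self.net(frames).squeeze(-1)  # per-frame logits

def weakly_supervised_loss(frame_logits, video_label, frame_labels=None):
    """Video-level loss via max-pooled frame logits; adds a frame-level BCE term
    when a few frame labels are available (the semi-supervised case)."""
    bce = nn.functional.binary_cross_entropy_with_logits
    video_logit = frame_logits.max()              # video is positive if any frame is
    loss = bce(video_logit, video_label)
    if frame_labels is not None:                  # NaN marks unlabeled frames
        mask = ~torch.isnan(frame_labels)
        if mask.any():
            loss = loss + bce(frame_logits[mask], frame_labels[mask])
    return loss

# Toy usage: one 50-frame video with a positive video-level label and 5 labeled frames.
model = FrameClassifier()
frames = torch.randn(50, 34)                      # e.g. per-frame pose features (assumed)
frame_labels = torch.full((50,), float("nan"))
frame_labels[10:15] = 1.0
loss = weakly_supervised_loss(model(frames), torch.tensor(1.0), frame_labels)
loss.backward()
```

In a transfer-learning variant of this sketch, the classifier would be initialized with parameters learned on the frame-level-labeled source dataset before fine-tuning with the video-level loss on the target dataset.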