This paper introduces a semantic communications architecture focused on the transmission of American Sign Language (ASL). First, a novel system model for image-based semantic communications is presented, which utilizes a variant of the quadrature amplitude modulation (QAM) scheme, termed 24-QAM. This modulation scheme is derived from the conventional 32-QAM constellation by removing 8 peripheral symbols and is shown to attain superior error performance in ASL applications. Additionally, a semantic encoder based on a convolutional neural network (CNN) that effectively encodes the ASL alphabet is presented. An original dataset is created by superimposing red-green-blue (RGB) landmarks and key-points onto the captured images, thereby enhancing the representation of hand posture. Finally, the training, testing, and communication performance of the proposed system is quantified through numerical results that highlight the achievable gains and prompt insightful discussions.
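To make the constellation construction concrete, the following is a minimal sketch of the described 24-QAM derivation. It builds the standard 32-QAM cross constellation (a 6x6 grid minus the 4 corner points) and then removes 8 peripheral symbols; since the abstract does not specify which 8 symbols are discarded, the sketch assumes they are the 8 highest-energy points, and the normalization step is likewise an assumption, not the paper's stated procedure.

```python
import numpy as np

# Amplitude levels of the underlying 6x6 grid.
levels = np.array([-5, -3, -1, 1, 3, 5])
grid = np.array([complex(i, q) for i in levels for q in levels])

# Standard 32-QAM cross: discard the 4 corner points (+-5, +-5).
qam32 = grid[np.abs(grid.real * grid.imag) != 25]
assert qam32.size == 32

# Assumed 24-QAM: drop the 8 remaining highest-energy (peripheral)
# symbols, i.e. the points (+-3, +-5) and (+-5, +-3) with |s|^2 = 34.
energy = np.abs(qam32) ** 2
qam24 = qam32[np.argsort(energy)[:24]]
assert qam24.size == 24

# Normalize to unit average symbol energy before mapping bits to symbols.
qam24 /= np.sqrt(np.mean(np.abs(qam24) ** 2))
print(qam24)
```

Pruning the outermost points in this way shrinks the peak and average symbol energy of the constellation, which is consistent with the error-performance gains the abstract attributes to 24-QAM; the exact symbol selection and bit-to-symbol mapping used in the paper may differ.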