Sign language is a means of communication between the deaf community and hearing people that uses hand gestures, facial expressions, and body language. It has the same level of complexity as spoken language, but it does not follow the sentence structure of English. Sign language motions are made up of a range of distinct hand and finger articulations that are occasionally synchronized with movements of the head, face, and body. Existing sign language recognition systems are mainly camera-based and suffer from fundamental limitations: sensitivity to poor lighting conditions, training challenges with long video sequences, and serious privacy concerns. This study presents a contactless and privacy-preserving British Sign Language (BSL) recognition system using radar and deep learning models, namely InceptionV3, VGG16, and VGG19. The six most common emotions are considered: confused, depressed, happy, hate, lonely, and sad. The collected radar data are represented as spectrograms, from which the deep learning models (InceptionV3, VGG19, and VGG16) extract spatiotemporal features. Finally, the BSL emotion signs are identified by classifying the spectrograms into the six considered classes. The simulation results demonstrate that a maximum classification accuracy of 93.33% is obtained using VGG16.
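As a rough illustration of the classification stage described above, the sketch below fine-tunes an ImageNet-pretrained VGG16 backbone on spectrogram images for the six emotion classes. All file paths, image sizes, and training hyperparameters are illustrative assumptions, not details taken from the study.

```python
# Minimal sketch: six-class spectrogram classification with a VGG16 backbone.
# Assumes spectrograms are exported as 224x224 RGB images arranged in
# per-class folders, e.g. data/spectrograms/<emotion>/*.png (assumed layout).
import tensorflow as tf

EMOTIONS = ["confused", "depressed", "happy", "hate", "lonely", "sad"]

# Load labeled spectrogram images from the assumed directory tree.
train_ds = tf.keras.utils.image_dataset_from_directory(
    "data/spectrograms",           # hypothetical path
    image_size=(224, 224),
    batch_size=16,
    label_mode="categorical",
    class_names=EMOTIONS,
)

# VGG16 backbone pre-trained on ImageNet, frozen and used as a
# feature extractor over the spectrogram images.
base = tf.keras.applications.VGG16(
    include_top=False, weights="imagenet", input_shape=(224, 224, 3)
)
base.trainable = False

# Small classification head mapping extracted features to the six classes.
inputs = tf.keras.Input(shape=(224, 224, 3))
x = tf.keras.applications.vgg16.preprocess_input(inputs)
x = base(x, training=False)
x = tf.keras.layers.GlobalAveragePooling2D()(x)
x = tf.keras.layers.Dense(256, activation="relu")(x)
outputs = tf.keras.layers.Dense(len(EMOTIONS), activation="softmax")(x)
model = tf.keras.Model(inputs, outputs)

model.compile(
    optimizer="adam",
    loss="categorical_crossentropy",
    metrics=["accuracy"],
)
model.fit(train_ds, epochs=10)
```

Freezing the pretrained backbone and training only a small head is a common choice when the radar dataset is modest in size, as it reduces the risk of overfitting; swapping `VGG16` for `VGG19` or `InceptionV3` (with its 299x299 input size) would follow the same pattern.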