Over the last decade, deep learning applications in biomedical research have grown rapidly, often outperforming earlier machine learning approaches across a variety of tasks. However, training deep learning models for biomedical applications requires large amounts of expert-annotated data, whose collection is often prohibitively time-consuming and costly. Self-Supervised Learning (SSL) has emerged as a prominent solution to this problem, as it enables learning powerful representations from vast unlabeled datasets by deriving supervisory signals directly from the data itself. The large number of recent works applying the self-supervised learning paradigm to the analysis of biomedical signals (biosignals) can make it difficult for researchers to form a complete picture of the current state of research. This paper therefore aims to outline and clarify the state of the art in the domain. The article: briefly summarizes the nature and acquisition modalities of the main biosignals; introduces the self-supervised learning method, focusing on the different pretraining strategies; provides a concise overview of the works employing SSL for the analysis of different types of biosignals; and offers an overall analysis of critical aspects to consider when applying SSL to biosignals, highlighting current open challenges. The analysis of the scientific literature underscores the importance of SSL, confirming its potential to improve model performance and robustness and to promote the integration of deep learning into clinical tasks.
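To make the core idea concrete, the following minimal sketch (not from the surveyed works; function names and parameters are illustrative) shows one common SSL pretext task for time series: masked-sample reconstruction, where the supervisory signal is produced from the unlabeled data itself rather than from expert annotations.

```python
import numpy as np

def make_pretext_pairs(signal, mask_frac=0.3, seed=0):
    """Build a masked-reconstruction pretext task from an unlabeled signal.

    The model input is the signal with a random fraction of samples zeroed
    out, and the training target is the original signal itself -- no human
    labels are needed, which is the essence of self-supervised pretraining.
    """
    rng = np.random.default_rng(seed)
    mask = rng.random(signal.shape) < mask_frac   # positions to hide
    masked = signal.copy()
    masked[mask] = 0.0                            # corrupt the input
    return masked, signal, mask                   # (input, target, mask)

# Unlabeled "biosignal": a noisy sine wave standing in for, e.g., an ECG trace.
t = np.linspace(0.0, 1.0, 500)
x = np.sin(2 * np.pi * 5 * t) + 0.1 * np.random.default_rng(1).normal(size=t.size)

inp, target, mask = make_pretext_pairs(x)
```

A model pretrained to reconstruct `target` from `inp` learns representations of the signal's structure that can later be fine-tuned on a small expert-labeled dataset for a downstream clinical task.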