Radars are expected to become the main sensors in a variety of civilian applications, ranging from healthcare monitoring to autonomous driving. Their success is mainly due to the availability of low-cost integrated devices, equipped with compact antenna arrays, and of computationally efficient signal processing techniques. An increasingly important role in the field of radar signal processing is played by machine learning and deep learning techniques. Their use was first considered in human gesture and motion recognition, and in various healthcare applications. More recently, their exploitation in object detection and localization has also been investigated. The research work accomplished in these areas has raised various technical problems that need to be carefully addressed before the above-mentioned techniques can be adopted in real-world radar systems. In this manuscript, a comprehensive overview of the machine learning and deep learning techniques currently being considered for use in radar systems is provided. Moreover, some relevant open problems and current trends in this research area are analysed. Finally, various numerical results, based on both synthetically generated and experimental datasets, and referring to two different applications, are illustrated; these allow readers to assess the efficacy of specific methods and to compare them in terms of accuracy and computational effort.