Prashant Kumar Rai et al.

Traditional radar perception pipelines often rely on point clouds derived from radar heatmaps using CFAR filtering, which can discard valuable information, especially the weaker signals that are crucial for accurate perception. To address this, we present a novel approach for representation learning directly from pre-CFAR heatmaps, targeting place recognition with a high-resolution MIMO radar sensor. By avoiding CFAR filtering, our method preserves richer contextual data, capturing finer details essential for identifying and matching distinctive features across locations. Pre-CFAR heatmaps, however, contain inherent noise and clutter, which complicates their use in radar perception tasks. To overcome this, we propose a self-supervised network that learns robust latent features from noisy heatmaps. The architecture consists of two identical U-Net encoders that extract features from a pair of radar scans, which are then processed by a transformer encoder to estimate ego-motion. Ground-truth ego-motion trajectories guide network training through a weighted mean-square error loss. The latent feature representations from the trained encoders are used to build a database of feature vectors for previously visited locations. At runtime, for place recognition and loop-closure detection, cosine similarity is computed between the query scan's feature representation and the database entries to find the closest matches. We also introduce data augmentation techniques to cope with limited training data, improving the model's generalization capability. Our approach, evaluated on the publicly available ColoRadar dataset and on our own dataset, outperforms existing methods, showing significant improvements in place recognition accuracy, particularly in noisy and cluttered environments.
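To make the training setup concrete, the sketch below shows one plausible PyTorch reading of the described architecture: a U-Net-style contracting encoder applied to both heatmaps of a scan pair, a transformer encoder fusing the two latent vectors, a small head regressing relative ego-motion, and a weighted mean-square error loss. All specifics here are assumptions, not the paper's implementation: the class names (`UNetEncoder`, `EgoMotionNet`), the single-channel input, the 256-dimensional latent, the planar (dx, dy, dyaw) pose parameterization, the weight sharing between the two "identical" encoders, and the transformer depth are illustrative choices only.

```python
import torch
import torch.nn as nn


class ConvBlock(nn.Module):
    """Two 3x3 convolutions with ReLU, as in a typical U-Net stage."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
        )

    def forward(self, x):
        return self.net(x)


class UNetEncoder(nn.Module):
    """Contracting path of a U-Net; returns a flattened latent feature vector."""
    def __init__(self, in_ch=1, base=16, latent_dim=256):
        super().__init__()
        self.stages = nn.ModuleList([
            ConvBlock(in_ch, base),
            ConvBlock(base, base * 2),
            ConvBlock(base * 2, base * 4),
        ])
        self.pool = nn.MaxPool2d(2)
        self.head = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                                  nn.Linear(base * 4, latent_dim))

    def forward(self, x):
        for stage in self.stages:
            x = self.pool(stage(x))
        return self.head(x)  # (B, latent_dim)


class EgoMotionNet(nn.Module):
    """Encoder applied to both scans + transformer encoder regressing relative pose.

    Weight sharing between the two encoders is an assumption; the abstract only
    says the encoders are identical.
    """
    def __init__(self, latent_dim=256, pose_dim=3):
        super().__init__()
        self.encoder = UNetEncoder(latent_dim=latent_dim)
        layer = nn.TransformerEncoderLayer(d_model=latent_dim, nhead=8, batch_first=True)
        self.transformer = nn.TransformerEncoder(layer, num_layers=2)
        self.regressor = nn.Linear(latent_dim, pose_dim)  # e.g. (dx, dy, dyaw)

    def forward(self, scan_a, scan_b):
        f_a = self.encoder(scan_a)
        f_b = self.encoder(scan_b)
        tokens = torch.stack([f_a, f_b], dim=1)   # (B, 2, latent_dim)
        fused = self.transformer(tokens).mean(dim=1)
        return self.regressor(fused), f_a, f_b


def weighted_mse(pred, target, weights):
    """Weighted mean-square error over the pose components (weights are assumed per-component)."""
    return (weights * (pred - target) ** 2).mean()
```

In this reading, the ego-motion regression is only a pretext task: the quantities kept for place recognition are the per-scan latent vectors `f_a` and `f_b`, while the pose head and loss weights exist solely to shape that latent space during training.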
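The runtime retrieval step can likewise be sketched as cosine-similarity search over a database of encoder outputs. The helper names (`build_database`, `query_place`), the L2 normalization, and the top-k candidate selection are assumptions for illustration; the abstract does not specify how matches are accepted as loop closures.

```python
import torch
import torch.nn.functional as F


def build_database(encoder, scans):
    """Encode previously visited scans into L2-normalised feature vectors."""
    encoder.eval()
    with torch.no_grad():
        feats = torch.cat([encoder(s.unsqueeze(0)) for s in scans], dim=0)
    return F.normalize(feats, dim=1)  # (N, latent_dim)


def query_place(encoder, query_scan, database, top_k=5):
    """Return indices and cosine similarities of the closest database entries."""
    encoder.eval()
    with torch.no_grad():
        q = F.normalize(encoder(query_scan.unsqueeze(0)), dim=1)  # (1, latent_dim)
    sims = (database @ q.t()).squeeze(1)  # dot product of unit vectors = cosine similarity
    scores, idx = torch.topk(sims, k=min(top_k, sims.numel()))
    return idx, scores
```

A downstream SLAM system would typically threshold the returned similarity scores before declaring a loop closure; that threshold and any geometric verification are outside what the abstract describes.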