Alessandro Carollo et al.

Functional near-infrared spectroscopy (fNIRS) is a preferred neuroimaging technique for studies requiring high ecological validity, allowing participants greater freedom of movement. Despite its relative robustness against motion artifacts (MAs) compared to traditional neuroimaging methods, fNIRS still faces challenges in managing and correcting these artifacts. Notably, many existing MA correction algorithms lack validation on real data with ground-truth movement information. In this work, we combine computer vision, ground-truth movement data, and fNIRS signals to preliminarily characterize the association between specific head movements and MAs. Fifteen participants (age = 22.27 ± 2.62 years) took part in a whole-head fNIRS study, performing controlled head movements along three main rotational axes. Movements were categorized by axis (vertical, frontal, sagittal), speed (fast, slow), and type (half, full, repeated rotation). Experimental sessions were video recorded and analyzed frame-by-frame using the SynergyNet deep neural network to compute head orientation angles. Maximal movement amplitude and speed were extracted from the head orientation data, while spikes and baseline shifts were identified in the fNIRS signals. Results showed that head orientation and movement metrics extracted via computer vision closely aligned with participant instructions. Additionally, repeated and Up/Down movements tended to compromise fNIRS signal quality. The occipital and pre-occipital regions were particularly susceptible to MAs following Up/Down movements, while temporal regions were most affected by bendLeft/bendRight and Left/Right movements. These findings underscore the importance of cap adherence and fit in the relationship between movements and MAs. Overall, the work lays the foundation for an automated approach to developing and validating fNIRS MA correction algorithms.
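The abstract does not give the exact computation, but the two movement metrics it describes (maximal amplitude and speed of a head rotation) can be sketched from a per-frame orientation-angle trace. A minimal sketch follows; the function name, the peak-to-peak definition of amplitude, and the 30 fps frame rate are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def movement_metrics(angles_deg, fps=30.0):
    """Maximal amplitude (deg) and peak angular speed (deg/s) of a
    head-orientation trace sampled at `fps` frames per second.
    Hypothetical sketch; not the paper's actual pipeline."""
    angles = np.asarray(angles_deg, dtype=float)
    amplitude = angles.max() - angles.min()      # peak-to-peak rotation
    speed = np.abs(np.diff(angles)) * fps        # frame-to-frame angular speed
    return amplitude, speed.max()

# Toy trace: a half rotation from 0 to 30 degrees and back.
trace = [0, 5, 12, 20, 27, 30, 27, 20, 12, 5, 0]
amp, peak = movement_metrics(trace, fps=30.0)
# amp -> 30.0 deg, peak -> 240.0 deg/s
```

Angle differencing amplifies frame-to-frame jitter, so in practice the orientation trace would likely be smoothed before differentiation.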

Andrea Bizzego et al.

Functional near-infrared spectroscopy (fNIRS) is beneficial for studying brain activity in naturalistic settings due to its tolerance for movement. However, residual motion artifacts still compromise fNIRS data quality and might lead to spurious results. Although some motion artifact correction algorithms have been proposed in the literature, their development and accurate evaluation have been hindered by the lack of ground-truth information, which is time- and labor-intensive to annotate manually. This work investigates the feasibility and reliability of a deep learning computer vision (CV) approach for automated detection and annotation of head movements from video recordings. Fifteen participants performed controlled head movements across three main rotational axes (head up/down, head left/right, bend left/right), at two speeds (fast and slow), and in different ways (half, complete, repeated movement). Sessions were video recorded and head movement information was obtained using a CV approach. A one-dimensional UNet model (1D-UNet) was implemented to detect head movements from head orientation signals extracted with a pre-trained model (SynergyNet). Movements were manually annotated as a ground truth for model evaluation. The model's performance was evaluated using the Jaccard index and was comparable between train and test sets (J train = 0.954; J test = 0.865). It also demonstrated consistently good annotation performance across movement axes and speeds. However, performance varied by movement type, with best results for repeated (J test = 0.941), followed by complete (J test = 0.872), and then half movements (J test = 0.826). This study suggests that the proposed CV approach provides accurate ground-truth movement information. Future research can rely on this CV approach to evaluate and improve fNIRS motion artifact correction algorithms.
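The Jaccard index reported above can be illustrated on binary per-frame movement masks, where 1 marks a frame annotated as containing movement. This is a minimal sketch of the metric itself; the mask encoding and function name are assumptions, not the paper's evaluation code:

```python
import numpy as np

def jaccard(pred, truth):
    """Jaccard index (intersection over union) between two binary
    per-frame annotation masks. Returns 1.0 when both masks are empty."""
    pred = np.asarray(pred, dtype=bool)
    truth = np.asarray(truth, dtype=bool)
    inter = np.logical_and(pred, truth).sum()
    union = np.logical_or(pred, truth).sum()
    return inter / union if union else 1.0

# Toy 10-frame sequence: the model flags frames 2-6 as movement,
# the manual ground truth marks frames 3-7.
pred  = [0, 0, 1, 1, 1, 1, 1, 0, 0, 0]
truth = [0, 0, 0, 1, 1, 1, 1, 1, 0, 0]
# jaccard(pred, truth) -> 4/6 ≈ 0.667
```

A frame-wise overlap metric like this penalizes both missed movement frames and spurious detections, which makes it a natural fit for evaluating segmentation-style models such as the 1D-UNet described above.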