Functional near-infrared spectroscopy (fNIRS) is a preferred neuroimaging technique for studies requiring high ecological validity, as it allows participants greater freedom of movement. Despite its relative robustness against motion artifacts (MAs) compared to traditional neuroimaging methods, fNIRS still faces challenges in managing and correcting these artifacts. Notably, many existing MA correction algorithms lack validation on real data with ground-truth movement information. In this work, we combine computer vision, ground-truth movement data, and fNIRS signals to preliminarily characterize the association between specific head movements and MAs. Fifteen participants (age = 22.27 ± 2.62 years) took part in a whole-head fNIRS study, performing controlled head movements along three main rotational axes. Movements were categorized by axis (vertical, frontal, sagittal), speed (fast, slow), and type (half, full, repeated rotation). Experimental sessions were video recorded and analyzed frame-by-frame using the SynergyNet deep neural network to compute head orientation angles. Maximal movement amplitude and speed were extracted from the head orientation data, while spikes and baseline shifts were identified in the fNIRS signals. Results showed that head orientation and movement metrics extracted via computer vision closely aligned with participant instructions. Additionally, repeated and Up/Down movements tended to compromise fNIRS signal quality. The occipital and pre-occipital regions were particularly susceptible to MAs following Up/Down movements, while temporal regions were most affected by bendLeft/bendRight and Left/Right movements. These findings underscore the importance of cap adherence and fit in the relationship between movements and MAs. Overall, this work lays the foundation for an automated approach to developing and validating fNIRS MA correction algorithms.
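As a minimal sketch of the movement-metric step described above, the snippet below shows one plausible way to derive maximal movement amplitude and peak angular speed from a frame-wise head-orientation time series for a single rotation axis. The function name, inputs, and the illustrative synthetic movement are assumptions for demonstration, not the authors' actual pipeline.

```python
import numpy as np

def movement_metrics(angles_deg, fps):
    """Hypothetical helper: summarize one rotation axis of a
    frame-wise head-orientation time series.

    angles_deg : 1-D array of orientation angles (degrees), one per video frame
    fps        : video frame rate (frames per second)
    Returns (max_amplitude_deg, max_speed_deg_per_s).
    """
    angles = np.asarray(angles_deg, dtype=float)
    # Maximal movement amplitude: peak-to-peak excursion of the angle.
    max_amplitude = angles.max() - angles.min()
    # Angular speed: frame-to-frame difference scaled by the frame rate.
    speed = np.abs(np.diff(angles)) * fps  # degrees per second
    max_speed = speed.max() if speed.size else 0.0
    return max_amplitude, max_speed

# Example: a smooth 2-second half rotation to 30 degrees and back at 30 fps.
t = np.linspace(0, 2, 61)
yaw = 30 * np.sin(np.pi * t / 2) ** 2  # 0 -> 30 -> 0 degrees
amp, spd = movement_metrics(yaw, fps=30)
```

For the synthetic sweep above, the recovered amplitude is 30 degrees and the peak speed approximates the analytic maximum of about 47 deg/s, illustrating how such metrics could then be compared against the instructed movement categories (axis, speed, type).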