Wenchao Yue et al.

Tracheotomy is commonly performed for patients who require prolonged intubation or who present with airway obstruction or neck injuries. Accurate placement of the incision and the tracheal window is paramount to avoid complications. The current surgical technique relies heavily on palpating cartilage landmarks on the neck to place the incision. To accelerate the robot-assisted subtask in a tracheotomy, this paper proposes a novel autonomous palpation-based acquisition strategy, RASEC, for the tracheal region, which interactively determines the next best acquisition point so as to maximize the expected acquisition information while minimizing the expected cost of the palpation procedure. We employ a Gaussian Process (GP) to model the distribution of tissue hardness and use anatomical information as a priori input to guide the palpation points of the medical robot. A dynamic tactile sensor based on resonant frequency is introduced to measure tissue hardness in the tracheal region through millimeter-scale gentle contact, keeping the interaction safe. To address the drawback that existing kernel functions are not sufficiently optimal in this scenario, we investigate a kernel fusion method that blends the Squared Exponential (SE) kernel with the Ornstein-Uhlenbeck (OU) kernel. Moreover, we further regularize the exploration and greed factors, and the tactile sensor's moving distance and the robotic base link's rotation angle during incision localization are incorporated as new cost terms in the acquisition strategy. Simulation and physical phantom experiments are conducted for comparison with state-of-the-art GP-based exploration approaches. The results show that the sensor's moving distance was reduced to 53.1% and the base rotation angle to 75.2% of their previous values without sacrificing overall performance. Strong algorithmic metrics (average precision 0.932, average recall 0.973, average F1 score 0.952), together with a small central estimation error (0.423 mm) at high resolution (1 mm), demonstrate the exploration efficiency, cost awareness, and localization accuracy of the proposed RASEC for incision localization and recommendation in a real robot-assisted tracheotomy subtask.
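The abstract compresses the method, so purely as an illustrative aid, the following NumPy sketch shows the two ingredients it names: a convex fusion of SE and OU kernels inside a standard GP regression posterior, and a UCB-style acquisition score penalized by sensor travel distance and base rotation. All function names, hyperparameters (length scale `ls`, blend weight `w`, cost weights `lam_dist`/`lam_rot`), and the base-rotation proxy are assumptions made for illustration, not the paper's actual implementation or values.

```python
import numpy as np

# --- Kernels (illustrative hyperparameters, not the paper's) ---------------
def se_kernel(X1, X2, ls=5.0, var=1.0):
    # Squared Exponential kernel: favors smooth hardness fields.
    d = np.linalg.norm(X1[:, None, :] - X2[None, :, :], axis=-1)
    return var * np.exp(-0.5 * (d / ls) ** 2)

def ou_kernel(X1, X2, ls=5.0, var=1.0):
    # Ornstein-Uhlenbeck kernel: exponential decay, rougher sample paths.
    d = np.linalg.norm(X1[:, None, :] - X2[None, :, :], axis=-1)
    return var * np.exp(-d / ls)

def fused_kernel(X1, X2, w=0.5, **kw):
    # Convex blend of SE and OU; a non-negative sum of valid kernels
    # is itself a valid (positive semi-definite) kernel.
    return w * se_kernel(X1, X2, **kw) + (1.0 - w) * ou_kernel(X1, X2, **kw)

# --- Standard GP regression posterior --------------------------------------
def gp_posterior(X_train, y_train, X_test, noise=1e-3, **kw):
    # Posterior mean and std of hardness at candidate palpation points.
    K = fused_kernel(X_train, X_train, **kw) + noise * np.eye(len(X_train))
    Ks = fused_kernel(X_train, X_test, **kw)
    Kss = fused_kernel(X_test, X_test, **kw)
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y_train))
    mu = Ks.T @ alpha
    v = np.linalg.solve(L, Ks)
    var = np.clip(np.diag(Kss) - np.sum(v * v, axis=0), 0.0, None)
    return mu, np.sqrt(var)

# --- Cost-aware acquisition -------------------------------------------------
def next_point(X_cand, mu, sigma, x_now, greed=1.0, explore=1.0,
               lam_dist=0.05, lam_rot=0.05):
    # UCB-style score: greed * mean + explore * std, penalized by sensor
    # travel distance and a crude base-rotation proxy (bearing change).
    dist = np.linalg.norm(X_cand - x_now, axis=1)
    rot = np.abs(np.arctan2(X_cand[:, 1], X_cand[:, 0])
                 - np.arctan2(x_now[1], x_now[0]))
    score = greed * mu + explore * sigma - lam_dist * dist - lam_rot * rot
    return int(np.argmax(score))

# Example: three palpated points, then pick the next one on a small grid.
X_train = np.array([[0.0, 0.0], [5.0, 2.0], [10.0, 4.0]])  # positions (mm)
y_train = np.array([0.2, 0.9, 0.4])                        # hardness readings
grid = np.stack(np.meshgrid(np.arange(0.0, 12.0), np.arange(0.0, 6.0)), axis=-1)
X_cand = grid.reshape(-1, 2)
mu, sd = gp_posterior(X_train, y_train, X_cand)
idx = next_point(X_cand, mu, sd, x_now=X_train[-1])
print("next palpation point:", X_cand[idx])
```

The design point the abstract emphasizes is visible in `next_point`: the motion-cost penalties let the strategy prefer informative points that are also cheap to reach, which is how the reported reductions in travel distance and base rotation can be obtained without changing the GP model itself.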

Wenchao Yue et al.

The tracheotomy procedure is crucial in situations involving intubation, airway blockage, or neck injury, and requires precise incision placement to minimize risk and ensure effectiveness. Traditional methods rely on palpating neck landmarks, but this becomes challenging in scenarios such as teleoperation or critical care. Recent advances in Augmented Reality (AR) and robot-assisted surgery (RAS) offer promising solutions for enhancing procedural safety and accuracy. In this study, we explore the use of AR guidance to assist tracheostomy incision localization. We employ a handheld ultrasound (US) probe to acquire preoperative anatomical data from a larynx phantom and convert the ultrasound data into visual feedback, a multimodal approach that integrates US information into the visual domain. By marking the region of interest (ROI) on the laryngeal phantom model and importing it into a HoloLens 2 device, we provide visual guidance for precise incision placement within the ROI. Additionally, we conducted laser-localization comparison experiments on procedures performed with and without AR glasses. AR-guided incision localization achieved high precision, 0.932 for the cricothyrotomy scene and 0.938 for standard tracheotomy, with minimal mean central positioning errors of 0.301 mm and 0.236 mm, respectively. The results demonstrate that AR guidance enables surgeons to locate the corresponding laryngeal area touchlessly, efficiently, and accurately, facilitating progress toward robot-assisted tracheotomy.
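The abstract reports precision and mean central positioning error without defining them, so here is a minimal, hedged sketch of how such metrics are conventionally computed for a localized incision region. The mask-based formulation, function names, and the toy inputs are assumptions for illustration, not the authors' evaluation pipeline.

```python
import numpy as np

def precision_recall_f1(pred_mask, gt_mask):
    # Pixel-wise precision/recall/F1 of a localized incision region
    # against the annotated ROI; both inputs are boolean arrays.
    tp = np.logical_and(pred_mask, gt_mask).sum()
    fp = np.logical_and(pred_mask, ~gt_mask).sum()
    fn = np.logical_and(~pred_mask, gt_mask).sum()
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if (precision + recall) else 0.0)
    return precision, recall, f1

def mean_central_error(pred_points_mm, gt_center_mm):
    # Mean Euclidean distance (mm) between repeated incision
    # localizations and the ground-truth ROI center.
    pred = np.asarray(pred_points_mm, dtype=float)
    return float(np.mean(np.linalg.norm(pred - np.asarray(gt_center_mm),
                                        axis=1)))

# Toy example: a predicted ROI shifted one pixel from the ground truth,
# and two repeated localizations around a known center.
pred = np.zeros((64, 64), dtype=bool); pred[20:30, 20:30] = True
gt = np.zeros((64, 64), dtype=bool); gt[21:31, 20:30] = True
print(precision_recall_f1(pred, gt))
print(mean_central_error([[10.1, 5.2], [9.8, 4.9]], [10.0, 5.0]))
```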