The tracheotomy procedure is crucial in situations involving intubation, airway blockage, or neck injury, and it requires precise incision placement to minimize risk and ensure effectiveness. Traditional methods rely on palpating neck landmarks, but this becomes challenging in scenarios such as teleoperation or critical care. Recent advances in augmented reality (AR) and robotic-assisted surgery (RAS) offer promising ways to enhance procedural safety and accuracy. In this study, we explore the use of AR guidance to assist tracheotomy incision localization. We employ a handheld ultrasound (US) probe to acquire preoperative anatomical data from a larynx phantom and convert the ultrasound data into visual feedback, a multimodal approach that integrates US information into the visual domain. By marking the region of interest (ROI) on the laryngeal phantom model and importing it into a HoloLens 2 device, we provide visual guidance for precise incision placement within the ROI. We also conducted laser localization comparison experiments, comparing procedures performed with and without the AR glasses. AR-guided incision localization achieved a precision of 0.932 in the cricothyrotomy scene and 0.938 in the standard tracheotomy scene, with minimal mean central positioning errors of 0.301 mm and 0.236 mm, respectively. These results demonstrate that AR guidance enables surgeons to locate the corresponding laryngeal area touchlessly, efficiently, and accurately, thereby facilitating progress toward robotic-assisted tracheotomy.
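The evaluation reported above can be illustrated with a minimal sketch. This is not the authors' code: it assumes "precision" means the fraction of laser-localization attempts landing inside a circular ROI, and "central positioning error" means the Euclidean distance from each laser spot to the ROI center; the function name and coordinate format are hypothetical.

```python
import math

def evaluate_localization(points, roi_center, roi_radius):
    """Score a set of 2D laser-spot coordinates (in mm) against a circular ROI.

    Hypothetical metrics, assuming:
      - precision = fraction of spots within roi_radius of roi_center
      - mean central positioning error = mean Euclidean distance to roi_center
    """
    # Distance of each laser spot from the ROI center
    errors = [math.dist(p, roi_center) for p in points]
    # A spot "hits" the ROI if it lies within the ROI radius
    hits = sum(1 for e in errors if e <= roi_radius)
    precision = hits / len(points)
    mean_error = sum(errors) / len(errors)
    return precision, mean_error

# Illustrative usage with made-up coordinates (mm):
p, e = evaluate_localization(
    [(0.1, 0.0), (0.0, 0.2), (0.5, 0.0), (2.0, 0.0)],
    roi_center=(0.0, 0.0),
    roi_radius=1.0,
)
```

Under these assumptions, a precision near 1.0 with a sub-millimeter mean error would correspond to the kind of performance reported for the AR-guided condition.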