Every now and then, we witness significant improvements in the performance of Deep Learning models. A typical cycle of improvement involves enhanced accuracy followed by reduced computing time. As algorithms get better at their job, it is worthwhile to evaluate their performance on the problems they affect. Computationally intense problems, such as object detection for Computer Aided Laparoscopy (CAL), stand to benefit from such improvements. Recently, a new set of variants of the You Only Look Once (YOLO) models, based on the Neural Architecture Search (NAS) technique, has been released. Deci, the enterprise behind this new development, touts much better performance in terms of both accuracy and computational efficiency. In this paper, we analyze the performance of YOLO-NAS on a well-known benchmark dataset related to CAL. We find that all the NAS-based YOLO variants perform worse than other State-of-the-Art (SoTA) YOLO models. We also compare our results against the YOLOv7 model.