In this study, a robust method for 3D pose estimation of immature green apples (fruitlets) in commercial orchards was developed, combining the YOLO11 object pose detection model with Vision Transformers (ViTs) for depth estimation. For object detection and pose estimation, YOLO11 (YOLO11n, YOLO11s, YOLO11m, YOLO11l, and YOLO11x) and YOLOv8 (YOLOv8n, YOLOv8s, YOLOv8m, YOLOv8l, and YOLOv8x) were compared under identical hyperparameter settings across all configurations. Likewise, for RGB to RGB-D mapping, the Dense Prediction Transformer (DPT) and Depth Anything V2 were investigated. YOLO11n surpassed all configurations of YOLO11 and YOLOv8 in box precision and pose precision, achieving scores of 0.91 and 0.915, respectively, while YOLOv8n exhibited the highest box and pose recall scores of 0.905 and 0.925, respectively. Regarding mean average precision at 50% intersection over union (mAP@50), YOLO11s led all configurations with a box mAP@50 of 0.94, while YOLOv8n achieved the highest pose mAP@50 of 0.96. In image processing speed, YOLO11n outperformed all configurations with an inference time of 2.7 ms, substantially faster than the quickest YOLOv8 configuration, YOLOv8n, at 7.8 ms, demonstrating a marked improvement in inference speed over the previous iteration. Subsequent integration of ViTs for depth estimation of the green fruit's pose revealed that Depth Anything V2 outperformed the DPT in 3D pose length validation, achieving the lowest Root Mean Square Error (RMSE) of 1.52 and Mean Absolute Error (MAE) of 1.28, demonstrating high precision in estimating immature green fruit lengths. The DPT followed with an RMSE of 3.29 and an MAE of 2.62.
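The 3D pose length estimation described above can be sketched as follows: given two detected pose keypoints of a fruitlet and a per-pixel depth map predicted by a monocular model such as Depth Anything V2, each keypoint is back-projected through the pinhole camera model, and the fruit length is the Euclidean distance between the two 3D points. This is a minimal illustration, not the authors' implementation; the camera intrinsics (fx, fy, cx, cy), keypoint coordinates, and constant depth map are hypothetical values chosen for the sketch.

```python
import numpy as np

def backproject(u, v, depth, fx, fy, cx, cy):
    """Back-project pixel (u, v) with metric depth into camera coordinates."""
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return np.array([x, y, depth])

def fruit_length(kp_a, kp_b, depth_map, fx, fy, cx, cy):
    """Euclidean distance between two back-projected pose keypoints."""
    u1, v1 = kp_a
    u2, v2 = kp_b
    p1 = backproject(u1, v1, depth_map[v1, u1], fx, fy, cx, cy)
    p2 = backproject(u2, v2, depth_map[v2, u2], fx, fy, cx, cy)
    return float(np.linalg.norm(p1 - p2))

# Hypothetical intrinsics, keypoints, and depth for illustration only.
fx = fy = 600.0
cx, cy = 320.0, 240.0
depth_map = np.full((480, 640), 0.5)  # constant 0.5 m depth for the sketch
length = fruit_length((320, 200), (320, 280), depth_map, fx, fy, cx, cy)
print(f"estimated fruit length: {length * 1000:.1f} mm")
```

In practice the keypoints would come from the pose model's output and the depth map from the ViT's prediction (scaled to metric units) rather than the constants used here.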
In contrast, measurements derived from Intel RealSense point clouds exhibited the largest discrepancies from the ground truth, with an RMSE of 9.98 and an MAE of 7.74. These findings underscore the effectiveness of YOLO11 in detecting and estimating the pose of immature green fruits and illustrate how Vision Transformers such as Depth Anything V2 can convert RGB images into RGB-D data, enhancing the precision and computational efficiency of 3D pose estimation for future robotic thinning applications in commercial orchards.
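The validation metrics reported above follow the standard RMSE and MAE definitions over predicted versus ground-truth fruit lengths. A minimal sketch (the arrays below are illustrative values, not the study's data):

```python
import numpy as np

def rmse(pred, truth):
    """Root Mean Square Error between predicted and ground-truth lengths."""
    pred, truth = np.asarray(pred, float), np.asarray(truth, float)
    return float(np.sqrt(np.mean((pred - truth) ** 2)))

def mae(pred, truth):
    """Mean Absolute Error between predicted and ground-truth lengths."""
    pred, truth = np.asarray(pred, float), np.asarray(truth, float)
    return float(np.mean(np.abs(pred - truth)))

# Illustrative fruitlet lengths (same units as the ground truth); not study data.
ground_truth = [18.0, 21.5, 19.2, 24.0]
predicted    = [19.1, 20.8, 20.0, 22.6]
print(f"RMSE = {rmse(predicted, ground_truth):.2f}")
print(f"MAE  = {mae(predicted, ground_truth):.2f}")
```

Because RMSE squares the residuals, it penalizes large outliers more heavily than MAE, which is why the two metrics are reported together when comparing the depth models.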