Object recognition is a machine learning task that involves the classification and localization of objects in an image. It has found wide application in Industry 4.0, surveillance, and autonomous driving, and prominent datasets such as Pascal VOC, ImageNet, and MS COCO have further spurred its advances. However, the images in these datasets were captured with ordinary perspective lenses. Unlike ordinary cameras, fisheye or hemispherical cameras have a wide field of view (FoV) reaching 180 degrees. This property allows them to acquire wide panoramic images and thus capture more objects in a scene than ordinary perspective lenses. In this work, we provide the first benchmark testing dataset of real fisheye images, obtained with hemispherical cameras, containing commonly occurring objects in the human environment. The dataset comprises 1,000 images covering 39 classes with a total of 14,218 object instances. We trained the YOLOv7 model on the original MS COCO and transformed FisheyeCOCO datasets and tested the resulting models on our dataset. The results indicate that training on the combination of the original and transformed datasets improves performance by 2.5% compared to using either dataset alone.