Food object recognition plays a crucial role in several healthcare applications. However, it faces significant challenges that have hindered its progress relative to other areas of food computing, including the occlusion of ingredients and long-tailed, fine-grained data distributions. In this work, we propose F2-Net, a Feature Finding Network, designed specifically to tackle these food recognition challenges by improving the feature learning capabilities of food recognition models. First, we tailor the vanilla R-CNN architecture for food recognition, incorporating state-of-the-art components that address long-tailed classification, localization, and segmentation, and that improve the representation ability of the base network. Second, we design an efficient multi-task framework for fine-grained food recognition, which exploits the lexical similarity of dish names during training to improve the discriminative ability of the network. Finally, we include a Graph Confidence Propagation (GCP) module based on graph neural networks, which aggregates the information of overlapping detections and refines the network's final predictions before non-maximum suppression is applied. Extensive analysis and ablations of the components of F2-Net show that it successfully addresses the targeted problems and yields noticeable performance gains. Remarkably, the proposed method achieves competitive results, outperforming current state-of-the-art methods on three public food benchmarks: UECFood-256, the AiCrowd Food Challenge 2022, and UECFood-100 segmented.
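To give intuition for the GCP idea, the following is a minimal illustrative sketch (not the paper's implementation): overlapping detections are linked in a graph via an IoU threshold, and each detection's confidence is blended with the mean confidence of its neighbours in one message-passing step before non-maximum suppression. The function names, the IoU threshold, and the blending weight `alpha` are hypothetical choices for illustration only.

```python
# Illustrative sketch of confidence propagation over an IoU graph.
# NOT the paper's GCP module: thresholds, weights, and the single
# averaging step are assumptions made for this toy example.

def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union > 0 else 0.0

def propagate_confidence(boxes, scores, iou_thr=0.5, alpha=0.5):
    """One message-passing step: blend each detection's score with the
    mean score of its overlapping neighbours (hypothetical weighting)."""
    n = len(boxes)
    refined = []
    for i in range(n):
        neigh = [scores[j] for j in range(n)
                 if j != i and iou(boxes[i], boxes[j]) >= iou_thr]
        if neigh:
            refined.append(alpha * scores[i]
                           + (1 - alpha) * sum(neigh) / len(neigh))
        else:
            # Isolated detections keep their original confidence.
            refined.append(scores[i])
    return refined
```

In this toy setting, two strongly overlapping detections pull each other's scores together, while a detection with no overlaps is left unchanged; a real graph-neural-network module would instead learn the aggregation weights.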