In 5G/beyond-5G (B5G) networks, intelligent medical devices, collectively known as the Internet of Medical Things (IoMT), have been adopted in the healthcare industry to monitor the health status of remote users, including the elderly, the injured, people under stress, and patients with chronic diseases. Since IoMT devices have limited resources, mobile edge computing (MEC) has been deployed in 5G networks to enable these devices to offload their tasks to the nearest computational servers for processing. However, when IoMT devices are outside network coverage, or when the computational servers at the terrestrial MEC are overloaded or emergencies occur, these devices cannot access computing services, potentially putting patients' lives at risk. In this context, unmanned aerial vehicles (UAVs) are considered a promising aerial connectivity solution for healthcare systems. In this paper, we propose a multi-agent federated reinforcement learning (MAFRL)-based resource allocation framework for a multi-UAV-enabled healthcare system. We formulate the computation offloading and resource allocation problems as a Markov decision process game under federated learning with multiple participants. We then propose a MAFRL algorithm to solve the formulated problem, minimizing latency and energy consumption while ensuring quality of service. Finally, extensive simulation results on a real-world heartbeat dataset demonstrate that the proposed MAFRL algorithm significantly reduces cost, preserves privacy, and improves accuracy compared to baseline learning algorithms.
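To illustrate the federated multi-agent idea described above, the following is a minimal sketch of per-UAV local reinforcement-learning updates combined with server-side federated averaging, so that only model weights (never raw patient data) are exchanged. All specifics here, including the agent count, state/action dimensions, the toy TD-style update, and the reward proxy for latency/energy cost, are illustrative assumptions and not the paper's actual MAFRL algorithm.

```python
import numpy as np

# Illustrative sketch only: dimensions, update rule, and agent count below
# are assumptions for demonstration, not the paper's MAFRL algorithm.
STATE_DIM, N_ACTIONS = 8, 4      # toy state/action sizes (assumed)
N_AGENTS, LOCAL_STEPS = 3, 10    # one agent per UAV (assumed)
LR = 0.01

def local_update(weights, rng):
    """One UAV agent refines its local linear Q-approximation on locally
    observed samples; raw health data never leaves the UAV, only weights do."""
    for _ in range(LOCAL_STEPS):
        s = rng.normal(size=STATE_DIM)            # simulated local observation
        q = s @ weights                           # Q-values over offloading actions
        a = int(np.argmax(q))                     # greedy offloading decision
        reward = -np.abs(q[a] - 1.0)              # toy latency/energy cost proxy
        grad = np.zeros_like(weights)
        grad[:, a] = (q[a] - (reward + 1.0)) * s  # simplified TD-style gradient
        weights = weights - LR * grad
    return weights

def fedavg(weight_list):
    """Server-side federated averaging of the agents' model weights."""
    return np.mean(weight_list, axis=0)

rng = np.random.default_rng(0)
global_w = rng.normal(scale=0.1, size=(STATE_DIM, N_ACTIONS))

for rnd in range(5):                              # federated rounds
    local_ws = [local_update(global_w.copy(), rng) for _ in range(N_AGENTS)]
    global_w = fedavg(local_ws)                   # aggregate without raw-data exchange
    print(f"round {rnd}: mean |w| = {np.abs(global_w).mean():.4f}")
```

The privacy-preserving property claimed in the abstract stems from this pattern: each UAV agent trains on data it observes locally, and only the aggregated parameters are shared across participants.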