Communication overhead has become a major constraint on the application of federated learning (FL). To reduce this overhead by trading off the number of communication rounds against per-round latency, significant research effort has been devoted to jointly optimizing model compression, client scheduling, and resource allocation so as to minimize the total training time. To keep the joint optimization tractable, however, existing methods assume a fixed compression level, a fixed set of participating clients, and an identical duration for every round, which results in inefficient resource usage. In this paper, we propose a flexible model compression and resource allocation scheme that minimizes the total communication time of FL in mobile networks. The proposed scheme adaptively assigns a compression level and communication resources to each client in each round. Simulation results show that the proposed scheme outperforms state-of-the-art methods and is robust to outdated channel information.
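To make the structure of the problem concrete, a minimal formulation is sketched below; the symbols $q_{k,t}$ (compression level of client $k$ in round $t$), $b_{k,t}$ (bandwidth allocated to client $k$ in round $t$), $D(\cdot)$ (compressed update size), $r_k(\cdot)$ (achievable rate of client $k$), $\mathcal{K}_t$ (clients scheduled in round $t$), $T$ (number of rounds), and $B$ (total bandwidth budget) are assumptions introduced here for illustration, not the paper's exact notation:
\[
\min_{\{q_{k,t},\, b_{k,t}\}} \; \sum_{t=1}^{T} \max_{k \in \mathcal{K}_t} \frac{D(q_{k,t})}{r_k(b_{k,t})}
\quad \text{s.t.} \quad \sum_{k \in \mathcal{K}_t} b_{k,t} \le B \quad \forall t.
\]
Under this sketch, each round's latency is governed by the slowest scheduled client, so adapting $q_{k,t}$ and $b_{k,t}$ per client and per round, rather than fixing them for the entire training run, directly shortens the sum of per-round bottlenecks.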