Federated learning is a distributed framework for machine learning. Traditional approaches often assume that client data are independent and identically distributed (IID); in real-world scenarios, however, client data exhibit personalized characteristics that violate this assumption. Practical deployment is further hindered by substantial communication overhead and limited resources at edge nodes. To address these challenges of uneven data distribution, communication bottlenecks, and constrained edge resources, this paper introduces a personalized federated learning framework based on model pruning. The framework adapts each client's local model to the personalized distribution of its local data while still satisfying the server's model aggregation requirements. Using sparse operations, it performs personalized model pruning, efficiently compresses model parameters, and reduces the computational load on edge nodes. Experiments show that our approach achieves a compression ratio of 3.8% on the non-IID FEMNIST dataset without compromising final training accuracy, yielding a 12.3% acceleration in training speed.
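As a minimal sketch of the kind of sparse operation the abstract describes, the snippet below applies unstructured magnitude pruning to one layer of a client's local model, producing a binary mask that compresses the parameters sent for aggregation. The function name `magnitude_prune`, the `keep_ratio` parameter, and the choice of magnitude-based pruning are illustrative assumptions, not the paper's exact method.

```python
import numpy as np

def magnitude_prune(weights: np.ndarray, keep_ratio: float):
    """Keep only the largest-magnitude weights; zero out the rest.

    Returns the pruned (sparse) weights and the binary mask, which a
    client could reuse to keep local updates sparse during training.
    (Illustrative sketch; the paper's actual sparse operations may differ.)
    """
    flat = np.abs(weights).ravel()
    k = max(1, int(keep_ratio * flat.size))
    # Threshold = magnitude of the k-th largest weight.
    threshold = np.partition(flat, -k)[-k]
    mask = (np.abs(weights) >= threshold).astype(weights.dtype)
    return weights * mask, mask

# Toy usage: prune a client layer to ~3.8% density, reusing the
# compression ratio reported above purely for illustration.
rng = np.random.default_rng(0)
layer = rng.normal(size=(256, 128))
pruned, mask = magnitude_prune(layer, keep_ratio=0.038)
print(f"density: {mask.mean():.3%}")  # fraction of weights retained
```

Because the mask is binary, only the surviving weight values and their indices need to be communicated, which is where the compression and edge-side computation savings come from.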