Manisha Chawla

Distributed training of deep learning models on resource-constrained devices has gained significant interest. Federated Learning (FL) and Split Learning (SL) have become the most popular approaches for this task. Training data-driven deep learning models with FL/SL involves collaboration among several clients while preserving user privacy. We aim to optimize these techniques by reducing on-device computation during parallel model training and by reducing the high communication costs caused by model exchanges or frequent data and gradient exchanges. This paper proposes Efficient Split Learning (ESL), a novel approach that addresses these challenges through three key ideas: (1) a key-value store for caching and sharing intermediate activations across clients, significantly reducing redundant computation and communication during the training phase; (2) customization of state-of-the-art neural networks for the split learning setting; and (3) personalized training that allows clients to learn individual models tailored to their specific data distributions. Unlike previous methods, ESL prioritizes performance optimization while minimizing communication and computation overhead. Extensive experiments on real-world federated benchmarks for image classification and 3D segmentation demonstrate significant improvements over baseline FL techniques: ESL reduces computation on resource-constrained devices by 1623x for image classification and 23.9x for 3D segmentation. It also reduces communication traffic between clients and the server during training by 3.92x for image classification and 1.3x for 3D segmentation, while improving accuracy by 35% and 31%, respectively. Furthermore, compared to baseline SL approaches, ESL reduces communication traffic during training by 60x and improves accuracy by an average of 34.8%.
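
To make the first idea concrete, the sketch below shows one way a key-value activation cache could work on a split learning client. It is a minimal, hypothetical example, not the paper's implementation: names such as ActivationCache and client_forward are assumptions, and it assumes the client-side layers are frozen so that a sample's split-layer activation can be computed once, cached, and reused in later epochs instead of being recomputed and re-uploaded.

```python
# Illustrative sketch (not the authors' code): a client-side key-value cache
# for intermediate activations in split learning. With frozen client-side
# layers, each sample's activation is computed and stored once; later epochs
# reuse it, saving client computation and uplink traffic.

import hashlib
import torch
import torch.nn as nn


class ActivationCache:
    """In-memory key-value store mapping a sample key to its split-layer activation."""

    def __init__(self):
        self._store = {}

    @staticmethod
    def key_for(sample: torch.Tensor) -> str:
        # Hash the raw sample bytes so identical inputs map to the same key.
        return hashlib.sha256(sample.cpu().numpy().tobytes()).hexdigest()

    def get(self, key: str):
        return self._store.get(key)

    def put(self, key: str, activation: torch.Tensor):
        self._store[key] = activation.detach().cpu()


def client_forward(client_model: nn.Module, sample: torch.Tensor,
                   cache: ActivationCache) -> torch.Tensor:
    """Compute the split-layer activation, reusing the cache when possible."""
    key = ActivationCache.key_for(sample)
    cached = cache.get(key)
    if cached is not None:
        return cached  # skip both the forward pass and the re-upload
    with torch.no_grad():  # assumes the client-side layers are frozen
        activation = client_model(sample.unsqueeze(0))
    cache.put(key, activation)
    return activation


if __name__ == "__main__":
    torch.manual_seed(0)
    client_model = nn.Sequential(nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
                                 nn.AdaptiveAvgPool2d(4), nn.Flatten())
    cache = ActivationCache()
    x = torch.randn(3, 32, 32)
    a1 = client_forward(client_model, x, cache)  # computed and cached
    a2 = client_forward(client_model, x, cache)  # served from the cache
    print(torch.allclose(a1, a2))                # True
```

In practice the cache could equally live on the server or in a shared store so that clients with overlapping data avoid redundant uploads; that placement choice is outside this sketch.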