Federated Learning (FL) offers a promising framework for collaboratively training machine learning models on decentralized devices while preserving data privacy. However, the constraints of limited computational resources and communication bandwidth on client devices demand efficient model representations. In this study, we explore how integrating quantization methods such as Post-Training Quantization (PTQ), Quantization-Aware Training (QAT), and Per-Layer Quantization into the FL pipeline can address these challenges. PTQ reduces model size and accelerates inference, while QAT improves a model's robustness to quantization error. Per-Layer Quantization provides a flexible means of balancing accuracy against efficiency by allowing the precision of each layer to be chosen independently. Through extensive analysis, we evaluate the trade-off between model accuracy and computational efficiency for these quantization approaches across different federated learning configurations. We further examine the security and privacy implications of quantization in federated learning, identifying potential vulnerabilities and methods to mitigate them. By carefully managing these trade-offs and addressing the associated security concerns, we show that quantization can improve the efficiency and practicality of federated learning systems, enabling more robust and efficient decentralized machine learning.
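
To make the per-layer idea concrete, the sketch below shows one plausible way a client could quantize its model update layer by layer before uploading it, with the server dequantizing before aggregation. The layer names, per-layer bit-widths, and the symmetric integer scheme are illustrative assumptions for this sketch, not the exact method evaluated in this work.

```python
import numpy as np

def quantize_layer(weights: np.ndarray, num_bits: int = 8):
    """Symmetric per-layer quantization of a float weight tensor."""
    qmax = 2 ** (num_bits - 1) - 1                 # e.g. 127 for 8 bits
    scale = float(np.max(np.abs(weights))) / qmax  # one scale per layer
    if scale == 0.0:                               # all-zero layer: avoid division by zero
        scale = 1.0
    q = np.clip(np.round(weights / scale), -qmax, qmax).astype(np.int8)
    return q, scale

def dequantize_layer(q: np.ndarray, scale: float) -> np.ndarray:
    """Recover an approximate float tensor on the server before aggregation."""
    return q.astype(np.float32) * scale

# Hypothetical client update: a dict of per-layer float32 weight deltas.
update = {
    "conv1.weight": np.random.randn(16, 3, 3, 3).astype(np.float32),
    "fc.weight":    np.random.randn(10, 256).astype(np.float32),
}

# Client side: quantize each layer independently, possibly at different precisions.
bit_widths = {"conv1.weight": 8, "fc.weight": 4}   # assumed per-layer bit-width choice
compressed = {name: quantize_layer(w, bit_widths[name]) for name, w in update.items()}

# Server side: dequantize before (weighted) averaging across clients.
restored = {name: dequantize_layer(q, s) for name, (q, s) in compressed.items()}
for name in update:
    err = np.mean(np.abs(update[name] - restored[name]))
    print(f"{name}: mean abs quantization error = {err:.4f}")
```

In this sketch, lowering the bit-width of a layer shrinks the uploaded payload but increases its reconstruction error, which is the accuracy-versus-efficiency trade-off the study evaluates.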