Federated Learning (FL) is a distributed machine learning (ML) approach that allows multiple devices to collaboratively train a model while keeping training data private on edge devices. However, existing FL libraries are incapable of: (i) supporting a wide range of existing FL algorithms, (ii) handling diverse public as well as custom datasets, (iii) discarding unreliable updates from malicious edge devices, and (iv) facilitating edge-device training. With the rapid progress of AI, training models on edge devices demands increasing computational power, which results in higher carbon emission (CE). However, existing FL frameworks provide no module for estimating CE. In this paper, we present \textbf{FedERA}, a modular and fully customizable open-source FL framework that addresses these issues, notably by offering extensive support for heterogeneous edge devices and by incorporating both standalone and distributed training approaches. The integration of new software modules unforeseen in existing FL frameworks not only broadens its usability but also fosters environment-friendly FL. We believe that FedERA will be a valuable tool for researchers and practitioners working in FL. The source code of this library, along with documentation and tutorials, is available at: https://github.com/anupamkliv/FedERA.