Federated Learning (FL) has established itself as a widely adopted distributed learning paradigm. Because raw data are never shared, it may appear privacy-preserving, but recent studies have revealed vulnerabilities in weight sharing that lead to information disclosure. Hence, privacy-preserving mechanisms must be incorporated during aggregation to avoid such disclosures. In the FL literature, little attention has been paid to generating generalized models that can be obtained from multiple different datasets, thereby avoiding identity disclosure. Integrally private models are models that recur from multiple different datasets. In this paper, we focus on generating integrally private global models and propose k-Anonymous Integrally Private Federated Averaging (k-IPfedAvg), a novel aggregation algorithm that clusters similar user weights to compute a global model that can be generated by multiple sets of users. Convergence analysis of k-IPfedAvg reveals a rate of O(1/T) over training epochs. Furthermore, the experimental analysis shows that k-IPfedAvg maintains a consistent level of utility across various privacy parameters, in contrast to existing noise-based privacy-preserving mechanisms. We compare k-IPfedAvg with classical fedAvg and its differentially private counterpart. Our results show that k-IPfedAvg achieves accuracy comparable to the baseline fedAvg and outperforms DP-fedAvg on iid and non-iid distributions of the MNIST, FashionMNIST and CIFAR10 datasets.
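To make the clustering-based aggregation idea concrete, the sketch below shows one way such a scheme could look: client weight vectors are clustered, only clusters containing at least k clients are retained, and the global model is averaged from the resulting centroids so that it is attributable to groups of users rather than individuals. This is a minimal illustration under our own assumptions (KMeans clustering, flattened weight vectors, the function name k_anonymous_aggregate), not the authors' exact k-IPfedAvg procedure.

```python
# Hypothetical sketch of k-anonymous, cluster-based aggregation (not the paper's
# exact k-IPfedAvg algorithm): cluster flattened client weights, keep clusters with
# at least k members, and average their centroids into a global model.
import numpy as np
from sklearn.cluster import KMeans

def k_anonymous_aggregate(client_weights, k=3, n_clusters=None):
    """client_weights: list of 1-D numpy arrays (flattened model weights), one per client."""
    W = np.stack(client_weights)                        # shape: (n_clients, n_params)
    n_clusters = n_clusters or max(1, len(client_weights) // k)
    labels = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit_predict(W)

    centroids = []
    for c in range(n_clusters):
        members = W[labels == c]
        if len(members) >= k:                           # keep only k-anonymous clusters
            centroids.append(members.mean(axis=0))      # centroid represents >= k users
    if not centroids:                                   # fallback: plain federated averaging
        return W.mean(axis=0)
    return np.stack(centroids).mean(axis=0)             # global model from cluster centroids

# Usage: simulate 10 clients, each holding a 5-parameter model
rng = np.random.default_rng(0)
clients = [rng.normal(size=5) for _ in range(10)]
global_w = k_anonymous_aggregate(clients, k=3)
print(global_w)
```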