Federated learning (FL) allows a large number of users to collaboratively train machine learning (ML) models by sending only their local gradients to a central server for aggregation in each training iteration, without ever sharing their raw training data. The main security issues of FL, namely the privacy of the gradient vectors and the correctness of the aggregated gradient, are receiving increasing attention from both industry and academia. To protect gradient privacy, secure aggregation protocols have been proposed; to verify the correctness of the aggregated gradient, verifiable secure aggregation protocols, which require the server to return the aggregated gradient together with a proof of its correctness, have been proposed. In 2021, Hahn et al. proposed VERSA, a verifiable secure aggregation protocol (DOI:10.1109/TDSC.2021.3126323). In this paper, we point out a flaw in VERSA which shows that the protocol does not work as claimed. To address the flaw, we present several approaches with different trade-offs. We hope that identifying this flaw will help future designs of verifiable secure aggregation avoid similar errors.
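The pairwise-masking idea underlying secure aggregation can be sketched as follows. This is an illustrative toy, not VERSA's actual construction: the modulus, the seed-exchange step, and the per-pair mask derivation are simplified assumptions, and the real protocols additionally handle dropouts and key agreement.

```python
import random

P = 2**31 - 1  # public modulus (illustrative choice)
DIM = 3        # toy gradient dimension

# Three users with toy "gradient" vectors (assumed inputs).
gradients = {1: [3, 1, 4], 2: [1, 5, 9], 3: [2, 6, 5]}

def expand_mask(seed, dim):
    """Deterministically expand a shared seed into a mask vector."""
    rng = random.Random(seed)
    return [rng.randrange(P) for _ in range(dim)]

# Each unordered user pair shares a random seed (in practice obtained
# via key agreement; here we just sample it).
seeds = {(u, v): random.randrange(P)
         for u in gradients for v in gradients if u < v}

def masked_update(u):
    """User u adds the mask for pairs where it has the lower id and
    subtracts it otherwise, so all masks cancel in the server's sum."""
    x = list(gradients[u])
    for (a, b), s in seeds.items():
        m = expand_mask(s, DIM)
        if u == a:
            x = [(xi + mi) % P for xi, mi in zip(x, m)]
        elif u == b:
            x = [(xi - mi) % P for xi, mi in zip(x, m)]
    return x

# The server sums the masked updates; the pairwise masks cancel mod P,
# leaving only the sum of the plain gradients.
agg = [0] * DIM
for u in gradients:
    agg = [(ai + yi) % P for ai, yi in zip(agg, masked_update(u))]

print(agg)  # equals the plain sum [6, 12, 18]
```

The server learns only the aggregate, never an individual gradient; verifiable variants such as VERSA additionally require the server to prove that this sum was computed correctly.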