In the field of unsupervised feature selection, sparse principal component analysis (SPCA) methods have attracted increasing attention recently. Compared to spectral-based methods, SPCA methods do not rely on the construction of a similarity matrix and show better feature selection ability on real-world data. Existing convex SPCA methods reformulate SPCA as a convex model by treating the reconstruction matrix as an optimization variable. However, they lack constraints equivalent to the orthogonality restriction in SPCA, which enlarges the solution space. In this paper, a standard convex SPCA-based model for unsupervised feature selection is proposed and proven to have its optimal solution lying on the positive semidefinite (PSD) cone. Further, a two-step fast optimization algorithm via PSD projection is presented to solve the proposed model (SPCA-PSD), whose computational complexity is linear in the number of samples. The optimal solution spaces of two other existing convex SPCA-based models are also proven to be the PSD cone. Therefore, PSD versions of these two models are proposed to accelerate their convergence. Experiments on synthetic and real-world datasets demonstrate the effectiveness and efficiency of the proposed methods.
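The PSD projection used in the two-step algorithm can be sketched as follows. This is a minimal illustration of the standard Frobenius-norm projection onto the PSD cone (symmetrize, then clip negative eigenvalues); the helper name `project_to_psd` is our own, and the abstract does not specify the paper's exact implementation.

```python
import numpy as np

def project_to_psd(A):
    """Project a square matrix onto the PSD cone (nearest PSD matrix
    in Frobenius norm): symmetrize, eigendecompose, clip negative
    eigenvalues to zero, and reconstruct."""
    S = (A + A.T) / 2.0                 # work with the symmetric part
    w, V = np.linalg.eigh(S)            # eigendecomposition of a symmetric matrix
    w_clipped = np.clip(w, 0.0, None)   # drop negative eigenvalues
    return (V * w_clipped) @ V.T

# Example: an indefinite matrix with eigenvalues 2 and -1.
A = np.array([[2.0, 0.0],
              [0.0, -1.0]])
P = project_to_psd(A)  # the -1 eigenvalue is clipped to 0
```

Since `eigh` on an n-by-n matrix costs O(n^3), keeping the projected matrix small (e.g., feature-by-feature rather than sample-by-sample) is what makes a per-iteration cost linear in the number of samples plausible.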