Graph Neural Networks (GNNs) have seen a significant surge in adoption across domains such as recommender systems and biology, proving effective on a wide range of classification and regression tasks. As GNNs become increasingly integral to these applications, methods that interpret and explain their decisions have become essential. Existing explainability methods for GNNs predominantly produce subgraph or counterfactual explanations through edge modification. In this study, we propose CF-GNNFeatures, a new method that generates counterfactual explanations for node classification tasks by applying minimal perturbations to node features via stochastic optimization. Extensive experimental evaluation on real-world datasets demonstrates the effectiveness of the proposed approach. Furthermore, we observe an intriguing phenomenon: the quality of the generated explanations correlates with how extensively the underlying predictive model has been trained, particularly for perturbation-based optimization. Specifically, the more the model overfits the data, the easier it becomes to generate a counterfactual explanation. This finding points to a trade-off between the generalizability and explainability of a model.
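
To make the idea of counterfactual search via feature perturbation concrete, the sketch below shows one common way such a procedure can be set up: a learnable perturbation on the target node's features is optimized with a gradient-based (stochastic) optimizer to flip the model's prediction while an L1 penalty keeps the change small. This is a minimal illustration under assumed interfaces, not the paper's exact CF-GNNFeatures algorithm; the `model(x, edge_index) -> logits` signature and the hyperparameters `lam`, `lr`, and `steps` are assumptions for the example.

```python
import torch

def feature_counterfactual(model, x, edge_index, node_idx, lam=0.1, lr=0.01, steps=500):
    """Illustrative minimal-perturbation counterfactual search on node features.

    Assumes a GNN node classifier with interface model(x, edge_index) -> logits
    of shape (num_nodes, num_classes); hyperparameters are hypothetical.
    """
    model.eval()
    with torch.no_grad():
        orig_class = model(x, edge_index)[node_idx].argmax().item()

    # Learnable perturbation applied only to the target node's feature vector.
    delta = torch.zeros(x.size(1), requires_grad=True)
    mask = torch.zeros(x.size(0), 1)
    mask[node_idx] = 1.0
    optimizer = torch.optim.Adam([delta], lr=lr)

    for _ in range(steps):
        logits = model(x + mask * delta, edge_index)
        if logits[node_idx].argmax().item() != orig_class:
            # Prediction flipped: return the counterfactual features and new class.
            return (x + mask * delta).detach(), logits[node_idx].argmax().item()
        # Lower the original-class logit while keeping the perturbation sparse (L1).
        loss = logits[node_idx, orig_class] + lam * delta.abs().sum()
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

    return None, orig_class  # no counterfactual found within the step budget
```

The L1 term encourages changing as few feature values as possible, which is what "minimal perturbation" typically means in this setting; other distance measures or projection steps could be substituted depending on the feature space.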