Graph structure learning (GSL) aims to help graph neural networks learn effective node embeddings for downstream tasks, especially in scenarios where structures are absent or edges are unreliable. Most GSL models assume that training and testing data are i.i.d. However, this assumption is violated when the test data contain out-of-distribution (OOD) samples. Consequently, these models generalize poorly and produce inferior structures. Moreover, despite their progress, existing models are parametric and thus require additional parameters to be optimized. To tackle the above problems, we propose a novel Generalizable and Non-parametric Structure learning framework named GNS, which can be easily and effectively applied to various tasks. GNS neither relies on the i.i.d. assumption nor involves any parameters to be optimized; instead, it seeks an appropriate similarity between nodes and an associated threshold to establish desirable structures. Specifically, we first incorporate the candidate neighbor distributions of nodes to refine the similarity. Then, we introduce an adaptive threshold discovery method inspired by Fisher's criterion to determine the final structures. Extensive experiments demonstrate that GNS excels not only in OOD scenarios but also on general classification and regression tasks.
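To make the pipeline concrete, the following is a minimal, hypothetical sketch of the two steps the abstract names: refining a node similarity with candidate neighbor distributions, then picking an edge threshold via a Fisher-criterion-style score. The specific similarity (cosine), the blending scheme, and the threshold search are illustrative assumptions, not the authors' exact procedure.

```python
import numpy as np

def cosine_similarity(X):
    """Pairwise cosine similarity between node feature vectors (n x d)."""
    Xn = X / (np.linalg.norm(X, axis=1, keepdims=True) + 1e-12)
    return Xn @ Xn.T

def refine_with_neighbor_distributions(S):
    """Refine similarity by also comparing each node's candidate-neighbor
    distribution (its softmax-normalized row of S): two nodes count as more
    similar when they assign similar weights to the same neighbors.
    The 0.5/0.5 blend is an illustrative choice."""
    P = np.exp(S - S.max(axis=1, keepdims=True))
    P /= P.sum(axis=1, keepdims=True)          # row-wise neighbor distribution
    D = P / (np.linalg.norm(P, axis=1, keepdims=True) + 1e-12)
    return 0.5 * (S + D @ D.T)                 # blend feature- and distribution-level similarity

def fisher_threshold(S, num_candidates=64):
    """Pick the threshold that best separates similarity values into an edge
    group and a non-edge group, maximizing a Fisher-style ratio of
    between-group separation to within-group scatter."""
    vals = S[np.triu_indices_from(S, k=1)]
    best_t, best_score = vals.mean(), -np.inf
    for t in np.linspace(vals.min(), vals.max(), num_candidates)[1:-1]:
        lo, hi = vals[vals < t], vals[vals >= t]
        if lo.size == 0 or hi.size == 0:
            continue
        score = (hi.mean() - lo.mean()) ** 2 / (hi.var() + lo.var() + 1e-12)
        if score > best_score:
            best_t, best_score = t, score
    return best_t

def learn_structure(X):
    """Non-parametric structure learning: refined similarity + adaptive threshold."""
    S = refine_with_neighbor_distributions(cosine_similarity(X))
    A = (S >= fisher_threshold(S)).astype(float)
    np.fill_diagonal(A, 0.0)                   # no self-loops
    return A

# Example: build an adjacency matrix from random 16-dimensional features of 100 nodes.
A = learn_structure(np.random.randn(100, 16))
```

Note that nothing here is trained: the structure follows entirely from the similarity statistics, which is what makes the approach free of optimized parameters.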