Remote sensing technologies, such as LiDAR, produce billions of points that commonly exceed the storage capacity of the GPU, hindering both processing and rendering. Level-of-detail (LOD) techniques have been widely investigated, but building the LOD structures is itself time-consuming. In this study, we propose a GPU-driven culling system focused on determining which points are visible in every frame. It can manage point clouds of arbitrary size while maintaining a low memory footprint on both the CPU and the GPU. Instead of organizing the point cloud into a hierarchical data structure, it is split into meshlets sorted along the Hilbert curve. This alternative encoding alleviates the anomalous groupings that arise with Morton curves. Instead of keeping the entire point cloud in GPU memory, points are transferred on demand to ensure real-time capability. Accordingly, our solution can process huge point clouds even on commodity hardware with low memory capacity, such as edge computing systems. Moreover, hole filling is applied to close the gaps inherent in point cloud rendering. Notably, our proposal handles point clouds of 2 billion points while maintaining more than 150 frames per second (FPS) on average without any perceptible quality loss. We evaluate our approach through numerous experiments on real-world data and compare it with state-of-the-art methods.
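As an illustrative sketch only (not the authors' implementation), the following C++ code shows one common way to order meshlets along a 3D Hilbert curve: quantize each meshlet's centroid into an integer lattice, derive a Hilbert key with Skilling's transpose algorithm ("Programming the Hilbert curve", AIP Conf. Proc. 707, 2004), and sort by that key. The Meshlet struct, the centroid-based key, and the 21-bit quantization are assumptions made for the example.

```cpp
// Minimal sketch: Hilbert-curve ordering of meshlets (assumed data layout).
#include <algorithm>
#include <array>
#include <cstdint>
#include <vector>

struct Meshlet {                              // hypothetical meshlet record
    std::array<float, 3> centroid;            // representative point of the meshlet
    uint64_t hilbertKey = 0;
};

// Skilling's AxestoTranspose: converts 3D coordinates (b bits per axis)
// into the transposed Hilbert index, in place.
static void axesToTranspose(std::array<uint32_t, 3>& x, int b) {
    const int n = 3;
    uint32_t m = 1u << (b - 1), p, q, t;
    for (q = m; q > 1; q >>= 1) {             // inverse undo excess work
        p = q - 1;
        for (int i = 0; i < n; ++i) {
            if (x[i] & q) x[0] ^= p;                                  // invert
            else { t = (x[0] ^ x[i]) & p; x[0] ^= t; x[i] ^= t; }     // exchange
        }
    }
    for (int i = 1; i < n; ++i) x[i] ^= x[i - 1];                     // Gray encode
    t = 0;
    for (q = m; q > 1; q >>= 1)
        if (x[n - 1] & q) t ^= q - 1;
    for (int i = 0; i < n; ++i) x[i] ^= t;
}

// Interleave the transposed index into a single 3*b-bit key (b = 21 -> 63 bits).
static uint64_t transposeToKey(const std::array<uint32_t, 3>& x, int b) {
    uint64_t key = 0;
    for (int bit = b - 1; bit >= 0; --bit)
        for (int i = 0; i < 3; ++i)
            key = (key << 1) | ((x[i] >> bit) & 1u);
    return key;
}

// Quantize a position into the [0, 2^b) lattice of the scene bounding box,
// then compute its Hilbert key.
uint64_t hilbertKey(const std::array<float, 3>& pos,
                    const std::array<float, 3>& bboxMin,
                    const std::array<float, 3>& bboxSize, int b = 21) {
    std::array<uint32_t, 3> x;
    const uint32_t maxCoord = (1u << b) - 1;
    for (int i = 0; i < 3; ++i) {
        float t = (pos[i] - bboxMin[i]) / bboxSize[i];   // normalize to [0, 1]
        t = std::min(std::max(t, 0.0f), 1.0f);
        x[i] = static_cast<uint32_t>(t * maxCoord);
    }
    axesToTranspose(x, b);
    return transposeToKey(x, b);
}

// Sort meshlets along the Hilbert curve so spatially close meshlets end up
// contiguous in memory, which favors on-demand streaming and per-meshlet culling.
void sortMeshlets(std::vector<Meshlet>& meshlets,
                  const std::array<float, 3>& bboxMin,
                  const std::array<float, 3>& bboxSize) {
    for (auto& m : meshlets)
        m.hilbertKey = hilbertKey(m.centroid, bboxMin, bboxSize);
    std::sort(meshlets.begin(), meshlets.end(),
              [](const Meshlet& a, const Meshlet& b) { return a.hilbertKey < b.hilbertKey; });
}
```

The design rationale is that the Hilbert curve, unlike the Morton (Z-order) curve, never makes long jumps between consecutive indices, so consecutive meshlets in the sorted array remain spatially coherent and can be streamed and culled in contiguous runs.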