This paper presents cppPosit, an open-source C++ library for representing real numbers in the Posit format, a novel representation recently introduced in the literature. The library offers a unique implementation of numeric traits and templatization that allows it to be hot-swapped into a machine learning library with very few modifications. The capabilities of the library are demonstrated by benchmarking the k-Nearest Neighbours (k-NN) algorithm, a widely used routine in machine learning. The k-NN algorithm can be implemented using different data structures; one of the most interesting and widely used is the kd-tree. In this work we first improve the kd-tree data structure, making it more accurate and robust when working with small reals, i.e., floating-point numbers represented with a small number of bits (16, 14, 12, 10, or even 8). We then compare the accuracy of the kd-tree based k-NN algorithm for different choices of Floats and Posits, showing that the Posit format achieves higher accuracy for the same total number of bits. Finally, we compare the accuracy of several Deep Neural Networks (DNNs) when using posits and standard 32-bit floating-point numbers, showing that the drop in accuracy is little to none with 16-bit or even 8-bit posits.
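
To illustrate the templatization idea mentioned above, the following is a minimal sketch (not the library's actual API; the posit16 type name in the comment is assumed for illustration) of a type-generic distance kernel of the kind used in k-NN: any numeric type overloading the usual arithmetic operators, such as a cppPosit posit type, can be substituted for float without changing the kernel.

    #include <array>
    #include <cstddef>
    #include <iostream>

    // Type-generic squared Euclidean distance: works for float, double,
    // or any posit type that overloads the usual arithmetic operators.
    template <typename Real, std::size_t Dim>
    Real squared_distance(const std::array<Real, Dim>& a,
                          const std::array<Real, Dim>& b) {
        Real acc{0};
        for (std::size_t i = 0; i < Dim; ++i) {
            const Real d = a[i] - b[i];
            acc = acc + d * d;
        }
        return acc;
    }

    int main() {
        std::array<float, 3> p{1.0f, 2.0f, 3.0f};
        std::array<float, 3> q{4.0f, 6.0f, 3.0f};
        std::cout << squared_distance(p, q) << '\n';  // prints 25
        // Swapping the numeric type, e.g. to a posit type such as
        // posit16 (name assumed), requires no change to the kernel.
    }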