Real-world 3D structured data such as point clouds and skeletons can often be represented as data on the 3D rotation group (denoted as $\mathbb{SO}(3)$). However, most existing neural networks are tailored for data in Euclidean space; as a result, 3D rotation data are not closed under their algebraic operations, which leads to sub-optimal performance in 3D-related learning tasks. To resolve the issues caused by this mismatch between data and model, we propose a novel non-real-valued neuron model called the \textit{quaternion product unit} (QPU) to represent data on the 3D rotation group. The proposed QPU leverages quaternion algebra and the composition law of the 3D rotation group, representing 3D rotation data as quaternions and merging them via a weighted chain of Hamilton products. We demonstrate that the QPU mathematically maintains the $\mathbb{SO}(3)$ structure of the 3D rotation data during inference and disentangles the 3D representations into ``rotation-invariant'' and ``rotation-equivariant'' features, respectively. Moreover, we design a fast QPU to accelerate the computation of the QPU. The fast QPU applies a tree-structured data indexing process and accordingly leverages the power of parallel computing, which reduces the per-thread computational complexity of the QPU from $\mathcal{O}(N)$ to $\mathcal{O}(\log N)$. Taking the fast QPU as a basic module, we develop a series of quaternion neural networks (QNNs), including a quaternion multi-layer perceptron (QMLP), quaternion message passing (QMP), and so on. In addition, we make the QNNs compatible with conventional real-valued neural networks and applicable to both skeletons and point clouds. Experiments on synthetic and real-world 3D tasks show that the QNNs based on our fast QPUs outperform state-of-the-art real-valued models, especially in scenarios requiring robustness to random rotations.
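To make the ``weighted chain of Hamilton products'' concrete, a minimal sketch of a QPU-style aggregation is given below. The particular way the weight $w_i$ and bias $b$ act on the rotation angle of each unit quaternion is an illustrative assumption here; the exact formulation is specified in the main text.
% Illustrative sketch (assumed form): each unit quaternion
% q_i = [cos(theta_i/2), sin(theta_i/2) u_i] is re-weighted on its rotation
% angle, and the re-weighted quaternions are chained by Hamilton products.
\begin{align}
  \mathrm{QPU}(q_1,\dots,q_N;\,\mathbf{w},b)
    &= q_1^{w_1} \otimes q_2^{w_2} \otimes \cdots \otimes q_N^{w_N}, \\
  q_i^{w_i}
    &= \Bigl[\cos\!\Bigl(\tfrac{w_i\theta_i + b}{2}\Bigr),\;
             \sin\!\Bigl(\tfrac{w_i\theta_i + b}{2}\Bigr)\,\mathbf{u}_i\Bigr],
\end{align}
% where \otimes denotes the Hamilton product. Because every factor remains a
% unit quaternion, the chained output also stays on SO(3), which is the
% closure property emphasized above.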