Walking gait data measured using force platforms are a promising means for person re-identification in authentication and surveillance scenarios. We aimed to determine the most discriminant components of force platform data using a two-stream Convolutional Recurrent Neural Network (KineticNet). Each stream of the network extracts features pertaining to a single stance phase; these features are then fused to represent the entire gait cycle. Over two sessions, ground reaction forces (Fx, Fy, Fz), moments (Mx, My, Mz), and center of pressure coordinates (Cx, Cy) were acquired from 118 participants as they walked across our laboratory five times at their preferred speed. For each participant and each session, up to three samples were reserved for network training, leaving one sample for network validation and one for network testing. KineticNet’s performance was evaluated using both individual-component and multi-component inputs before ablation studies were conducted on its architecture. Fz was the most discriminant individual component, and re-identification using Fz, Fy, and Cy together achieved the highest overall accuracy of 96.02%. These results warrant further investigation into the utility of force platforms as an accessory or alternative to video cameras for gait-based person re-identification.
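The two-stream idea described above — extracting features from each stance phase separately and fusing them to represent the full gait cycle — can be illustrated with a minimal NumPy sketch. This is not KineticNet itself: the convolution-plus-pooling feature extractor, the fusion by concatenation, the layer sizes, and the random weights are all illustrative assumptions standing in for the trained convolutional-recurrent streams.

```python
import numpy as np

rng = np.random.default_rng(0)

def stream_features(signal, kernels):
    """One stream: 1-D convolutions over a stance-phase signal,
    followed by temporal average pooling (a stand-in for the
    convolutional-recurrent feature extractor)."""
    return np.array([np.convolve(signal, k, mode="valid").mean()
                     for k in kernels])

def reidentify(phase_a, phase_b, kernels, W, b):
    """Fuse the two per-phase feature vectors by concatenation,
    then score the enrolled identities with a softmax layer."""
    fused = np.concatenate([stream_features(phase_a, kernels),
                            stream_features(phase_b, kernels)])
    logits = fused @ W + b
    exp = np.exp(logits - logits.max())      # numerically stable softmax
    return exp / exp.sum()                   # probabilities over identities

# Toy setup: 8 random convolution kernels and 118 enrolled identities
# (matching the study's participant count); weights are untrained.
kernels = [rng.standard_normal(5) for _ in range(8)]
W = rng.standard_normal((16, 118))
b = np.zeros(118)

# An Fz-like signal for each of the two stance phases in one gait cycle.
phase_a = rng.standard_normal(100)
phase_b = rng.standard_normal(100)
probs = reidentify(phase_a, phase_b, kernels, W, b)
```

The sketch returns a probability distribution over the 118 identities; in the actual system the kernels and weights would be learned end to end, and the streams would be recurrent rather than simple pooled convolutions.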