Identifying 3D human walking poses in unconstrained environments has many applications, such as enabling prosthetists and clinicians to assess amputees' walking function outside the clinic and helping amputees achieve optimal walking conditions through predictive control. We therefore propose the wearable motion capture problem of reconstructing and predicting 3D human poses from wearable IMU sensors and wearable cameras. To solve this challenging problem, we introduce a novel Attention-Oriented Recurrent Neural Network (AttRNet) that contains a sensor-wise attention-oriented recurrent encoder, a reconstruction module, and a dynamic temporal attention-oriented recurrent decoder to reconstruct the current pose and predict future poses. To evaluate our approach, we collected a new WearableMotionCapture dataset using wearable IMUs and wearable video cameras, along with musculoskeletal joint-angle ground truth. The proposed AttRNet achieves high accuracy on the WearableMotionCapture dataset, and it also outperforms the state-of-the-art methods on two public pose prediction datasets with IMU-only data: DIP-IMU and TotalCapture. The source code and the new dataset will be publicly available at https://github.com/MoniruzzamanMd/Wearable-Motion-Capture.
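To make the three-module structure concrete, the following is a minimal PyTorch sketch of the encoder/reconstruction/decoder pipeline named above. All layer choices, dimensions, and names (e.g., `sensor_attn`, `temporal_attn`, the GRU cells) are illustrative assumptions for exposition, not the authors' implementation.

```python
import torch
import torch.nn as nn

class AttRNetSketch(nn.Module):
    """Minimal sketch of the AttRNet structure described in the abstract.
    Layer types and sizes are assumptions, not the published architecture."""

    def __init__(self, n_sensors=8, feat_dim=32, hidden=128, n_joints=20, horizon=5):
        super().__init__()
        # Sensor-wise attention: score each per-sensor feature stream per frame.
        self.sensor_attn = nn.Linear(feat_dim, 1)
        # Attention-oriented recurrent encoder over the fused sensor features.
        self.encoder = nn.GRU(feat_dim, hidden, batch_first=True)
        # Reconstruction module: current-frame joint angles.
        self.reconstruct = nn.Linear(hidden, n_joints)
        # Dynamic temporal attention over encoder states for the decoder.
        self.temporal_attn = nn.Linear(hidden, 1)
        self.decoder = nn.GRU(hidden, hidden, batch_first=True)
        self.predict = nn.Linear(hidden, n_joints)
        self.horizon = horizon

    def forward(self, x):
        # x: (batch, time, n_sensors, feat_dim) per-sensor features.
        B, T, S, F = x.shape
        # Sensor-wise attention weights, normalized across sensors.
        w = torch.softmax(self.sensor_attn(x), dim=2)           # (B, T, S, 1)
        fused = (w * x).sum(dim=2)                              # (B, T, F)
        enc_out, h = self.encoder(fused)                        # (B, T, H)
        # Reconstruct the current pose from the last encoder state.
        current_pose = self.reconstruct(enc_out[:, -1])         # (B, n_joints)
        # Temporal attention: weight encoder states into a context vector.
        a = torch.softmax(self.temporal_attn(enc_out), dim=1)   # (B, T, 1)
        context = (a * enc_out).sum(dim=1, keepdim=True)        # (B, 1, H)
        # Roll the decoder forward to predict future poses.
        dec_out, _ = self.decoder(context.expand(B, self.horizon, -1), h)
        future_poses = self.predict(dec_out)                    # (B, horizon, n_joints)
        return current_pose, future_poses
```

The two attention steps mirror the abstract's description: the sensor-wise attention reweights IMU and camera feature streams before encoding, and the temporal attention reweights encoder states before the decoder predicts the future pose sequence.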