Federated learning (FL) is a distributed machine learning paradigm in which user devices (UDs) train a shared model without exchanging their raw data, thereby preserving privacy. When FL is deployed in wireless networks, a key issue is the uplink transmission bottleneck, since most UDs are expected to transmit their updated model parameters to the server almost simultaneously. This paper proposes a topology-based approach to address this issue, with the objective of minimizing the total uplink transmission time. The key idea of our approach is to apply Tree All-Reduce (TAR), a parameter aggregation scheme enabled by device-to-device (D2D) communications, to identify the optimal transmission sequence and topology in wireless FL. We formulate TAR structure formation as a mixed-integer nonlinear programming (MINLP) problem, show that it is NP-hard, and solve it with two algorithms based on the Minimum Spanning Tree and the Upper Confidence bounds applied to Trees, respectively. Theoretical analysis and numerical results show that our approach significantly reduces the total uplink transmission time compared with existing benchmarks; in particular, the reduction can exceed $80\%$ in scenarios with a high intensity of UDs.
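As a loose illustration of the Minimum Spanning Tree idea mentioned above (the paper's actual formulation, cost model, and algorithm details are not reproduced here), the following sketch builds an aggregation tree over UDs with Prim's algorithm, assuming a hypothetical pairwise transmission-time matrix and treating node 0 as the server:

```python
import heapq

def mst_aggregation_tree(times):
    """Build a spanning tree over nodes 0..n-1 using Prim's algorithm.

    `times[i][j]` is a hypothetical transmission time between nodes i
    and j (uplink or D2D); node 0 plays the role of the server.
    Returns a dict mapping each non-root node to its parent, i.e. the
    node it forwards its aggregated parameters to.
    """
    n = len(times)
    parent = {}
    visited = {0}
    # Min-heap of candidate links: (link_time, from_node, to_node).
    heap = [(times[0][j], 0, j) for j in range(1, n)]
    heapq.heapify(heap)
    while len(visited) < n:
        t, u, v = heapq.heappop(heap)
        if v in visited:
            continue  # Node already attached via a cheaper link.
        visited.add(v)
        parent[v] = u
        # New frontier links from the freshly attached node.
        for w in range(n):
            if w not in visited:
                heapq.heappush(heap, (times[v][w], v, w))
    return parent
```

In this toy model each UD sends its parameters once, along its tree edge toward the server, so minimizing total link weight is a crude stand-in for the paper's total-transmission-time objective; the actual MINLP additionally captures the transmission sequence.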