Point clouds are a vital component of computer vision and robotics, enabling the representation and processing of three-dimensional data. However, their utility is often limited by significant variations in data across domains, impeding the transfer of knowledge between different scenarios. Moreover, existing approaches perform domain adaptation on point clouds only in a single-source setting. To address these issues, we conduct the first investigation of multi-source domain adaptation for point clouds using the Domain-Invariant-Feature-Learning (DIFL) method. Our approach tackles data variability between domains by integrating multi-domain contrastive learning and pseudo-label guided fine-tuning, yielding a domain-invariant feature representation. Multi-domain contrastive learning maximizes agreement among multiple source point clouds in an invariant feature space, promoting invariance between source and target features. In addition, pseudo-label guided fine-tuning minimizes the domain discrepancy by further aligning the pseudo-centroids of each class between the source and target domains. Our experiments demonstrate the superior performance of DIFL compared to existing techniques, marking a significant advance in multi-source domain adaptation for point clouds.
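As a rough illustration only (not the authors' implementation, whose details are not given here), the two objectives described above can be sketched in NumPy: an NT-Xent-style contrastive term that maximizes agreement between paired features from two source domains, and a class-wise alignment term that pulls target pseudo-centroids toward source centroids. All function names and the loss forms are illustrative assumptions.

```python
import numpy as np

def normalize(x, eps=1e-8):
    """L2-normalize feature vectors along the last axis."""
    return x / (np.linalg.norm(x, axis=-1, keepdims=True) + eps)

def multi_domain_contrastive_loss(feats_a, feats_b, temperature=0.1):
    """NT-Xent-style sketch: row i of feats_a and row i of feats_b are a
    positive pair; every other pair in the batch acts as a negative."""
    za, zb = normalize(feats_a), normalize(feats_b)
    logits = za @ zb.T / temperature              # (N, N) cosine similarities
    logits -= logits.max(axis=1, keepdims=True)   # numerical stability
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_prob))            # positives sit on the diagonal

def centroid_alignment_loss(src_feats, src_labels, tgt_feats,
                            tgt_pseudo_labels, num_classes):
    """Squared L2 distance between each class's source centroid and its
    target pseudo-centroid, averaged over classes present in both domains."""
    total, count = 0.0, 0
    for c in range(num_classes):
        s_mask = src_labels == c
        t_mask = tgt_pseudo_labels == c
        if s_mask.any() and t_mask.any():
            s_centroid = src_feats[s_mask].mean(axis=0)
            t_centroid = tgt_feats[t_mask].mean(axis=0)  # pseudo-centroid
            total += np.sum((s_centroid - t_centroid) ** 2)
            count += 1
    return total / max(count, 1)
```

In a training loop, the contrastive term would be computed on encoder features from multiple source domains, while the centroid term would use classifier pseudo-labels on the unlabeled target batch; their weighted sum would then be minimized.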