In this work, we propose a novel training paradigm designed to support transfer learning for more effective classification of multispectral airborne imagery. Current state-of-the-art approaches typically rely either on RGB (red-green-blue) pretraining alone or on in-domain transfer learning for multispectral image classification. Our approach instead constructs and trains two separate neural network backbones: one for the wavelengths with available pretrained data (such as the visible bands) and another trained from scratch on all bands available in the dataset. The two backbones are then joined by a fully-connected layer or multi-layer perceptron trained on the features from both networks. This lets us exploit both the generalizable features learned from large RGB datasets and the information carried by the full spectrum of multispectral bands. We evaluate on the BigEarthNet and EuroSAT datasets, which comprise Sentinel-2 satellite imagery in the visible and infrared bands. Our approach yields considerable performance gains on every evaluation metric we used for these datasets, and the gains are consistent across a variety of backbone architectures, underlining the efficacy of our transfer-learning technique for the analysis of multispectral data.
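
For concreteness, the dual-backbone fusion described above can be sketched in PyTorch as follows. This is a minimal illustration, not the exact implementation: the class name DualBackboneClassifier, the choice of ResNet-50 for both backbones, the hidden dimension of the fusion head, and the assumption that the RGB channels occupy the first three input indices are all illustrative choices.

```python
import torch
import torch.nn as nn
from torchvision import models


class DualBackboneClassifier(nn.Module):
    """Sketch of the two-backbone design: an RGB-pretrained network for the
    visible bands plus a from-scratch network for all bands, fused by an MLP.
    All names and hyperparameters here are hypothetical."""

    def __init__(self, num_bands: int, num_classes: int, hidden_dim: int = 512):
        super().__init__()
        # Backbone A: pretrained on an RGB dataset (ImageNet weights here),
        # consuming only the three visible bands.
        self.rgb_backbone = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
        rgb_feat_dim = self.rgb_backbone.fc.in_features
        self.rgb_backbone.fc = nn.Identity()  # expose pooled features

        # Backbone B: trained from scratch on all available bands; the first
        # convolution is widened to accept num_bands input channels.
        self.ms_backbone = models.resnet50(weights=None)
        self.ms_backbone.conv1 = nn.Conv2d(
            num_bands, 64, kernel_size=7, stride=2, padding=3, bias=False
        )
        ms_feat_dim = self.ms_backbone.fc.in_features
        self.ms_backbone.fc = nn.Identity()

        # Fusion head: an MLP trained on the concatenated features from
        # both networks (a single linear layer is the simpler variant).
        self.head = nn.Sequential(
            nn.Linear(rgb_feat_dim + ms_feat_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, num_classes),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Assumption: the RGB channels sit at indices 0..2 of the band axis.
        rgb_features = self.rgb_backbone(x[:, :3])
        ms_features = self.ms_backbone(x)
        return self.head(torch.cat([rgb_features, ms_features], dim=1))


# Usage example: 13 Sentinel-2 bands, 10 classes (EuroSAT-style setting).
model = DualBackboneClassifier(num_bands=13, num_classes=10)
logits = model(torch.randn(2, 13, 224, 224))  # -> shape (2, 10)
```

One plausible training regime under this sketch is to freeze or lightly fine-tune the pretrained RGB backbone while training the multispectral backbone and the fusion head end to end, though the paradigm itself only requires that the head be trained on features from both networks.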