Raphael Wanjiku: Public Documents (2)
Transfer learning data adaptation using conflation of low-level textural features
Raphael Wanjiku and 2 more
August 17, 2022
Adapting a target dataset to a pre-trained model remains challenging. These adaptation problems stem from inadequate transfer of traits from the source dataset, which often leads to poor model performance and trial-and-error selection of the best-performing pre-trained model. This paper introduces the conflation of source-domain low-level textural features extracted using the first layer of the pre-trained model. The extracted features are compared with the conflated low-level features of the target dataset to select a higher-quality target dataset for improved pre-trained model performance and adaptation. After comparing various probability distance metrics, Kullback-Leibler divergence is adopted to compare samples from the two domains. We experiment on three publicly available datasets and two ImageNet pre-trained models used in past studies for comparison of results. The proposed approach yields two categories of target samples, with those having lower Kullback-Leibler values giving better accuracy, precision and recall. The samples with lower Kullback-Leibler values improve accuracy by a margin of 6.21% to 7.27%, leading to better model adaptation for target transfer learning datasets and tasks.
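The sketch below is a minimal illustration of the core idea in this abstract, not the authors' implementation. It assumes a Keras ImageNet model (VGG16), treats the first convolutional layer as the low-level textural feature extractor, and approximates the paper's "conflation" by pooling first-layer activations into a single normalised histogram; the function names, binning, and selection rule are all assumptions.

```python
# Minimal sketch (not the authors' code): score target images by the
# Kullback-Leibler divergence between low-level textural features of the
# source and target domains, using the first conv layer of a pre-trained model.
import numpy as np
import tensorflow as tf
from scipy.stats import entropy  # entropy(p, q) computes KL(p || q)

# First convolutional layer of an ImageNet pre-trained model serves as the
# low-level (textural) feature extractor.
base = tf.keras.applications.VGG16(weights="imagenet", include_top=False)
first_conv = tf.keras.Model(base.input, base.get_layer("block1_conv1").output)

def feature_histogram(images, bins=64):
    """Conflate the low-level features of a batch into one probability
    distribution via a normalised histogram of activations (assumption:
    this approximates the paper's feature conflation)."""
    feats = first_conv.predict(images, verbose=0).ravel()
    hist, _ = np.histogram(feats, bins=bins, range=(0.0, feats.max() + 1e-8))
    hist = hist.astype(np.float64) + 1e-8  # avoid zero bins in the KL term
    return hist / hist.sum()

def kl_scores(source_images, target_images):
    """KL divergence of each target sample's feature distribution from the
    conflated source distribution; lower scores indicate target samples
    better matched to the source domain."""
    p_source = feature_histogram(source_images)
    return [entropy(feature_histogram(img[None, ...]), p_source)
            for img in target_images]

# Usage: keep the target samples with the lowest KL scores, e.g.
# scores = kl_scores(src_batch, tgt_batch)
# keep = np.argsort(scores)[: len(scores) // 2]
```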
Dynamic Fine-tuning Layer Selection Using Kullback-Leibler Divergence
Raphael Wanjiku and 2 more
March 25, 2022
The selection of layers in the transfer learning fine-tuning process determines a pre-trained model’s accuracy and adaptation in a new target domain. However, the selection process is still manual and lacks clearly defined criteria. If the wrong layers of a neural network are selected and used, the result can be poor accuracy and model generalisation in the target domain. This paper introduces the use of Kullback-Leibler divergence on the weight correlations of the model’s convolutional neural network layers. The approach identifies the positive and negative weights among the ImageNet initial weights and selects the best-suited layers of the network based on the correlation divergence. We experiment on four publicly available datasets and four ImageNet pre-trained models used in past studies for comparison of results. The proposed approach yields better accuracies than the standard fine-tuning baselines, with a margin accuracy rate of 10.8% to 24%, leading to better model adaptation for target transfer learning tasks.
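As a rough sketch of how such a layer-selection criterion could look in code (again, not the authors' implementation): the snippet below ranks convolutional layers by the Kullback-Leibler divergence between the magnitude distributions of each layer's positive and negative ImageNet initial weights, then unfreezes only the most divergent layers. The interpretation of "weight correlation" as a positive-versus-negative weight comparison, the choice of ResNet50, and the top-5 cutoff are assumptions.

```python
# Minimal sketch (not the authors' code): rank conv layers for fine-tuning
# by the KL divergence between the distributions of positive and negative
# ImageNet initial weights in each layer.
import numpy as np
import tensorflow as tf
from scipy.stats import entropy  # entropy(p, q) computes KL(p || q)

model = tf.keras.applications.ResNet50(weights="imagenet", include_top=False)

def pos_neg_kl(kernel, bins=64):
    """KL divergence between the magnitude distributions of a layer's
    positive and negative weights (assumption: one plausible reading of
    the paper's 'weight correlation' divergence)."""
    w = kernel.ravel()
    pos, neg = w[w > 0], -w[w < 0]
    hi = max(pos.max(), neg.max())
    p, _ = np.histogram(pos, bins=bins, range=(0.0, hi))
    q, _ = np.histogram(neg, bins=bins, range=(0.0, hi))
    p = p.astype(np.float64) + 1e-8  # avoid zero bins in the KL term
    q = q.astype(np.float64) + 1e-8
    return entropy(p / p.sum(), q / q.sum())

# Score every conv layer, then fine-tune only the most divergent ones.
scores = {layer.name: pos_neg_kl(layer.get_weights()[0])
          for layer in model.layers
          if isinstance(layer, tf.keras.layers.Conv2D)}
selected = sorted(scores, key=scores.get, reverse=True)[:5]  # assumed cutoff
for layer in model.layers:
    layer.trainable = layer.name in selected
```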