Deploying a data-driven model in a real environment is challenging because the distribution of the input data usually differs from that of the training data. This distribution shift can considerably degrade the reliability and robustness of the model output. In this study, we develop a chance-constrained back-mapping approach, based on Kantorovich transportation, that transforms the features of input data with a varying distribution into the distribution of the training dataset. Since the Kantorovich method cannot guarantee a faithful back-mapping of samples with highly variable size and distribution, we introduce a constraint that restricts the transformation so that high accuracy of the model output is ensured with high certainty. In addition, to reflect the data variability, the upper bound of the constraint is treated as a random parameter whose distribution is estimated online using a rolling-average method, and chance-constrained optimization is used to satisfy the constraint in a probabilistic manner. The proposed approach therefore ensures, with high certainty, high accuracy of the trained model when deployed in a variable environment. Both analytical and experimental verifications of the approach are carried out. Results on synthetic and real datasets demonstrate a clear improvement in accuracy and robustness compared with existing methods.
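
To illustrate the core idea in code, the following Python sketch combines a discrete Kantorovich (optimal-transport) back-mapping of an incoming batch toward the training sample with a simple chance-constrained gate, where the random upper bound of the constraint is tracked online through a rolling window. This is a minimal conceptual illustration, not the paper's algorithm: the equal-sized uniformly weighted samples, the squared-Euclidean cost, the batch cost proxy, and the helper names (`backmap_batch`, `RollingBound`) are all assumptions made for the example.

```python
import numpy as np
from collections import deque
from scipy.optimize import linear_sum_assignment


def backmap_batch(X_new, X_train, blend=1.0):
    """Discrete Kantorovich back-mapping (illustrative sketch).

    With equal-sized, uniformly weighted samples and a squared-Euclidean
    cost, the optimal transport plan reduces to an assignment problem.
    Each incoming sample is moved toward its matched training sample;
    blend in [0, 1] controls how far it is transported.
    """
    # Cost matrix of squared Euclidean distances (n_new x n_train).
    cost = ((X_new[:, None, :] - X_train[None, :, :]) ** 2).sum(axis=-1)
    rows, cols = linear_sum_assignment(cost)  # optimal one-to-one matching
    X_mapped = X_new.copy()
    X_mapped[rows] = (1 - blend) * X_new[rows] + blend * X_train[cols]
    return X_mapped


class RollingBound:
    """Online rolling-average estimate of the random constraint bound."""

    def __init__(self, window=50):
        self.history = deque(maxlen=window)

    def update(self, observed_bound):
        self.history.append(observed_bound)

    def constraint_holds(self, batch_cost, eps=0.05):
        """Chance constraint (empirical form): accept the mapping only if
        P(batch_cost <= bound) >= 1 - eps over the rolling window."""
        if not self.history:
            return True  # no evidence yet; accept the mapping
        samples = np.asarray(self.history)
        return np.mean(batch_cost <= samples) >= 1 - eps


# Illustrative usage with synthetic data and a crude per-batch cost proxy.
rng = np.random.default_rng(0)
X_train = rng.normal(0.0, 1.0, size=(64, 3))   # training-distribution features
X_new = rng.normal(1.5, 2.0, size=(64, 3))     # shifted input batch

bound = RollingBound(window=50)
batch_cost = ((X_new - X_train) ** 2).sum() / len(X_new)  # rough shift measure
if bound.constraint_holds(batch_cost, eps=0.05):
    X_new = backmap_batch(X_new, X_train, blend=0.8)
bound.update(batch_cost)  # refine the bound estimate online
```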