Engineering design problems typically require optimizing a quality measure by finding the right combination of controllable input parameters. In additive manufacturing (AM), the output characteristics of the process are often non-stationary functions of the process parameters. Bayesian Optimization (BO) is a methodology for optimizing such “black-box” functions, i.e., functions whose input-output relationship is unknown and expensive to evaluate. BO with a Gaussian Process (GP) regression surrogate model is widely used for such optimization tasks. GPs with standard kernels are insufficient for modeling non-stationary functions, while GPs with non-stationary kernels are typically over-parameterized. A Deep Gaussian Process (DGP), a composition of multiple GPs, can overcome these shortcomings. However, inference in a DGP is challenging because its structure yields a non-Gaussian posterior, so using a DGP as a surrogate model for BO is not straightforward. Stochastic Imputation (SI)-based inference is promising in both speed and accuracy for BO. This work proposes a bootstrap-aggregation-based procedure to effectively utilize SI-based inference for BO with a DGP surrogate model. The proposed BO algorithm, DGP-SI-BO, is faster and empirically better than a state-of-the-art BO method at optimizing non-stationary functions. Several analytical test functions and a case study in metal additive manufacturing simulation demonstrate the applicability of the proposed method.
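As a rough illustration of the surrogate-based loop the abstract builds on (not the paper's DGP-SI-BO algorithm), a minimal BO sketch with a single standard-kernel GP surrogate and a UCB acquisition is shown below; the RBF kernel, length-scale, grid-based acquisition maximization, and toy objective are all illustrative assumptions:

```python
import numpy as np

def rbf_kernel(A, B, length=0.2, var=1.0):
    """Squared-exponential (stationary) kernel between 1-D point sets."""
    d2 = (A[:, None] - B[None, :]) ** 2
    return var * np.exp(-0.5 * d2 / length**2)

def gp_posterior(X, y, Xs, noise=1e-4):
    """GP posterior mean and std at test points Xs given data (X, y)."""
    K = rbf_kernel(X, X) + noise * np.eye(len(X))
    Ks = rbf_kernel(Xs, X)
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    mu = Ks @ alpha
    v = np.linalg.solve(L, Ks.T)
    var = 1.0 - np.sum(v**2, axis=0)          # unit prior variance
    return mu, np.sqrt(np.maximum(var, 0.0))

def bo_ucb(f, n_iter=15, beta=2.0, seed=0):
    """Sequential BO: fit GP, maximize UCB on a grid, evaluate f, repeat."""
    rng = np.random.default_rng(seed)
    X = rng.uniform(0, 1, size=3)             # small initial design
    y = np.array([f(x) for x in X])
    grid = np.linspace(0, 1, 200)
    for _ in range(n_iter):
        mu, sd = gp_posterior(X, y, grid)
        x_next = grid[np.argmax(mu + beta * sd)]   # UCB acquisition
        X = np.append(X, x_next)
        y = np.append(y, f(x_next))
    return X[np.argmax(y)], y.max()

# Toy objective standing in for an expensive black-box simulation.
f = lambda x: x * np.sin(10 * x)
x_best, y_best = bo_ucb(f)
```

A DGP surrogate would replace `gp_posterior` with a composition of GP layers whose non-Gaussian posterior requires approximate inference such as SI; the loop structure itself is unchanged.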

Bo Shen


In many scientific and engineering applications, Bayesian optimization (BO) is a powerful tool for hyperparameter tuning of machine learning models, materials design and discovery, and more. BO guides the choice of experiments sequentially to find a good combination of design points in as few experiments as possible; it can be formulated as the optimization of a “black-box” function. Unlike single-task BO, multi-task Bayesian optimization is a general method to efficiently optimize multiple different but correlated “black-box” functions. Previous multi-task Bayesian optimization algorithms query a point to be evaluated for all tasks in each round of search, which is inefficient: when tasks are correlated, it is not necessary to evaluate every task at a given query point. The objective of this work is therefore to develop a multi-task Bayesian optimization algorithm with automatic task selection, so that only one task evaluation is needed per query round. Specifically, a new algorithm, multi-task Gaussian process upper confidence bound (MT-GPUCB), is proposed to achieve this objective. MT-GPUCB is a two-step algorithm: the first step chooses which query point to evaluate, and the second step automatically selects the most informative task to evaluate. Under the bandit setting, a theoretical analysis shows that MT-GPUCB is no-regret under mild conditions. The proposed algorithm is verified experimentally on a range of synthetic functions as well as real-world problems; the results clearly show the advantages of our query strategy for both design point and task.
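The two-step query strategy can be sketched in simplified form, with independent per-task GPs standing in for the paper's multi-task GP model; the toy tasks, kernel, and posterior-variance task-selection rule below are illustrative assumptions, not the paper's exact construction:

```python
import numpy as np

def rbf(A, B, ell=0.2):
    """Squared-exponential kernel (unit variance) for 1-D inputs."""
    return np.exp(-0.5 * (A[:, None] - B[None, :]) ** 2 / ell**2)

def posterior(X, y, grid, noise=1e-4):
    """GP posterior mean and std on a grid from observations (X, y)."""
    K = rbf(X, X) + noise * np.eye(len(X))
    Ks = rbf(grid, X)
    mu = Ks @ np.linalg.solve(K, y)
    var = 1.0 - np.sum(Ks * np.linalg.solve(K, Ks.T).T, axis=1)
    return mu, np.sqrt(np.maximum(var, 0.0))

# Two correlated toy tasks (hypothetical cheap stand-ins).
tasks = [lambda x: np.sin(6 * x), lambda x: np.sin(6 * x + 0.3)]
rng = np.random.default_rng(1)
grid = np.linspace(0, 1, 100)

# Per-task observation lists, seeded with 2 random evaluations each.
obs = []
for f in tasks:
    X0 = rng.uniform(0, 1, 2)
    obs.append([list(X0), [f(x) for x in X0]])

beta = 2.0
for _ in range(10):
    stats = [posterior(np.array(X), np.array(y), grid) for X, y in obs]
    ucb = np.array([mu + beta * sd for mu, sd in stats])  # (task, grid)
    i = np.argmax(ucb.max(axis=0))              # step 1: choose query point
    t = np.argmax([sd[i] for _, sd in stats])   # step 2: most uncertain task
    obs[t][0].append(grid[i])
    obs[t][1].append(tasks[t](grid[i]))         # evaluate only that one task

counts = [len(X) for X, _ in obs]
```

Each round costs a single task evaluation, in contrast with evaluating every task at the query point; which task is queried varies across rounds as the per-task uncertainties shrink.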