(712a) Efficient Local Multi-Fidelity Optimization of High-Dimensional Objective Functions Using Cost-Aware Gradient Entropy Search (CAGES)
2024 AIChE Annual Meeting
Computing and Systems Technology Division
10C: Data-driven Optimization
Thursday, October 31, 2024 - 3:30pm to 3:51pm
In many real-world scenarios, we have access to multiple lower-fidelity approximations of the true, expensive objective function, with each approximation differing in accuracy and cost. For example, in reinforcement learning (RL) problems, one can adjust knobs related to the accuracy of the simulation environment, the control policy, or the uncertainty quantification, all of which reduce the cost of estimating the reward. Global Bayesian optimization (BO) [1] has been shown to adapt to such multi-fidelity optimization settings; however, these methods do not scale to high-dimensional problems. Furthermore, existing multi-fidelity BO methods [4,5] make strong assumptions about the relationship between the different fidelity levels, which may not be satisfied in practice.

In this work, we propose a new local multi-fidelity BO algorithm called Cost-Aware Gradient Entropy Search (CAGES). CAGES relies on a latent variable Gaussian process (LVGP) [6] to model the multi-fidelity objective functions, which enables the relationship between the different information sources (approximations) to be learned from data. As a result, no prior knowledge is needed about how the approximations are ordered (e.g., which approximation most closely matches the true objective). CAGES further takes advantage of a newly proposed acquisition function that quantifies the information gain about the objective's gradient per unit query cost, where information is measured in terms of Shannon differential entropy.

Using multiple synthetic and RL problems, we demonstrate that CAGES outperforms several other global and local derivative-free optimization methods, consistently finding higher-quality solutions at a fraction of the cost of the tested alternatives.
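To make the method description concrete, the following is a minimal, self-contained Python sketch (not the authors' implementation) of the two ingredients above: an LVGP-style kernel that embeds each information source into a learned latent vector, and a cost-aware acquisition score that divides an approximate entropy reduction by the query cost of the selected source. As a simplification, the sketch measures entropy of the objective value at the target fidelity rather than of its gradient, and all function names, the toy data, and the cost values are illustrative assumptions.

import numpy as np

def rbf(X1, X2, lengthscale=1.0):
    # Squared-exponential kernel between the rows of X1 and X2.
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / lengthscale**2)

def latent_kernel(X1, s1, X2, s2, Z, lengthscale=1.0):
    # LVGP-style kernel: product of an RBF over the inputs x and an RBF over
    # the learned latent embeddings Z[s] of the information-source labels s.
    return rbf(X1, X2, lengthscale) * rbf(Z[s1], Z[s2], 1.0)

def posterior_var(x_star, s_star, X, s, Z, noise=1e-4):
    # GP posterior variance at (x_star, s_star) given past queries (X, s).
    K = latent_kernel(X, s, X, s, Z) + noise * np.eye(len(X))
    k_star = latent_kernel(x_star[None, :], np.array([s_star]), X, s, Z)
    k_ss = latent_kernel(x_star[None, :], np.array([s_star]),
                         x_star[None, :], np.array([s_star]), Z)
    return (k_ss - k_star @ np.linalg.solve(K, k_star.T))[0, 0]

def cost_aware_score(x_star, s_star, X, s, Z, cost):
    # Approximate entropy reduction at the target fidelity (source 0) per unit
    # cost of querying source s_star. For a Gaussian, the differential entropy
    # is 0.5*log(2*pi*e*var), so the reduction is half the log variance ratio.
    var_before = posterior_var(x_star, 0, X, s, Z)
    X_aug = np.vstack([X, x_star[None, :]])
    s_aug = np.append(s, s_star)
    var_after = posterior_var(x_star, 0, X_aug, s_aug, Z)
    info_gain = 0.5 * np.log(var_before / max(var_after, 1e-12))
    return info_gain / cost[s_star]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.uniform(-1.0, 1.0, size=(8, 3))   # past query locations
    s = rng.integers(0, 2, size=8)            # information-source label per query
    Z = rng.normal(size=(2, 2))               # latent source embeddings (learned in practice)
    cost = {0: 10.0, 1: 1.0}                  # source 0 is the expensive "true" objective
    x_new = rng.uniform(-1.0, 1.0, size=3)
    for source in (0, 1):
        print(source, cost_aware_score(x_new, source, X, s, Z, cost))

In practice, the latent source embeddings and kernel hyperparameters would be estimated from data (e.g., by maximum likelihood), and the next query would be chosen by maximizing a score of this form jointly over the candidate point and the information source.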
References:
[1] Frazier, P. I. (2018). A tutorial on Bayesian optimization. arXiv preprint arXiv:1807.02811.
[2] Kandasamy, K., Schneider, J., & Póczos, B. (2015). High dimensional Bayesian optimisation and bandits via additive models. In International Conference on Machine Learning (pp. 295-304). PMLR.
[3] Müller, S., von Rohr, A., & Trimpe, S. (2021). Local policy search with Bayesian optimization. Advances in Neural Information Processing Systems, 34, 20708-20720.
[4] Kandasamy, K., Dasarathy, G., Oliva, J. B., Schneider, J., & Póczos, B. (2016). Gaussian process bandit optimisation with multi-fidelity evaluations. Advances in Neural Information Processing Systems, 29.
[5] Poloczek, M., Wang, J., & Frazier, P. (2017). Multi-information source optimization. Advances in Neural Information Processing Systems, 30.
[6] Zhang, Y., Tao, S., Chen, W., & Apley, D. W. (2020). A latent variable approach to Gaussian process modeling with qualitative and quantitative factors. Technometrics, 62(3), 291-302.
[7] Jones, D. R., Schonlau, M., & Welch, W. J. (1998). Efficient global optimization of expensive black-box functions. Journal of Global Optimization, 13, 455-492.