(624f) Exploiting High-Throughput Experiments in Bayesian Optimization
AIChE Annual Meeting, 2022
Computing and Systems Technology Division
Data-driven optimization
Thursday, November 17, 2022 - 2:05pm to 2:24pm
Bayesian optimization (BO) has proven to be one of the most effective machine learning (ML) algorithms for design of experiments (DoE) [6]. BO is widely used in applications such as hyperparameter tuning of ML models and reinforcement learning, and it has been shown to be a sample-efficient learning algorithm [7]. Of particular interest to researchers is the flexibility of BO; specifically, it can accommodate both continuous and discrete parameters [8]. Unfortunately, the inherently sequential nature of BO makes it incompatible with DoE on high-throughput experimentation (HTE) platforms. Ad-hoc modifications that give the BO algorithm parallelization capabilities have been developed [9, 10, 11], and experimental results have demonstrated that these approaches can outperform sequential BO [12]. However, these approaches are limited in the degree of parallelization they can achieve, and they can increase the complexity of the BO algorithm, making it slower and more difficult to implement.
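To make the sequential bottleneck concrete, the sketch below implements a minimal BO loop with a Gaussian-process surrogate and the expected-improvement acquisition function, both standard choices. The toy objective, domain, and parameter settings are illustrative assumptions, not details from the talk.

```python
# A minimal sketch of *sequential* Bayesian optimization on a 1-D toy
# problem. The objective and all settings here are hypothetical.
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

rng = np.random.default_rng(0)

def objective(x):
    # Stand-in for an expensive experiment (minimization target).
    return np.sin(3.0 * x) + 0.5 * x**2

def expected_improvement(X, gp, y_best, xi=0.01):
    # EI for minimization: expected reduction below the incumbent y_best.
    mu, sigma = gp.predict(X, return_std=True)
    sigma = np.maximum(sigma, 1e-9)
    z = (y_best - mu - xi) / sigma
    return (y_best - mu - xi) * norm.cdf(z) + sigma * norm.pdf(z)

# Initial design: a few random points in [-2, 2].
X = rng.uniform(-2.0, 2.0, size=(3, 1))
y = objective(X).ravel()
gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), alpha=1e-6,
                              normalize_y=True)

for _ in range(10):
    gp.fit(X, y)
    # Optimize the AF on a dense grid (adequate for 1-D illustration).
    grid = np.linspace(-2.0, 2.0, 1001).reshape(-1, 1)
    x_next = grid[np.argmax(expected_improvement(grid, gp, y.min()))]
    # The sequential bottleneck: the next experiment cannot be chosen
    # until the previous result is observed and the model is refit.
    y_next = objective(x_next)
    X = np.vstack([X, x_next.reshape(1, 1)])
    y = np.append(y, y_next)

print(f"best x = {X[np.argmin(y)].item():.3f}, best y = {y.min():.3f}")
```

Because each acquisition depends on the model refit after the previous observation, only one experiment is proposed per iteration, which leaves the parallel capacity of an HTE platform idle.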
In this work, we propose strategies for parallelizing BO algorithms, thereby exploiting the capacity of HTE platforms. These strategies center on modifications to the optimization routine of the acquisition function (AF), which serves as the decision-making mechanism of BO. Our approaches introduce new and effective ways of partitioning the parameter space, allowing the AF to select multiple sampling points in tandem; we then assign a module to explore and optimize a specified objective within each partition. By aggregating the collected data into a central model shared with every module, we ensure that the modules observe global rather than local trends, which further improves and accelerates the optimization routine. The methods we propose are scalable to any desired number of experiments, fully parallel, and designed to prevent redundant sampling. We apply our approach to a case study of a chemical reactor network, where the aim is to select the temperature of each reactor so as to minimize the yearly operating cost of the system. In addition to sequential BO, we compare the performance of our parallel BO algorithm with existing parallelization techniques from the literature, such as Hyperspace [13], NxMC [14], and AF optimization over a set of exploratory parameters [15].
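The sketch below illustrates the partitioning idea under stated assumptions: the domain is split into disjoint boxes, one central surrogate is fit to all data, and the AF is maximized independently inside each box to yield one candidate per partition. The uniform partition scheme and the stand-in cost function are hypothetical simplifications, not the talk's actual method or reactor case study.

```python
# A hedged sketch of partition-based parallel BO: one global model,
# one AF maximization per disjoint partition, batch evaluation.
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

rng = np.random.default_rng(1)
LOW, HIGH, N_PARTITIONS = -2.0, 2.0, 4

def objective(x):
    # Hypothetical operating cost to minimize (a stand-in, not the
    # reactor-network model from the case study).
    return np.sin(3.0 * x) + 0.5 * x**2

def expected_improvement(X, gp, y_best, xi=0.01):
    mu, sigma = gp.predict(X, return_std=True)
    sigma = np.maximum(sigma, 1e-9)
    z = (y_best - mu - xi) / sigma
    return (y_best - mu - xi) * norm.cdf(z) + sigma * norm.pdf(z)

# Disjoint partitions of the domain; one candidate will come from each,
# so the batch contains no redundant (clustered) samples by construction.
edges = np.linspace(LOW, HIGH, N_PARTITIONS + 1)

X = rng.uniform(LOW, HIGH, size=(4, 1))
y = objective(X).ravel()
gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), alpha=1e-6,
                              normalize_y=True)

for _ in range(5):
    # One *central* model fit on all data, so every partition's module
    # sees global rather than local trends.
    gp.fit(X, y)
    batch = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        grid = np.linspace(lo, hi, 256).reshape(-1, 1)
        ei = expected_improvement(grid, gp, y.min())
        batch.append(grid[np.argmax(ei)])
    batch = np.array(batch)  # N_PARTITIONS candidates per round
    # All candidates could now be run simultaneously on an HTE platform.
    y_new = objective(batch).ravel()
    X = np.vstack([X, batch])
    y = np.concatenate([y, y_new])

print(f"best x = {X[np.argmin(y)].item():.3f}, best cost = {y.min():.3f}")
```

Since the number of candidates equals the number of partitions, the batch size scales to any desired number of simultaneous experiments simply by refining the partition.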
[1] Thanh Nhat Nguyen, Thuy Tran Phuong Nhat, Ken Takimoto, Ashutosh Thakur, Shun Nishimura, Junya Ohyama, Itsuki Miyazato, Lauren Takahashi, Jun Fujima, Keisuke Takahashi, and Toshiaki Taniike. High-throughput experimentation and catalyst informatics for oxidative coupling of methane. ACS Catalysis, 10(2):921–932, 2020.
[2] Steven M. Mennen, Carolina Alhambra, C. Liana Allen, Mario Barberis, Simon Berritt, Thomas A. Brandt, Andrew D. Campbell, Jesús Castañón, Alan H. Cherney, Melodie Christensen, David B. Damon, J. Eugenio de Diego, Susana García-Cerrada, Pablo García-Losada, Rubén Haro, Jacob Janey, David C. Leitch, Ling Li, Fangfang Liu, Paul C. Lobben, David W. C. MacMillan, Javier Magano, Emma McInturff, Sebastien Monfette, Ronald J. Post, Danielle Schultz, Barbara J. Sitter, Jason M. Stevens, Iulia I. Strambeanu, Jack Twilton, Ke Wang, and Matthew A. Zajac. The evolution of high-throughput experimentation in pharmaceutical development and perspectives on the future. Organic Process Research & Development, 23(6):1213–1242, 2019.
[3] Michael J. Smanski, Hui Zhou, Jan Claesen, Ben Shen, Michael A. Fischbach, and Christopher A. Voigt. Synthetic biology to access and expand nature's chemical diversity. Nature Reviews Microbiology, 14:135–149, 2016.
[4] Joshua A. Selekman, Jun Qiu, Kristy Tran, Jason Stevens, Victor Rosso, Eric Simmons, Yi Xiao, and Jacob Janey. High-throughput automation in chemical process development. Annual Review of Chemical and Biomolecular Engineering, 8:525–547, 2017.
[5] Michael Shevlin. Practical high-throughput experimentation for chemists. ACS Medicinal Chemistry Letters, 8(6):601–607, 2017.
[6] Arpan Biswas, Anna N. Morozovska, Maxim Ziatdinov, Eugene A. Eliseev, and Sergei V. Kalinin. Multi-objective Bayesian optimization of ferroelectric materials with interfacial control for memory and energy storage applications. Journal of Applied Physics, 130(20):204102, 2021.
[7] Jasper Snoek, Oren Rippel, Kevin Swersky, Ryan Kiros, Nadathur Satish, Narayanan Sundaram, Mostofa Patwary, Mr Prabhat, and Ryan Adams. Scalable Bayesian optimization using deep neural networks. In International Conference on Machine Learning, pages 2171–2180. PMLR, 2015.
[8] Eric Brochu, Vlad M. Cora, and Nando De Freitas. A tutorial on Bayesian optimization of expensive cost functions, with application to active user modeling and hierarchical reinforcement learning. arXiv preprint arXiv:1012.2599, 2010.
[9] David Ginsbourger, Rodolphe Le Riche, and Laurent Carraro. Kriging is well-suited to parallelize optimization. In Computational Intelligence in Expensive Optimization Problems, pages 131–162. Springer, 2010.
[10] Thomas Desautels, Andreas Krause, and Joel W. Burdick. Parallelizing exploration-exploitation tradeoffs in Gaussian process bandit optimization. Journal of Machine Learning Research, 15(119):4053–4103, 2014.
[11] Sébastien Marmin, Clément Chevalier, and David Ginsbourger. Differentiating the multipoint expected improvement for optimal batch design. In Machine Learning, Optimization, and Big Data. Springer, 2015.
[12] M. Todd Young, Jacob D. Hinkle, Ramakrishnan Kannan, and Arvind Ramanathan. Distributed Bayesian optimization of reinforcement learning algorithms. Journal of Parallel and Distributed Computing, 139(1):43–52, 2020.
[13] M. Todd Young, Jacob Hinkle, Arvind Ramanathan, and Ramakrishnan Kannan. Hyperspace: Distributed Bayesian hyperparameter optimization. In 2018 30th International Symposium on Computer Architecture and High Performance Computing, pages 339–347. IEEE, 2018.
[14] Jasper Snoek, Hugo Larochelle, and Ryan P. Adams. Practical Bayesian optimization of machine learning algorithms. In Advances in Neural Information Processing Systems, volume 25, pages 2951–2959. Curran Associates, Inc., 2012.
[15] Frank Hutter, Holger H. Hoos, and Kevin Leyton-Brown. Parallel algorithm configuration. In International Conference on Learning and Intelligent Optimization, pages 55–70. Springer, 2012.