(183d) Flowsheet Synthesis through Graph-Based Reinforcement Learning
AIChE 2022 Annual Meeting
Computing and Systems Technology Division
Advances in Process Design
Monday, November 14, 2022 - 4:15pm to 4:30pm
We propose a reinforcement learning algorithm for chemical process design based on a state-of-the-art actor-critic architecture. We implement a hierarchical and hybrid decision-making process to generate flowsheets, in which unit operations are placed iteratively as discrete decisions and the corresponding design variables are selected as continuous decisions. This presentation extends the preceding research [1-5] in two ways. First, the state representation of the flowsheets is enhanced from a matrix data structure to a graph data structure; in practice, this allows the flowsheets to be processed with graph convolutional neural networks instead of convolutional neural networks. Second, the agent selects both unit operations and continuous design variables, which was not achieved in previous work. The action space therefore contains discrete as well as continuous decisions and is called a hybrid action space.
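To make the graph state representation and the hybrid action space concrete, the following minimal sketch in Python encodes a small flowsheet as node features plus an adjacency matrix, applies a single graph-convolution layer to obtain a flowsheet embedding, and pairs a discrete unit choice with a continuous design variable. The unit types, feature layout, and layer definition are illustrative assumptions, not the implementation presented in this work.

# Minimal sketch (not the authors' implementation): a flowsheet state as a graph
# and one hybrid action. Unit types and feature choices are illustrative assumptions.
import numpy as np

# Node features: one-hot unit type plus one design-variable slot; edges are streams.
UNIT_TYPES = ["feed", "pfr", "heat_exchanger", "distillation", "product"]

def node_features(unit_type, design_value=0.0):
    x = np.zeros(len(UNIT_TYPES) + 1)
    x[UNIT_TYPES.index(unit_type)] = 1.0
    x[-1] = design_value                      # e.g. reactor volume or split ratio
    return x

# A tiny flowsheet: feed -> plug flow reactor -> distillation column
X = np.stack([node_features("feed"),
              node_features("pfr", design_value=0.8),
              node_features("distillation", design_value=0.5)])
A = np.array([[0, 1, 0],
              [0, 0, 1],
              [0, 0, 0]], dtype=float)        # directed stream connections

def gcn_layer(X, A, W):
    """One graph-convolution layer: H = ReLU(D^-1 (A + I) X W)."""
    A_hat = A + np.eye(A.shape[0])
    D_inv = np.diag(1.0 / A_hat.sum(axis=1))
    return np.maximum(D_inv @ A_hat @ X @ W, 0.0)

rng = np.random.default_rng(0)
W = rng.standard_normal((X.shape[1], 8))
embedding = gcn_layer(X, A, W).mean(axis=0)   # pooled flowsheet embedding

# A hybrid action: discrete unit choice plus a continuous design variable.
action = {"unit": "heat_exchanger", "design_variable": 0.35}
print(embedding.shape, action)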
We demonstrate the potential of our method to design economically viable flowsheets in an illustrative case study on the production of methyl acetate. The case study comprises equilibrium reactions, azeotropic separation, and recycles. The agent places unit operations such as plug flow reactors, heat exchangers, distillation columns, and recycle streams. A process simulator is interfaced to act as the environment for the reinforcement learning agent. In addition, shortcut unit models are implemented in Python to enable efficient pre-training in a transfer learning approach. The results show fast learning in discrete, continuous, and hybrid action spaces. Owing to the flexible architecture of the proposed reinforcement learning agent, the method can be extended to larger state-action spaces and other case studies in future research. The presented concept is a significant step towards solving realistic chemical process design tasks using artificial intelligence.
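The pre-training workflow with Python shortcut models can be sketched roughly as follows; the environment class, its gym-style interface, and the placeholder reward are hypothetical stand-ins chosen for illustration, not the authors' code. The idea is simply that the agent first trains against fast shortcut models and is then transferred to the rigorous simulator backend.

# Sketch of the transfer-learning idea under stated assumptions: pre-train against
# cheap Python shortcut models, then continue training against a process simulator.
import random

class FlowsheetEnv:
    """Gym-style interface; `backend` switches between shortcut models and simulator."""
    def __init__(self, backend="shortcut"):
        self.backend = backend
        self.units = []

    def reset(self):
        self.units = []
        return tuple(self.units)

    def step(self, action):
        unit_type, design_variable = action       # hybrid discrete/continuous action
        self.units.append((unit_type, design_variable))
        # Placeholder reward: a real implementation would return an economic
        # objective evaluated by the shortcut models or the simulator backend.
        reward = -1.0 + (2.0 if unit_type == "product" else 0.0)
        done = unit_type == "product" or len(self.units) >= 10
        return tuple(self.units), reward, done

def train(env, episodes):
    returns = []
    for _ in range(episodes):
        env.reset()
        done, total = False, 0.0
        while not done:
            action = (random.choice(["pfr", "distillation", "recycle", "product"]),
                      random.random())            # random policy as a stand-in agent
            _, r, done = env.step(action)
            total += r
        returns.append(total)
    return returns

# Pre-train on shortcut models, then fine-tune against the simulator backend.
train(FlowsheetEnv(backend="shortcut"), episodes=1000)
train(FlowsheetEnv(backend="simulator"), episodes=100)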
References
[1] Midgley, L. I. (2020). Deep Reinforcement Learning for Process Synthesis. arXiv preprint arXiv:2009.13265.
[2] Khan, A., & Lapkin, A. (2020). Searching for optimal process routes: a reinforcement learning approach. Computers & Chemical Engineering, 141, 107027.
[3] Göttl, Q., Grimm, D., & Burger, J. (2021). Automated Process Synthesis Using Reinforcement Learning. In Computer Aided Chemical Engineering (Vol. 50, pp. 209-214). Elsevier.
[4] Khan, A. A., & Lapkin, A. A. (2022). Designing the process designer: Hierarchical reinforcement learning for optimisation-based process design. Chemical Engineering and Processing - Process Intensification, 108885.
[5] Göttl, Q., Grimm, D. G., & Burger, J. (2022). Automated synthesis of steady-state continuous processes using reinforcement learning. Frontiers of Chemical Science and Engineering, 16(2), 288-302.