(372ak) A Practical Reinforcement Learning (RL)-Based Controller Design for Distillation Columns
2024 AIChE Annual Meeting
Computing and Systems Technology Division
10B: AI/ML Modeling, Optimization and Control Applications I
Thursday, October 31, 2024 - 12:46pm to 1:02pm
Recently, control techniques based on model-free reinforcement learning (RL) have emerged as attractive alternatives to NMPC, aiming to overcome the model-accuracy challenges of MPC implementations. RL's capacity to learn optimal policies in real time from controller-process interactions, along with its ability to handle process variability and nonlinearity, makes it an appealing option for process control applications [4]. However, the reliance on extensive random interactions between the RL agent and the process makes existing RL implementations impractical, often necessitating good process models for training. Offline training of RL policies becomes crucial because of the high costs and safety risks associated with online exploration, yet this approach relies heavily on the availability of accurate process models, raising implementation hurdles when such models are absent [5]. For distillation columns in particular, it is challenging to construct an environment in which an online RL agent can interact with the system while also having an exact first-principles model, or an accurate data-driven model, available to pre-train the agent. A further critique is that, if an accurate nonlinear model of the process were available, it could simply be used to implement MPC, which may be more interpretable and therefore preferred by practitioners.
Due to the challenges of developing and maintaining first-principles models, and the unavailability of data rich enough to build nonlinear data-driven models, a representative industrial MPC formulation (developed using a linear model identified via step tests) is leveraged offline to pre-train an RL agent [6]. It is demonstrated that the pre-trained agent can mimic the MPC performance. This agent is then used for online control, where it interacts with the process and improves performance relative to the representative MPC. Inspired by this and leveraging previous research on large-scale processes, this study aims to employ an MPC-pre-trained RL agent to control a distillation column. An Aspen Dynamics simulation of an ethylene splitter (C2 splitter) is utilized as the test bed. Operational data, along with step tests (those used to design a standard offset-free MPC), will be used to develop and pre-train an RL agent. This pre-trained RL agent will then be applied in real-time Aspen Dynamics simulation, highlighting the effectiveness of the RL-based control design. Furthermore, the framework will be tested under flooding conditions, a crucial consideration for practical implementation.
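As a rough illustration of the pre-training step, the minimal sketch below clones a representative MPC policy into a neural-network actor by supervised learning on logged state/control-move pairs. The network sizes, variable names, and synthetic data are assumptions for illustration only, not the implementation reported in [6]; in practice the training pairs would come from historical closed-loop operation and the step tests mentioned above, and the resulting actor would serve as the initial policy for online RL fine-tuning.

```python
# Hypothetical sketch: pre-training an RL actor by behaviour cloning a
# representative MPC. Dimensions and data below are illustrative assumptions.
import torch
import torch.nn as nn

state_dim, action_dim = 6, 2   # e.g. selected tray temperatures -> reflux/boilup moves (assumed)

# Actor network that will later serve as the initial policy for online RL fine-tuning
actor = nn.Sequential(
    nn.Linear(state_dim, 64), nn.ReLU(),
    nn.Linear(64, 64), nn.ReLU(),
    nn.Linear(64, action_dim), nn.Tanh(),   # scaled control moves in [-1, 1]
)

# Stand-in for logged closed-loop MPC data (process states and the MPC's control moves);
# in practice these come from historical operation and step-test campaigns.
states = torch.randn(5000, state_dim)
mpc_actions = torch.tanh(states @ torch.randn(state_dim, action_dim))

optimizer = torch.optim.Adam(actor.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

# Supervised pre-training: the actor learns to reproduce the MPC policy.
for epoch in range(200):
    pred = actor(states)
    loss = loss_fn(pred, mpc_actions)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```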
References
[1] Riggs, J.B., 2000. Comparison of Advanced Distillation Control Methods, Final Technical Report (No. DOE/AL/98747-5). Texas Tech Univ., Lubbock TX (US).
[2] Shin, Y., Smith, R. and Hwang, S., 2020. Development of model predictive control system using an artificial neural network: A case study with a distillation column. Journal of Cleaner Production, 277, p.124124.
[3] Jalanko, M., Sanchez, Y., Mhaskar, P. and Mahalec, V., 2021. Flooding and offset-free nonlinear model predictive control of a high-purity industrial ethylene splitter using a hybrid model. Computers & Chemical Engineering, 155, p.107514.
[4] Spielberg, S.P.K., Gopaluni, R.B. and Loewen, P.D., 2017. Deep reinforcement learning approaches for process control. In 2017 6th International Symposium on Advanced Control of Industrial Processes (AdCONIP), Taipei, Taiwan, pp. 201-206. doi: 10.1109/ADCONIP.2017.7983780.
[5] Bellegarda, G. and Byl, K., 2020. An online training method for augmenting MPC with deep reinforcement learning. In 2020 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 5453-5459. IEEE.
[6] Hassanpour, H., Wang, X., Corbett, B. and Mhaskar, P., 2024. A practically implementable reinforcement learning-based process controller design. AIChE Journal, 70(1), p.e18245.