(522f) Integration of Control with Process Operation through Reinforcement Learning
AIChE Annual Meeting
2020
2020 Virtual AIChE Annual Meeting
Computing and Systems Technology Division
Advances in Machine Learning and Intelligent Systems II
Wednesday, November 18, 2020 - 9:15am to 9:30am
Despite considerable research effort in this area, the development of systematic, computationally efficient, and data-driven methods remains an open challenge. Moreover, the underlying dynamic optimization of the control problem is complicated by three factors: (i) no precise model is known for most industrial-scale processes (plant-model mismatch), leading to inaccurate predictions and convergence to suboptimal solutions; (ii) the process is affected by endogenous uncertainty (i.e., the system is stochastic); and (iii) state constraints must be satisfied due to operational and safety concerns, so constraint violations can be detrimental. To address these problems, a Reinforcement Learning (RL) policy gradient method is proposed that satisfies chance constraints with probabilistic guarantees [10]. The resulting optimal policy is a neural network designed to satisfy the optimality conditions, and the optimal control actions can be evaluated rapidly, since the policy requires only function evaluations.

In this work, a novel framework for closed-loop iPSC under dynamic disturbances and uncertainty is proposed. Its key element is the integration of a novel RL-based optimal policy for the control of an uncertain dynamic physical system with an optimization-based algorithm for efficient rescheduling that mitigates the impact of exogenous disturbances on the real-time implementation. Finally, the proposed framework is tested on the iPSC of a nonlinear industrial process, illustrating its merits and providing key insights into the interdependence of the iPSC decisions and the importance of RL for their real-time implementation.
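The abstract gives no implementation details for the constrained policy-gradient method; the sketch below is a minimal illustration only, assuming a hypothetical scalar stochastic system, a Gaussian policy with linear mean, and a simple penalty surrogate for the state constraint. It shows the score-function (REINFORCE) update that underlies policy-gradient RL; the method cited in [10] handles chance constraints of the form P(g(x_t) <= 0) >= 1 - alpha with probabilistic guarantees, which this toy penalty does not by itself provide.

```python
import numpy as np

rng = np.random.default_rng(0)

def rollout(theta, T=20, sigma=0.3, x_max=1.0):
    """One episode of a hypothetical scalar system x_{t+1} = x_t + u_t + w_t
    under a Gaussian policy u ~ N(theta^T [1, x], sigma^2). All names and
    dynamics are illustrative assumptions, not the authors' case study."""
    x, ret, viol = 0.0, 0.0, 0
    grad_logp = np.zeros(2)
    for _ in range(T):
        feat = np.array([1.0, x])
        mu = theta @ feat
        u = mu + sigma * rng.standard_normal()
        grad_logp += (u - mu) / sigma**2 * feat   # score function of the Gaussian policy
        x = x + u + 0.05 * rng.standard_normal()  # assumed stochastic dynamics
        ret -= (x - 0.8) ** 2                     # reward: track a setpoint of 0.8
        viol += x > x_max                         # count state-constraint violations
    return ret, viol, grad_logp

def train(episodes=3000, lr=1e-4, penalty=5.0):
    """REINFORCE with a penalised return as a crude surrogate
    for the chance constraint on the state."""
    theta = np.zeros(2)
    for _ in range(episodes):
        ret, viol, g = rollout(theta)
        theta += lr * (ret - penalty * viol) * g  # policy-gradient ascent step
    return theta

if __name__ == "__main__":
    print("learned policy parameters:", train())
```

In practice one would add a baseline to reduce gradient variance, and the cited work [10] obtains its probabilistic constraint-satisfaction guarantees through a more careful constraint-handling scheme than the fixed penalty used here.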
- Grossmann, I. E. (2012). Advances in mathematical programming models for enterprise-wide optimization. Comput. Chem. Eng., 47, 2-18.
- Chu, Y., & You, F. (2015). Model-based integration of control and operations: Overview, challenges, advances, and opportunities. Comput. Chem. Eng., 83, 2-20.
- Dias, L. S., & Ierapetritou, M. G. (2016). Integration of scheduling and control under uncertainties: Review and challenges. Chem. Eng. Res. Des., 116, 98-113.
- Georgiadis, G. P., Elekidis, A. P., & Georgiadis, M. C. (2019). Optimization-Based Scheduling for the Process Industries: From Theory to Real-Life Industrial Applications. Processes, 7, 438.
- Charitopoulos, V. M., Aguirre, A. M., Papageorgiou, L. G., & Dua, V. (2018). Uncertainty aware integration of planning, scheduling and multi-parametric control. Comput. Aided Chem. Eng., 44, 1171-1176.
- Charitopoulos, V. M., Papageorgiou, L. G., & Dua, V. (2019). Closed-loop integration of planning, scheduling and multi-parametric nonlinear control. Comput. Chem. Eng., 122, 172-192.
- Burnak, B., Katz, J., Diangelakis, N. A., & Pistikopoulos, E. N. (2018). Simultaneous process scheduling and control: a multiparametric programming-based approach. Ind. Eng. Chem. Res., 57(11), 3963-3976.
- Du, J., Park, J., Harjunkoski, I., & Baldea, M. (2015). A time scale-bridging approach for integrating production scheduling and process control. Comput. Chem. Eng., 79, 59-69.
- Dias, L. S., & Ierapetritou, M. G. (2019). Data-driven feasibility analysis for the integration of planning and scheduling problems. Optim. Eng., 20(4), 1029-1066.
- Petsagkourakis, P., Sandoval, I. O., Bradford, E., Zhang, D., & del Rio-Chanona, E. A. (2020). Constrained Reinforcement Learning for Dynamic Optimization under Uncertainty. Accepted for publication in the 21st IFAC World Congress.