
(386b) Model-Based Reinforcement Learning Algorithms for Feedback Control of Complex Dynamic Systems

Authors 

Jorgensen, C., Rensselaer Polytechnic Institute
Bequette, B. W., Rensselaer Polytechnic Institute
The success of deep reinforcement learning (RL) in controlling difficult dynamic systems such as the cart-pole, the inverted pendulum, and robotic arms presents an opportunity to improve popular control methods such as model predictive control (MPC). Model-free RL algorithms can learn effective policies for controlling complex manufacturing processes by leveraging the information available from smart sensors in a Smart Manufacturing environment. Limitations of these model-free methods include large data requirements, the lack of stability guarantees, and difficulty handling state constraints. Model-based and hybrid RL algorithms offer opportunities to address these limitations.

In this research, we employ several state-of-the-art model-based RL algorithms for feedback control of two challenging benchmark processes, the van de Vusse reactor and the quadruple-tank system, which exhibit right-half-plane zeros and right-half-plane transmission zeros, respectively. We also enforce state constraints using primal-dual RL methods and constrained policy optimization (CPO). Our results show that model-based RL algorithms are a promising direction for improving the performance of feedback control techniques.
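As a rough illustration of this setup (not the authors' implementation), the sketch below casts the isothermal van de Vusse reactor as a simple RL environment and wraps it in a primal-dual (Lagrangian) loop that penalizes violations of an assumed upper bound on the product concentration. The rate constants, setpoint, constraint limit, and the placeholder random policy are all assumptions chosen for illustration; in practice the policy update would come from a model-based or constrained policy-optimization learner.

import numpy as np

class VanDeVusseEnv:
    """Isothermal van de Vusse CSTR (A -> B -> C, 2A -> D).
    State: concentrations (Ca, Cb); action: dilution rate F/V (1/min)."""

    def __init__(self, k1=5/6, k2=5/3, k3=1/6, Caf=10.0, dt=0.1):
        # Rate constants and feed concentration are assumed textbook values.
        self.k1, self.k2, self.k3, self.Caf, self.dt = k1, k2, k3, Caf, dt
        self.Cb_sp = 1.12   # assumed setpoint for product B (mol/L)
        self.Cb_max = 1.25  # assumed state constraint Cb <= Cb_max (mol/L)

    def reset(self):
        self.Ca, self.Cb = 3.0, 1.1  # assumed initial concentrations
        return np.array([self.Ca, self.Cb])

    def step(self, dilution):
        # One explicit-Euler step of the component mass balances.
        dCa = dilution * (self.Caf - self.Ca) - self.k1 * self.Ca - self.k3 * self.Ca ** 2
        dCb = -dilution * self.Cb + self.k1 * self.Ca - self.k2 * self.Cb
        self.Ca += self.dt * dCa
        self.Cb += self.dt * dCb
        reward = -(self.Cb - self.Cb_sp) ** 2   # setpoint-tracking objective
        cost = max(0.0, self.Cb - self.Cb_max)  # constraint-violation signal
        return np.array([self.Ca, self.Cb]), reward, cost

# Primal-dual idea: the policy is trained on reward - lam * cost, while the
# multiplier lam is updated by dual ascent on the accumulated violation.
env = VanDeVusseEnv()
lam, lam_lr = 0.0, 0.05
rng = np.random.default_rng(0)
for episode in range(50):
    state = env.reset()
    ep_return, ep_cost = 0.0, 0.0
    for t in range(100):
        action = np.clip(0.57 + 0.1 * rng.standard_normal(), 0.0, 2.0)  # placeholder policy
        state, reward, cost = env.step(action)
        ep_return += reward - lam * cost  # Lagrangian return a policy-gradient learner would maximize
        ep_cost += cost
    lam = max(0.0, lam + lam_lr * ep_cost)  # dual ascent on the Lagrange multiplier
    print(f"episode {episode}: return {ep_return:.3f}, violation {ep_cost:.3f}, lambda {lam:.3f}")

In this sketch the multiplier behaves like an automatically tuned soft penalty: it grows while the constraint is violated and stops growing once the policy keeps Cb below the assumed bound.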

Keywords: Reinforcement Learning (RL), Model Predictive Control (MPC), Model-based Reinforcement Learning (MBRL), Primal-dual methods.