(303d) Symmetry Reduction for Deep Reinforcement Learning Active Flow Control of Chaotic Spatiotemporal Dynamics
2021 AIChE Annual Meeting
Computing and Systems Technology Division
Data-Driven Techniques for Dynamic Modeling, Estimation and Control II
Tuesday, November 9, 2021 - 1:27pm to 1:46pm
Deep reinforcement learning (RL) is a data-driven, model-free method capable of discovering complex control strategies for macroscopic objectives in high-dimensional systems, making it a promising approach for flow control. Many systems of interest for flow control possess symmetries that, when neglected, can significantly inhibit the learning and performance of a naive deep RL approach. Using a test bed consisting of the Kuramoto-Sivashinsky equation (KSE), equally spaced actuators, and the goal of minimizing dissipation and power cost, we demonstrate that moving the deep RL problem to a symmetry-reduced space alleviates limitations inherent in the naive application of deep RL. Symmetry-reduced deep RL yields improved data efficiency as well as improved control policy efficacy and dynamical consistency compared to policies found by naive deep RL. The symmetry-reduced control policy learns to discover and target a forced equilibrium, related to a known equilibrium of the KSE, that exhibits low dissipation and power input cost, despite being given no explicit information about its existence. Finally, we demonstrate that the symmetry-reduced control policy is robust to noise in the observation and actuation signals, as well as to system parameter values it has not encountered during training. We aim to extend this method to account for sensor symmetries and to incorporate data-driven reduced-order models, which offer more natural state representations and the ability to generate surrogate interaction data for accelerated training.
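To make the idea of moving the RL problem to a symmetry-reduced space more concrete, the sketch below shows one way the continuous translation symmetry of a periodic KSE state can be factored out before the state reaches the policy, with the action shifted back afterward. The slicing choice (phase-aligning the first Fourier mode), the function names, and the env/policy objects are illustrative assumptions rather than the implementation used in this work; the discrete reflection symmetry of the KSE could be reduced in a similar way.

```python
import numpy as np

def symmetry_reduce(u):
    """Map a KSE state u(x) on a periodic grid to a symmetry-reduced
    representative by factoring out continuous translations: the phase of
    the first Fourier mode is rotated to zero (a "method of slices" style
    choice). Returns the reduced state and the phase needed to undo it."""
    u_hat = np.fft.rfft(u)
    phi = np.angle(u_hat[1])                   # phase of the first Fourier mode
    k = np.arange(u_hat.size)
    u_hat_red = u_hat * np.exp(-1j * k * phi)  # translate so that phase is zero
    return np.fft.irfft(u_hat_red, n=u.size), phi

def shift_action_back(a_reduced, phi):
    """Translate an action profile computed in the reduced frame back to the
    physical frame by applying the inverse of the stored phase shift."""
    a_hat = np.fft.rfft(a_reduced)
    k = np.arange(a_hat.size)
    return np.fft.irfft(a_hat * np.exp(1j * k * phi), n=a_reduced.size)

# Hypothetical interaction loop: the policy only ever sees reduced states,
# and its action is shifted back before being applied to the environment.
#
# obs = env.reset()
# done = False
# while not done:
#     obs_red, phi = symmetry_reduce(obs)
#     action_red = policy(obs_red)           # any deep RL policy network
#     obs, reward, done, info = env.step(shift_action_back(action_red, phi))
```

Keeping the reduction and its inverse as a thin wrapper around the environment leaves the underlying deep RL algorithm unchanged; states that differ only by a translation collapse onto a single representative seen by the policy.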