(420a) Deep Model-Based Reinforcement Learning for Active Flow Control of Turbulent Couette Flow

Authors 

Graham, M., University of Wisconsin-Madison
Designing active control strategies for turbulent drag reduction is challenging because of the complex nonlinear dynamics involved and the difficulty of devising good control targets. Deep reinforcement learning (RL), an emerging machine learning method capable of learning complex control strategies for high-dimensional systems from data, can address broad macroscopic control objectives, such as drag minimization, making it a promising route to discovering flow control strategies. However, the iterative RL process requires vast amounts of data generated through direct interaction with the target system. For high-dimensional, computationally demanding simulations, such as direct numerical simulations (DNS), or for fluid flow experiments, this quickly becomes prohibitively expensive.

We mitigate this challenge in a completely data-driven fashion by combining deep RL with data-driven reduced-order models (ROMs) of the flow system. We demonstrate the method on a DNS of turbulent Couette flow modified with four independently controlled wall-normal Gaussian jets on each wall. The control objective is to minimize the system's drag along with the cost of control. We learn a ROM of the controlled turbulent dynamics of the high-fidelity DNS by combining an undercomplete autoencoder with a neural ordinary differential equation (ODE), yielding a compressed, low-dimensional model of the original system's dynamics that is trained directly on the collected data. This ROM is substituted for the high-fidelity DNS during RL training to accelerate the learning process. We then deploy the ROM-trained control strategy on the original high-fidelity DNS for validation.
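The abstract does not include an implementation, but the reduced-order model it describes, an undercomplete autoencoder whose latent state is evolved by a control-dependent neural ODE, can be sketched roughly as follows. All layer sizes, the latent dimension, the number of jet amplitudes, and the explicit Euler integrator are illustrative assumptions, not the authors' actual architecture.

```python
import torch
import torch.nn as nn

class ControlledROM(nn.Module):
    """Sketch of an autoencoder + neural-ODE reduced-order model.

    Assumed (hypothetical) sizes: a flattened DNS snapshot of
    dimension n_full, a latent dimension n_latent << n_full, and
    n_act jet amplitudes as the control input.
    """

    def __init__(self, n_full=16384, n_latent=32, n_act=8):
        super().__init__()
        # Undercomplete autoencoder: compress the flow state.
        self.encoder = nn.Sequential(
            nn.Linear(n_full, 512), nn.GELU(), nn.Linear(512, n_latent))
        self.decoder = nn.Sequential(
            nn.Linear(n_latent, 512), nn.GELU(), nn.Linear(512, n_full))
        # Neural ODE vector field dh/dt = f(h, a) in the latent space,
        # conditioned on the jet amplitudes a.
        self.f = nn.Sequential(
            nn.Linear(n_latent + n_act, 128), nn.GELU(),
            nn.Linear(128, n_latent))

    def step(self, h, a, dt=1e-2, n_sub=10):
        """Advance the latent state over one control interval.

        A simple explicit Euler integrator stands in for whatever
        ODE solver is actually used.
        """
        for _ in range(n_sub):
            h = h + (dt / n_sub) * self.f(torch.cat([h, a], dim=-1))
        return h

    def forward(self, u, a, dt=1e-2):
        h = self.encoder(u)           # compress snapshot to latent state
        h_next = self.step(h, a, dt)  # evolve under the applied control
        return self.decoder(h_next)   # reconstruct the predicted snapshot
```

Consistent with the abstract, such a model would be trained directly on snapshot trajectories collected from the actuated DNS, e.g. by minimizing reconstruction error plus a prediction error on the next snapshot.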
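During RL training, the ROM then stands in for the DNS as the environment. An interaction step under the stated objective (drag plus cost of control) might look like the sketch below; the reward weight `beta`, the hypothetical `drag_from_state` routine, and the generic `agent` interface are assumptions for illustration only.

```python
def reward(u, a, beta=0.1):
    """Negative of (estimated drag + actuation cost).

    drag_from_state is a hypothetical routine that estimates drag
    (e.g. from wall shear stress) on the reconstructed flow field;
    beta weights the penalty on control effort, matching the stated
    objective of minimizing drag plus the cost of control.
    """
    return -(drag_from_state(u) + beta * (a ** 2).sum(dim=-1))

def rollout(rom, agent, u0, horizon=200):
    """Collect one training episode entirely inside the ROM,
    never touching the expensive DNS."""
    h = rom.encoder(u0)
    episode = []
    for _ in range(horizon):
        u = rom.decoder(h)        # current (reconstructed) flow state
        a = agent.act(u)          # jet amplitudes from the policy
        h = rom.step(h, a)        # cheap latent-space dynamics
        episode.append((u, a, reward(rom.decoder(h), a)))
    return episode
```

Because each environment step is a small latent-space integration rather than a DNS time step, many such episodes can be generated cheaply; the policy learned this way is then validated on the original high-fidelity DNS.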