Deep Reinforcement Learning for Active Flow Control around a Circular
Cylinder Using Unsteady-Mode Plasma Actuators
Abstract
Deep reinforcement learning (DRL) algorithms are rapidly making inroads
into fluid mechanics, following the remarkable achievements of these
techniques in a wide range of science and engineering applications. In
this paper, a DRL agent is employed to train an artificial neural
network (ANN) using computational fluid dynamics (CFD) data to perform
active flow control (AFC) around a 2-D circular cylinder. Flow control
strategies are investigated at a diameter-based Reynolds number
Re_D = 100 using the advantage actor-critic (A2C) algorithm, by means of
two symmetric plasma actuators located on the surface of the cylinder
near the separation point. The DRL agent interacts with the CFD
environment by manipulating the non-dimensional burst frequency (f+) of
the two plasma actuators, and the time-averaged surface pressure is used
as the feedback observation to the deep neural networks (DNNs). The results
show that regular actuation at a constant non-dimensional burst
frequency yields a maximum drag reduction of 21.8%, while the DRL agent
learns a control strategy that achieves a drag reduction of 22.6%.
Analysis of the flow field shows that the drag reduction is accompanied
by strong flow reattachment and a significant reduction in the mean
velocity magnitude and velocity fluctuations in the wake region. These
outcomes demonstrate the capability of the DRL paradigm to perform AFC
and pave the way toward developing robust flow control strategies for
real-life applications.
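
To make the interaction loop described above concrete, the following minimal sketch shows how an A2C agent could select among discretized burst frequencies from a surface-pressure observation and update its actor and critic using a one-step advantage estimate. It is an illustrative sketch only, not the paper's implementation: SurrogateCylinderEnv is a synthetic stand-in for the CFD solver, and the probe count, the f+ action set, and all hyperparameters are assumed values.

# Minimal A2C control loop in the spirit of the paper's setup (illustrative
# sketch). The environment, action set, and hyperparameters are assumptions,
# not values taken from the paper.
import numpy as np
import torch
import torch.nn as nn

N_PRESSURE_PROBES = 32                         # assumed number of surface-pressure probes
BURST_FREQUENCIES = np.linspace(0.1, 2.0, 8)   # assumed discretized f+ action set

class SurrogateCylinderEnv:
    """Toy stand-in for the CFD environment: maps a chosen burst frequency to
    a synthetic time-averaged surface-pressure observation and a reward that
    mimics (negative) drag. Not a flow solver."""
    def reset(self):
        self.state = np.random.randn(N_PRESSURE_PROBES).astype(np.float32)
        return self.state
    def step(self, action_idx):
        f_plus = BURST_FREQUENCIES[action_idx]
        # Synthetic reward: pretend drag is minimized near f+ = 1.0.
        reward = -abs(f_plus - 1.0) + 0.05 * np.random.randn()
        self.state = np.random.randn(N_PRESSURE_PROBES).astype(np.float32)
        return self.state, float(reward)

class ActorCritic(nn.Module):
    def __init__(self, n_obs, n_actions, hidden=64):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(n_obs, hidden), nn.Tanh())
        self.policy = nn.Linear(hidden, n_actions)  # actor: logits over f+ choices
        self.value = nn.Linear(hidden, 1)           # critic: state-value estimate
    def forward(self, x):
        h = self.body(x)
        return self.policy(h), self.value(h)

env = SurrogateCylinderEnv()
net = ActorCritic(N_PRESSURE_PROBES, len(BURST_FREQUENCIES))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
gamma = 0.99

obs = torch.from_numpy(env.reset())
for t in range(500):
    logits, value = net(obs)
    dist = torch.distributions.Categorical(logits=logits)
    action = dist.sample()                       # pick a burst frequency
    next_obs, reward = env.step(action.item())
    next_obs = torch.from_numpy(next_obs)
    with torch.no_grad():
        _, next_value = net(next_obs)
    # One-step advantage estimate: A = r + gamma * V(s') - V(s)
    advantage = reward + gamma * next_value - value
    actor_loss = -dist.log_prob(action) * advantage.detach()
    critic_loss = advantage.pow(2)
    loss = actor_loss + 0.5 * critic_loss
    opt.zero_grad()
    loss.backward()
    opt.step()
    obs = next_obs

In the actual study the environment is the CFD solver and the reward is tied to the measured drag; the structure of the A2C update (a sampled action, a critic-based advantage, and coupled actor and critic losses) is nonetheless the same.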