Deep reinforcement learning for aircraft longitudinal control augmentation system
Abstract
Control augmentation systems (CAS) are conventionally built with classical controllers, which depend on domain-specific knowledge for tuning and have limited self-learning capability. These drawbacks lead to sub-optimal aircraft stability and performance when the aircraft is exposed to time-varying disturbances. To address these problems, this paper proposes a deep reinforcement learning (DRL) pitch-rate CAS (qCAS) aimed at guaranteeing adaptive stability, pitch-rate command tracking and disturbance rejection across the longitudinal dynamics of an aircraft. This aim was actualized by developing a CAS with a deep deterministic policy gradient (DDPG) agent. The proposed method was then compared with two classical qCAS methods (a PID-qCAS developed in this work and a benchmark PIqCAS obtained from the literature). The results show that the developed DDPG-qCAS outperformed the classical methods in peak overshoot, reference command tracking and disturbance rejection, as well as in mean absolute error (MAE) and mean steady-state error (MSSE). Hence, it can be inferred that applying artificially intelligent controllers to aircraft flight control systems yields superior time response, control command tracking accuracy and disturbance rejection.
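To make the DDPG-based approach concrete, the sketch below shows the core DDPG ingredients applied to a pitch-rate CAS: a deterministic actor that maps a tracking state to a bounded elevator command, and Polyak (soft) target-network updates. This is a minimal illustrative sketch only; the network sizes, state definition (pitch-rate error, its integral, pitch rate) and the 0.005 soft-update rate are assumptions, not the paper's actual design.

```python
import numpy as np

rng = np.random.default_rng(0)

def mlp_init(sizes):
    """Initialise weights for a small fully connected network."""
    return [(rng.standard_normal((m, n)) * 0.1, np.zeros(n))
            for m, n in zip(sizes[:-1], sizes[1:])]

def mlp_forward(params, x):
    """tanh hidden layers; tanh output bounds the normalised elevator command to [-1, 1]."""
    for W, b in params[:-1]:
        x = np.tanh(x @ W + b)
    W, b = params[-1]
    return np.tanh(x @ W + b)

def soft_update(target, source, tau=0.005):
    """Polyak averaging of target-network parameters, as used in DDPG:
    theta_target <- tau * theta_source + (1 - tau) * theta_target."""
    return [(tau * Ws + (1 - tau) * Wt, tau * bs + (1 - tau) * bt)
            for (Wt, bt), (Ws, bs) in zip(target, source)]

# Assumed state: [pitch-rate error, integral of error, pitch rate]; action: elevator command.
actor = mlp_init([3, 32, 32, 1])
target_actor = [(W.copy(), b.copy()) for W, b in actor]

state = np.array([0.05, 0.0, -0.02])        # example pitch-rate tracking state
action = mlp_forward(actor, state)          # deterministic policy output
target_actor = soft_update(target_actor, actor)
```

In training, the critic would be updated from replayed transitions and the actor from the deterministic policy gradient; the bounded tanh output keeps the commanded elevator deflection within actuator limits.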