Reinforcement learning-based optimal tracking control for uncertain multi-agent systems with uncertain topological networks

ISA Trans. 2024 Nov 29:S0019-0578(24)00554-8. doi: 10.1016/j.isatra.2024.11.043. Online ahead of print.

Abstract

Over recent decades, extensive applications exemplified by intelligent connected vehicles (ICVs) and unmanned aerial vehicles (UAVs) have emerged with the rapid development of multi-agent systems (MASs). Motivated by these applications, this work addresses the optimal tracking control problem for uncertain MASs under uncertain topological networks, based on observer design and reinforcement learning (RL). First, an adaptive extended observer based on the concurrent learning (CL) technique is designed to simultaneously estimate the system states and the unknown parameters, where convergence of the parameter estimates is guaranteed under a relaxed persistence-of-excitation condition. Moreover, a Luenberger observer is designed to estimate the state of the leader under uncertain topological networks, serving as a compensation for missing leader information. Building on the proposed observers, an optimal tracking control algorithm is devised using an actor-critic (AC) neural network (NN), which does not require state-derivative information. Finally, a numerical simulation demonstrates the validity of the proposed scheme.
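The abstract's key relaxation is that concurrent learning replaces the classical persistence-of-excitation requirement with a rank condition on a recorded data stack: the parameter update reuses stored regressor samples, so the estimates converge even after the live signal stops being exciting. The minimal sketch below illustrates that idea for a generic linear-in-parameters model y = φ(x)ᵀθ; the regressor basis, gains, and data-stack size are illustrative assumptions, not the paper's actual observer.

```python
import numpy as np

# Hedged sketch of a concurrent-learning (CL) parameter update.
# Model (assumed for illustration): y = phi(x)^T theta_true.
rng = np.random.default_rng(0)
theta_true = np.array([2.0, -1.0, 0.5])

def phi(x):
    # Illustrative regressor basis; the paper's regressor is not specified here.
    return np.array([1.0, x, x**2])

# Record a small data stack while the input is briefly exciting.
# CL needs this stack to be full rank (sum of phi_j phi_j^T invertible),
# which is much weaker than persistent excitation for all time.
stack = [(phi(x), phi(x) @ theta_true) for x in rng.uniform(-1.0, 1.0, 10)]

theta_hat = np.zeros(3)
gamma, dt = 0.5, 0.01          # adaptation gain and integration step (assumed)
for _ in range(20000):
    x = 0.3                    # after recording, the input is constant: no PE
    p = phi(x)
    e = p @ theta_true - p @ theta_hat           # instantaneous prediction error
    cl = sum(pj * (yj - pj @ theta_hat) for pj, yj in stack)  # CL memory term
    theta_hat += dt * gamma * (p * e + cl)

print(np.round(theta_hat, 3))
```

Without the `cl` term, the constant input x = 0.3 excites only one direction of the regressor space and the estimate would stall; with the full-rank stack, the error dynamics are driven by a positive-definite matrix and the estimate converges to `theta_true`.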

Keywords: Actor-critic neural network; Concurrent learning; Optimal tracking control; Reinforcement learning; Uncertain multi-agent systems; Uncertain topological networks.