Fuzzy reinforcement learning based control of linear systems with input saturation

ISA Trans. 2025 Jan 3:S0019-0578(24)00633-5. doi: 10.1016/j.isatra.2024.12.045. Online ahead of print.

Abstract

This research introduces an innovative approach to optimal control for a class of linear systems with input saturation, leveraging the synergy of Takagi-Sugeno (T-S) fuzzy models and reinforcement learning (RL) techniques. To enhance interpretability and analytical accessibility, the approach applies T-S models to approximate the value function and generate optimal control laws while incorporating prior knowledge, thereby addressing the limited interpretability of the neural networks conventionally used in RL. Segmented functions are used to approximate the derivative characteristics of the saturation, effectively handling the non-differentiability at the saturation boundaries. Furthermore, this research presents a novel gradient identification method that removes the impractical reliance on next-time-step state variables when improving the policy at the current time step. This enables the derivation of an optimal control law corresponding to each fuzzy rule, ensuring practical applicability in the control field. The proposed methodology is rigorously evaluated through computer simulations, which confirm its effectiveness, optimality, and convergence properties. This research contributes valuable insights and practical solutions for input-saturated control systems, offering a versatile and robust framework for real-world applications.
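To make the two central ideas concrete, the following minimal sketch illustrates (a) a segmented approximation of the saturation derivative, which is non-differentiable at the limits, and (b) a T-S fuzzy blend of local quadratic value functions. The paper's exact segmentation, fuzzy rules, and learned matrices are not reproduced here; all constants (the saturation limit `U_MAX`, the blending width `EPS`, the rule centers, and the local `P_i` matrices) are hypothetical placeholders.

```python
import numpy as np

U_MAX = 1.0   # saturation limit (assumed value)
EPS = 0.1     # half-width of the blending segment at each limit (assumed)

def sat(u):
    """Hard input saturation: clip(u, -U_MAX, U_MAX); its derivative
    jumps at |u| = U_MAX, which is the non-differentiability issue."""
    return float(np.clip(u, -U_MAX, U_MAX))

def sat_deriv_segmented(u):
    """Segmented surrogate for d sat(u)/du: 1 in the interior, a linear
    ramp from 1 to 0 across the band [U_MAX - EPS, U_MAX + EPS], and 0
    deep in saturation (a generic piecewise construction, not the
    paper's specific one)."""
    a = abs(u)
    if a <= U_MAX - EPS:
        return 1.0
    if a >= U_MAX + EPS:
        return 0.0
    return (U_MAX + EPS - a) / (2.0 * EPS)

# T-S fuzzy value-function approximation: V(x) = sum_i h_i(x) x^T P_i x,
# with normalized Gaussian firing strengths h_i over hypothetical rules.
CENTERS = np.array([-1.0, 0.0, 1.0])             # rule centers (assumed)
P = [np.eye(2) * c for c in (2.0, 1.0, 2.0)]     # local P_i matrices (assumed)

def memberships(z, width=0.5):
    """Normalized firing strengths h_i(z); they sum to 1 by construction."""
    w = np.exp(-((z - CENTERS) ** 2) / (2.0 * width ** 2))
    return w / w.sum()

def value(x):
    """Fuzzy-blended quadratic value, scheduled on the first state."""
    h = memberships(x[0])
    return sum(hi * float(x @ Pi @ x) for hi, Pi in zip(h, P))
```

Because each rule contributes a simple quadratic term weighted by an interpretable membership, the blended value function stays analytically tractable, which is the interpretability advantage the abstract claims over a black-box neural approximator.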

Keywords: Fuzzy reinforcement learning; Input saturation; Optimal control; T-S fuzzy model.