The PbGA-DDPG algorithm, which uses a potential-based, GA-optimized reward shaping function, is a versatile deep reinforcement learning (DRL) agent that can control a vehicle in a complex environment without prior knowledge. However, when compared to an established deterministic controller, it consistently falls short in terms of landing distance accuracy. To address this issue, the HYDESTOC (Hybrid Deterministic-Stochastic) algorithm, a combination of deep deterministic policy gradient (DDPG) and proportional-integral-derivative (PID) control, was introduced to improve terminal distance accuracy while keeping propellant consumption low. Results from extensive cross-validated Monte Carlo simulations show that a miss distance of less than 0.02 meters, a landing speed of less than 0.4 m/s, a settling time of 20 seconds or fewer, and consistently crash-free performance are achievable using this method.
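The abstract does not specify how the deterministic PID command and the stochastic DDPG action are combined; the following Python sketch only illustrates one plausible blending scheme under that assumption, with the `PID` class, `hybrid_action` function, gains, and `blend` weight all being hypothetical names and values rather than the paper's actual implementation.

```python
import numpy as np

class PID:
    """Simple PID controller for a single control channel (illustrative gains)."""
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def step(self, error):
        # Accumulate integral and approximate derivative of the error signal.
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative


def hybrid_action(ddpg_policy, pid, state, tracking_error, blend=0.5):
    """Blend the stochastic DRL action with the deterministic PID command.

    `ddpg_policy` is any callable mapping a state vector to a command in
    [-1, 1]; `blend` weights the two controllers (hypothetical scheme).
    """
    a_rl = float(ddpg_policy(state))                              # learned DDPG action
    a_pid = float(np.clip(pid.step(tracking_error), -1.0, 1.0))   # PID command
    return blend * a_rl + (1.0 - blend) * a_pid
```

In such a scheme, shifting `blend` toward the PID term would favor terminal accuracy, while shifting it toward the DDPG term would favor the learned policy's adaptability; the paper's actual weighting or switching logic may differ.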