Mobile edge computing (MEC) offloads compute-intensive tasks generated on wireless devices (WDs) to edge servers (ESs), providing mobile users with low-latency computing services. Opportunistic computation offloading can effectively enhance computing performance in dynamic edge network environments; however, indiscriminate offloading of tasks to ESs can cause WDs to compete for network and computing resources under limited bandwidth, resulting in inefficient allocation of computing resources. To address these challenges, this paper proposes a density clustering and ensemble learning training-based deep reinforcement learning (DCEDRL) method for task-offloading decision-making in MEC. First, DCEDRL employs multiple deep neural networks to explore the environment, training the models with ensemble learning methods and combining their prediction results. Second, DCEDRL applies an optimized density clustering method to identify and group computing tasks with similar characteristics, improving the efficiency of subsequent task scheduling and resource allocation. Finally, DCEDRL resamples stored experiences according to their priority weights, adjusting the sampling strategy in real time to improve the adaptability and robustness of the system. Simulation results demonstrate that the proposed DCEDRL method reduces the task backlog by more than 21% compared with the baseline algorithms.
Keywords: Deep reinforcement learning; Density clustering; Ensemble learning; Mobile edge computing; Offloading decision.
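The abstract names three mechanisms: ensemble decision-making over multiple deep neural networks, density clustering of similar tasks, and priority-weighted resampling of stored experiences. Below is a minimal Python sketch of the first and third mechanisms only; the model count, feature dimension, the linear stand-ins for the paper's DNNs, and the exponent `ALPHA` are all illustrative assumptions rather than the authors' implementation, and the density clustering step (which could be realized with, e.g., DBSCAN) is omitted.

```python
import numpy as np

rng = np.random.default_rng(0)

# --- Ensemble decision-making: K candidate models stand in for the paper's DNNs ---
K, FEAT = 5, 8                      # illustrative sizes, not taken from the paper
weights = [rng.normal(size=FEAT) for _ in range(K)]

def decide(task_features: np.ndarray) -> int:
    """Combine the K models' binary offload predictions by majority vote."""
    votes = sum(int(w @ task_features > 0) for w in weights)
    return int(votes > K / 2)       # 1 = offload to the ES, 0 = compute locally

# --- Priority-weighted experience resampling (prioritized-replay style) ---
# Each buffer entry: (task features, chosen action, stored priority weight).
buffer = [(rng.normal(size=FEAT), rng.integers(0, 2), rng.random())
          for _ in range(100)]
ALPHA = 0.6                         # assumed exponent: how strongly priority skews sampling

def sample_batch(batch_size: int = 16):
    """Resample experiences in proportion to their stored priority weights."""
    prios = np.array([p for _, _, p in buffer]) ** ALPHA
    probs = prios / prios.sum()     # real-time sampling distribution from priorities
    idx = rng.choice(len(buffer), size=batch_size, p=probs)
    return [buffer[i] for i in idx]

task = rng.normal(size=FEAT)
print("offload decision:", decide(task))
print("drew", len(sample_batch()), "prioritized samples")
```

Majority voting is only one simple way to combine ensemble predictions; the abstract does not specify the paper's actual combination rule, so the sketch should be read as a sketch under those stated assumptions.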