
Research Repository


Improved Double Deep Q Network-Based Task Scheduling Algorithm in Edge Computing for Makespan Optimization

Zeng, Lei; Liu, Qi; Shen, Shigen; Liu, Xiaodong



Edge computing nodes undertake more and more tasks as business density grows, making the efficient allocation of large-scale, dynamic workloads to edge computing resources a critical challenge. This paper proposes an edge task scheduling approach based on an improved Double Deep Q Network (Double DQN). The Double DQN decouples the selection of the greedy action from the evaluation of its target Q value by using two separate networks, and a new reward function is designed. Furthermore, a control unit is added to the agent's experience replay unit, and the management of experience data is modified to exploit its value fully and improve learning efficiency. Reinforcement learning agents usually start learning from an ignorant state, which is inefficient; therefore, a novel particle swarm optimization algorithm with an improved fitness function is proposed to generate optimized task-scheduling solutions. These solutions are used to pre-train the agent's network parameters, giving the agent a better initial level of cognition. The proposed algorithm is compared with six other methods in simulation experiments, and the results show that it outperforms the benchmark methods in terms of makespan.
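The decoupling described above can be illustrated with a minimal numerical sketch. This is not the paper's implementation: the two networks are replaced by fixed random Q-value arrays for a small hypothetical batch, and the network names, batch size, and discount factor are assumptions. It only shows how Double DQN forms its learning target differently from vanilla DQN.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy setting: a batch of 5 next-state observations and
# 3 candidate actions (e.g. edge nodes a task could be scheduled to).
# q_online_next and q_target_next stand in for the outputs of the two
# networks; in the actual algorithm these are learned approximators.
batch, n_actions = 5, 3
q_online_next = rng.normal(size=(batch, n_actions))  # online net: Q(s', .)
q_target_next = rng.normal(size=(batch, n_actions))  # target net: Q(s', .)
rewards = rng.normal(size=batch)
gamma = 0.9  # assumed discount factor

# Vanilla DQN: the target network both selects and evaluates the action,
# which is known to overestimate Q values.
dqn_target = rewards + gamma * q_target_next.max(axis=1)

# Double DQN: the online network SELECTS the greedy action,
# the target network EVALUATES it.
best_actions = q_online_next.argmax(axis=1)
ddqn_target = rewards + gamma * q_target_next[np.arange(batch), best_actions]

# Evaluating a possibly non-greedy (w.r.t. the target net) action can
# never exceed the target net's own maximum, so per sample:
assert np.all(ddqn_target <= dqn_target)
```

The final assertion holds for any inputs: because the evaluated action is chosen by the online network, its target-network value is bounded above by the target network's maximum, which is the mechanism by which Double DQN reduces overestimation bias.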

Journal Article Type Article
Acceptance Date Jun 27, 2023
Publication Date 2024-06
Deposit Date Aug 15, 2023
Publicly Available Date Jan 11, 2024
Print ISSN 1007-0214
Publisher Institute of Electrical and Electronics Engineers
Peer Reviewed Peer Reviewed
Volume 29
Issue 3
Pages 806 - 817
Keywords edge computing; task scheduling; reinforcement learning; makespan; Double Deep Q Network (Double DQN)

