Q-learning driven routing for aeronautical Ad-Hoc networks
Bilen, Tuğçe; Canberk, Berk
The aeronautical ad-hoc network (AANET) is one of the critical methodologies for satisfying the Internet connectivity requirements of airplanes during flight. However, the ultra-dynamic topology and unstable air-to-air link characteristics make AANETs depend on dedicated routing algorithms more than terrestrial networks do. This is mainly because these AANET-specific characteristics increase delays, packet losses, and network load, and reduce accuracy, by continuously changing the topology and breaking air-to-air links during routing. Existing works in the literature do not accommodate the ultra-dynamic topology and unstable air-to-air link characteristics of AANETs during routing. A routing algorithm can, however, adapt to the dynamic conditions of AANETs by utilizing Artificial Intelligence (AI) based methodologies. For adaptation to this dynamic environment, we aim to let the airplanes find their routing paths through exploration and exploitation by mapping the AANET environment to Q-learning routing (QLR). Specifically, this article proposes an updated Layered Hidden Markov Model (updated-LHMM) estimation-based QLR scheme for AANETs to solve the delay, packet loss, network load, and accuracy problems. To this end, the Bellman Equation is adapted to the AANET environment by proposing different methodologies for its related QLR components. Results reveal that the proposed strategy reduces routing delay and packet losses by 30% and 33%, respectively, compared to methods in the literature.
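To illustrate the general idea behind Bellman-equation-based Q-learning routing (not the article's specific updated-LHMM design), the following minimal sketch learns next-hop choices on a hypothetical toy topology. The node names, link delays, and hyperparameters are all illustrative assumptions, not values from the paper; the agent explores and exploits with an epsilon-greedy policy and applies the standard Q-learning update with per-hop link delay as the cost.

```python
import random

# Hypothetical toy AANET topology (illustrative only): nodes are airplanes,
# edge weights are air-to-air link delays; "D" acts as the destination.
links = {
    "A": {"B": 2.0, "C": 5.0},
    "B": {"A": 2.0, "D": 3.0},
    "C": {"A": 5.0, "D": 1.0},
    "D": {},  # destination: no outgoing hops needed
}

def q_learn_routes(links, dest, episodes=2000, alpha=0.5, gamma=0.9, eps=0.2):
    """Learn Q-values for next-hop selection via the Bellman update:
    Q(s,a) <- Q(s,a) + alpha * (cost + gamma * min_a' Q(s',a') - Q(s,a)).
    Lower Q means lower expected cumulative delay."""
    q = {n: {nh: 0.0 for nh in nbrs} for n, nbrs in links.items()}
    rng = random.Random(0)
    for _ in range(episodes):
        # Start each episode at a random non-destination node with neighbors.
        node = rng.choice([n for n in links if n != dest and links[n]])
        while node != dest:
            nbrs = list(links[node])
            if rng.random() < eps:                      # explore
                nxt = rng.choice(nbrs)
            else:                                       # exploit: cheapest hop
                nxt = min(nbrs, key=lambda n: q[node][n])
            cost = links[node][nxt]                     # per-hop reward = link delay
            future = min(q[nxt].values()) if q[nxt] else 0.0
            q[node][nxt] += alpha * (cost + gamma * future - q[node][nxt])
            node = nxt
    return q

q = q_learn_routes(links, "D")
best_next_hop = min(q["A"], key=lambda n: q["A"][n])  # greedy choice from "A"
```

On this toy graph the learned greedy next hop from "A" is "B" (total delay 5 via A-B-D versus 6 via A-C-D). The paper's contribution lies in how the Q-components and the estimation of link states are adapted to AANET dynamics, which this sketch deliberately omits.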
Bilen, T., & Canberk, B. (2022). Q-learning driven routing for aeronautical Ad-Hoc networks. Pervasive and Mobile Computing, 87, Article 101724. https://doi.org/10.1016/j.pmcj.2022.101724
| Journal Article Type | Article |
| Acceptance Date | Nov 7, 2022 |
| Online Publication Date | Nov 11, 2022 |
| Deposit Date | Nov 15, 2022 |
| Publicly Available Date | Nov 12, 2023 |
| Journal | Pervasive and Mobile Computing |
| Peer Reviewed | Peer Reviewed |
| Keywords | AANETs, Routing management, Reinforcement learning, Q-learning, Hidden Markov model |