Deep Reinforcement Learning-Based Computation Offloading in UAV Swarm-Enabled Edge Computing for Surveillance Applications

The rapid development of the Internet of Things and wireless communication has resulted in the emergence of many latency-constrained and computation-intensive applications such as surveillance, virtual reality, and disaster monitoring. To satisfy the computational demand and reduce the prolonged transmission delay to the cloud, mobile edge computing (MEC) has evolved as a potential candidate that can improve task completion efficiency in a reliable fashion. Owing to their high mobility and ease of deployment, unmanned aerial vehicles (UAVs) are promising candidates for integration with MEC to support such computation-intensive and latency-critical applications. However, determining the ideal offloading decision for a UAV on the basis of task characteristics remains a crucial challenge. In this paper, we investigate a surveillance application scenario of a hierarchical UAV swarm that includes a UAV-enabled MEC server and a team of UAVs surveilling the area to be monitored. To determine the optimal offloading policy, we propose a deep reinforcement learning-based computation offloading (DRLCO) scheme using double deep Q-learning, which minimizes the weighted sum cost by jointly considering task execution delay and energy consumption. A performance study shows that the proposed DRLCO technique significantly outperforms conventional schemes in terms of offloading cost, energy consumption, and task execution delay. The better convergence and effectiveness of the proposed method over conventional schemes are also demonstrated.

Overview

Bibliographic Details
Published in: IEEE Access
Main Authors: S. M. Asiful Huda, Sangman Moh
Format: Article
Language: English
Published: IEEE, 2023-01-01
Subjects: Aerial computing; computation offloading; deep reinforcement learning; double deep Q-learning; mobile edge computing; multi-agent reinforcement learning
Online Access: https://ieeexplore.ieee.org/document/10174639/
author S. M. Asiful Huda
Sangman Moh
collection DOAJ
container_title IEEE Access
description The rapid development of the Internet of Things and wireless communication has resulted in the emergence of many latency-constrained and computation-intensive applications such as surveillance, virtual reality, and disaster monitoring. To satisfy the computational demand and reduce the prolonged transmission delay to the cloud, mobile edge computing (MEC) has evolved as a potential candidate that can improve task completion efficiency in a reliable fashion. Owing to their high mobility and ease of deployment, unmanned aerial vehicles (UAVs) are promising candidates for integration with MEC to support such computation-intensive and latency-critical applications. However, determining the ideal offloading decision for a UAV on the basis of task characteristics remains a crucial challenge. In this paper, we investigate a surveillance application scenario of a hierarchical UAV swarm that includes a UAV-enabled MEC server and a team of UAVs surveilling the area to be monitored. To determine the optimal offloading policy, we propose a deep reinforcement learning-based computation offloading (DRLCO) scheme using double deep Q-learning, which minimizes the weighted sum cost by jointly considering task execution delay and energy consumption. A performance study shows that the proposed DRLCO technique significantly outperforms conventional schemes in terms of offloading cost, energy consumption, and task execution delay. The better convergence and effectiveness of the proposed method over conventional schemes are also demonstrated.
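The abstract names the core mechanism: double deep Q-learning that minimizes a weighted sum of task execution delay and energy consumption. The following is a minimal sketch of that idea only, not the paper's actual model: the state/action sizes, the cost weights, the learning rate, and the use of Q-tables in place of neural networks are all illustrative assumptions. In double DQN, the online network selects the next action and the target network evaluates it, which reduces the overestimation bias of plain Q-learning.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: e.g., 4 offloading choices (local, peer UAV, MEC UAV, cloud)
N_STATES, N_ACTIONS, GAMMA = 6, 4, 0.9
W_DELAY, W_ENERGY = 0.5, 0.5  # assumed weights of the weighted sum cost

# Stand-ins for the online and target Q-networks (tables here for brevity)
q_online = rng.random((N_STATES, N_ACTIONS))
q_target = rng.random((N_STATES, N_ACTIONS))

def weighted_cost(delay, energy):
    """Weighted sum cost the agent minimizes: w1*delay + w2*energy."""
    return W_DELAY * delay + W_ENERGY * energy

def double_dqn_target(reward, next_state):
    """Double DQN: the online net picks the next action, the target net evaluates it."""
    a_star = int(np.argmax(q_online[next_state]))          # action selection (online)
    return reward + GAMMA * q_target[next_state, a_star]   # action evaluation (target)

# One sketched update: the reward is the negative weighted cost of the chosen action
s, a, s_next = 0, 2, 3
r = -weighted_cost(delay=0.8, energy=0.4)
td_target = double_dqn_target(r, s_next)
q_online[s, a] += 0.1 * (td_target - q_online[s, a])       # TD step toward the target
```

In a full agent, the target network's weights would be copied from the online network periodically, and transitions would be sampled from a replay buffer; both are omitted here to keep the sketch to the selection/evaluation split that defines double DQN.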
format Article
id doaj-art-efd9007618b341dcb2e7a0fccd2f59e9
institution Directory of Open Access Journals
issn 2169-3536
language English
publishDate 2023-01-01
publisher IEEE
record_format Article
spelling doaj-art-efd9007618b341dcb2e7a0fccd2f59e9
indexed 2025-08-20T03:43:55Z
language eng
publisher IEEE
container_title IEEE Access
issn 2169-3536
publishDate 2023-01-01
volume 11
pages 68269-68285
doi 10.1109/ACCESS.2023.3292938
article_number 10174639
title Deep Reinforcement Learning-Based Computation Offloading in UAV Swarm-Enabled Edge Computing for Surveillance Applications
author S. M. Asiful Huda (ORCID: https://orcid.org/0000-0002-7192-1654), Department of Computer Engineering, Chosun University, Gwangju, South Korea
author Sangman Moh (ORCID: https://orcid.org/0000-0001-9175-3400), Department of Computer Engineering, Chosun University, Gwangju, South Korea
url https://ieeexplore.ieee.org/document/10174639/
topic Aerial computing; computation offloading; deep reinforcement learning; double deep Q-learning; mobile edge computing; multi-agent reinforcement learning
title Deep Reinforcement Learning-Based Computation Offloading in UAV Swarm-Enabled Edge Computing for Surveillance Applications
topic Aerial computing
computation offloading
deep reinforcement learning
double deep Q-learning
mobile edge computing
multi-agent reinforcement learning
url https://ieeexplore.ieee.org/document/10174639/