Deep Reinforcement Learning-based Task Assignment for Cooperative Mobile Edge Computing
Li-Tse Hsieh, Hang Liu, Yang Guo, Robert Gazda
Mobile edge computing (MEC) integrates computing resources in wireless access networks to process computational tasks in close proximity to mobile users with low latency. This paper investigates the task assignment problem for cooperative MEC networks in which a set of geographically distributed heterogeneous edge servers not only cooperate with remote cloud data centers but also help each other to jointly process user tasks. We introduce a novel stochastic MEC cooperation framework to model the edge-to-edge horizontal cooperation and the edge-to-cloud vertical cooperation. The task assignment optimization problem is formulated by taking into consideration dynamic network states, uncertain node computing capabilities and task arrivals, as well as the heterogeneity of the involved entities. We then develop and compare three task assignment algorithms based on different deep reinforcement learning (DRL) approaches: value-based, policy-based, and hybrid actor-critic. In addition, to reduce the search space and computational complexity of the algorithms, we propose decomposition and function approximation techniques that leverage the structure of the underlying problem. The evaluation results show that the proposed DRL-based task assignment schemes outperform existing algorithms, and that the hybrid actor-critic scheme performs best under dynamic MEC network environments.
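The hybrid actor-critic approach mentioned in the abstract can be illustrated with a minimal tabular sketch. This is not the paper's algorithm: it is a toy model, under assumed parameters, of one task per slot being assigned to one of three heterogeneous edge servers, where an actor (softmax policy over assignment actions) and a critic (state-value estimate) are both updated from the critic's TD error. All names and numbers here (`SERVE_P`, learning rates, queue cap) are illustrative assumptions.

```python
import math
import random

random.seed(0)

# Toy setting (illustrative assumptions, not the paper's model):
# one task arrives per slot and must be assigned to one of three
# heterogeneous edge servers; server i finishes a queued task each
# slot with probability SERVE_P[i].
N_SERVERS = 3
SERVE_P = [0.9, 0.5, 0.2]    # assumed heterogeneous service rates
MAX_Q = 5                    # queues clipped to keep the state space small
ALPHA_PI, ALPHA_V, GAMMA = 0.05, 0.1, 0.95

theta = {}  # actor: state -> action preferences
value = {}  # critic: state -> value estimate

def softmax(prefs):
    m = max(prefs)
    exps = [math.exp(p - m) for p in prefs]
    total = sum(exps)
    return [e / total for e in exps]

def step(state, action):
    """Assign the arriving task to server `action`, then serve stochastically."""
    q = list(state)
    q[action] = min(MAX_Q, q[action] + 1)
    reward = -sum(q)  # negative total backlog as a crude latency proxy
    q = [qi - 1 if qi > 0 and random.random() < p else qi
         for qi, p in zip(q, SERVE_P)]
    return tuple(q), reward

def run(episodes=300, horizon=30):
    total = 0.0
    for _ in range(episodes):
        s = (0,) * N_SERVERS
        for _ in range(horizon):
            prefs = theta.setdefault(s, [0.0] * N_SERVERS)
            probs = softmax(prefs)
            a = random.choices(range(N_SERVERS), weights=probs)[0]
            s2, r = step(s, a)
            # The critic's TD error drives both updates (the actor-critic idea).
            delta = r + GAMMA * value.get(s2, 0.0) - value.get(s, 0.0)
            value[s] = value.get(s, 0.0) + ALPHA_V * delta
            for i in range(N_SERVERS):
                grad = (1.0 if i == a else 0.0) - probs[i]
                prefs[i] += ALPHA_PI * delta * grad
            s = s2
            total += r
    return total / episodes

if __name__ == "__main__":
    avg_return = run()
    print("average episode return:", avg_return)
    print("assignment policy at empty queues:", softmax(theta[(0, 0, 0)]))
```

In contrast, a value-based scheme would learn action values and act greedily, and a pure policy-based scheme would update the policy from sampled returns without a learned critic; the hybrid combines the two, which is the design the paper reports performing best.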
Hsieh, L.-T., Liu, H., Guo, Y. and Gazda, R., Deep Reinforcement Learning-based Task Assignment for Cooperative Mobile Edge Computing, IEEE Transactions on Mobile Computing, [online], https://doi.org/10.1109/TMC.2023.3270242, https://tsapps.nist.gov/publication/get_pdf.cfm?pub_id=934260 (Accessed September 23, 2023)