Task Management for Cooperative Mobile Edge Computing
Li-Tse Hsieh, Hang Liu, Yang Guo, Robert Gazda
Mobile edge computing (MEC) is an emerging paradigm that integrates computing capabilities into the wireless access network to execute computational tasks near mobile users with low latency. It is challenging to orchestrate heterogeneous MEC edge nodes and remote cloud data centers to jointly process user tasks. Furthermore, in many mobile scenarios, the task arrivals and network states are unknown and time-varying. In this paper, we propose a stochastic framework that enables horizontal cooperation among geographically distributed MEC edge nodes and vertical cooperation between edge nodes and cloud data centers, to optimize user task processing under non-stationary task arrivals and network states. The task assignment optimization problem is formulated as a Markov decision process that captures the dynamic interactions and heterogeneity of the involved entities. An algorithm is then developed based on online reinforcement learning to track system dynamics and learn the optimal task assignment policy without requiring prior knowledge of task arrival and network statistics. A function decomposition technique is also proposed to simplify the problem and reduce the state space. The evaluation results show that the proposed online learning-based scheme significantly outperforms state-of-the-art baselines.
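To make the abstract's approach concrete, the following is a minimal, illustrative sketch of online reinforcement learning for task assignment at an edge node. It is not the paper's actual model: the three actions (process locally, offload horizontally to a neighbor edge node, or offload vertically to the cloud), the queue dynamics, and all cost values are assumed for illustration, and tabular Q-learning stands in for the paper's decomposition-based learning algorithm.

```python
import random

# Toy setting: an edge node chooses, per arriving task, whether to run it
# LOCALly, hand it to a NEIGHBOR edge node, or send it to the CLOUD.
# State = (local queue length, neighbor queue length). All parameters
# below are illustrative assumptions.
LOCAL, NEIGHBOR, CLOUD = 0, 1, 2
ACTIONS = (LOCAL, NEIGHBOR, CLOUD)
MAX_Q = 5  # assumed queue capacity of each edge node

def step(state, action, rng):
    """One transition: returns (next_state, latency_cost)."""
    lq, nq = state
    if action == LOCAL:
        cost = 1.0 + lq          # latency grows with local backlog
        lq = min(lq + 1, MAX_Q)
    elif action == NEIGHBOR:
        cost = 2.0 + nq          # transfer overhead plus neighbor backlog
        nq = min(nq + 1, MAX_Q)
    else:
        cost = 6.0               # fixed WAN round-trip to the cloud
    # each queue drains one task with probability 0.7 (assumed service rate)
    if lq and rng.random() < 0.7:
        lq -= 1
    if nq and rng.random() < 0.7:
        nq -= 1
    return (lq, nq), cost

def q_learning(episodes=3000, alpha=0.1, gamma=0.9, eps=0.1, seed=0):
    """Online Q-learning that minimizes discounted latency cost; no prior
    knowledge of arrival or service statistics is used."""
    rng = random.Random(seed)
    Q = {}
    for _ in range(episodes):
        state = (0, 0)
        for _ in range(50):  # 50 task arrivals per episode
            qs = Q.setdefault(state, [0.0, 0.0, 0.0])
            # epsilon-greedy exploration over assignment actions
            if rng.random() < eps:
                a = rng.choice(ACTIONS)
            else:
                a = min(ACTIONS, key=lambda x: qs[x])
            nxt, cost = step(state, a, rng)
            nq_vals = Q.setdefault(nxt, [0.0, 0.0, 0.0])
            # Bellman update (min instead of max, since we minimize cost)
            qs[a] += alpha * (cost + gamma * min(nq_vals) - qs[a])
            state = nxt
    return Q

if __name__ == "__main__":
    Q = q_learning()
    best = min(ACTIONS, key=lambda a: Q[(0, 0)][a])
    print(best)  # with both queues empty, local processing is cheapest
```

The learned policy adapts as queues build up: when the local queue is long, the neighbor or cloud becomes preferable, which is the horizontal/vertical cooperation trade-off the paper studies at scale.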
ACM/IEEE Workshop on Hot Topics on Web of Things 2020
November 11-13, 2020, San Jose, CA
Mobile edge computing (MEC), task assignment, stochastic optimization, Markov decision process (MDP), reinforcement learning