Deep Reinforcement Learning for Edge Service Placement in Softwarized Industrial Cyber-Physical System
Future industrial cyber-physical system (CPS) devices are expected to request a large number of delay-sensitive services that must be processed at the network edge. Because edge resources are limited, service placement at the cloud edge has attracted significant attention. Although many placement schemes have been designed, the service placement problem in industrial CPS has not been well studied, and none of the existing schemes jointly optimizes service placement, workload scheduling, and resource allocation under uncertain service demands. To address these issues, we first formulate a joint optimization problem of service placement, workload scheduling, and resource allocation that minimizes service response delay. We then propose an improved deep Q-network (DQN) based service placement (DSP) algorithm. The proposed algorithm obtains the optimal resource allocation by means of convex optimization, while the service placement and workload scheduling decisions are made with DQN. Experimental results verify that, compared with existing algorithms, the proposed algorithm reduces the average service response time by 8%-10%.
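The abstract states that, once placement and scheduling decisions are fixed, the resource allocation subproblem is solved by convex optimization. The paper's exact delay model is not given here; as an illustrative sketch only, assume each placed service behaves as an M/M/1 queue with arrival rate λᵢ and allocated capacity cᵢ, so the total delay Σ λᵢ/(cᵢ − λᵢ) subject to Σ cᵢ = C admits the classic closed-form "square-root" allocation via KKT conditions. All names below (`allocate_capacity`, `total_delay`) are hypothetical, not from the paper.

```python
import math

def allocate_capacity(arrival_rates, total_capacity):
    """Closed-form convex allocation minimizing total M/M/1 delay.

    Minimizes sum_i lam_i / (c_i - lam_i) subject to sum_i c_i = C.
    KKT conditions give c_i = lam_i + sqrt(lam_i) * (C - sum(lam)) / sum(sqrt(lam)).
    Illustrative model only; the paper's actual formulation may differ.
    """
    lam_sum = sum(arrival_rates)
    if total_capacity <= lam_sum:
        raise ValueError("total capacity must exceed total arrival rate")
    slack = total_capacity - lam_sum
    sqrt_sum = sum(math.sqrt(l) for l in arrival_rates)
    return [l + math.sqrt(l) * slack / sqrt_sum for l in arrival_rates]

def total_delay(arrival_rates, capacities):
    """Aggregate M/M/1 delay for a given capacity split."""
    return sum(l / (c - l) for l, c in zip(arrival_rates, capacities))
```

For example, with arrival rates [1, 4] and total capacity 10, the closed form yields a lower aggregate delay than an equal split, illustrating why the allocation step benefits from convex optimization rather than naive sharing.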
"Deep Reinforcement Learning for Edge Service Placement in Softwarized Industrial Cyber-Physical System," IEEE Transactions on Industrial Informatics [online]. Available: https://dx.doi.org/10.1109/TII.2020.3041713 ; https://tsapps.nist.gov/publication/get_pdf.cfm?pub_id=931542 (accessed December 3, 2021).