With significant technology advancement and growing commercialization of edge/fog computing, edge AI (artificial intelligence) has become a new frontier. Edge AI spans multiple levels depending on the roles that edge nodes (e.g., network edge nodes, user devices) play in creating AI functions. At the base level, edge nodes use AI and machine learning (ML) functions created elsewhere (e.g., in the cloud) but do not participate in creating these functions. At the top level, edge nodes learn from their local data (i.e., edge learning) to help build AI/ML models not just for themselves but also for other network entities and user applications.
As a vast and rapidly growing amount of data is being created at the network edge and can no longer all be sent to the cloud as before, edge learning is becoming essential for training future ML models for advanced networks, computing infrastructures, and user applications. Edge learning, however, faces fundamental challenges that existing machine learning techniques cannot adequately address, including resource constraints, non-independent and identically distributed (non-IID) data, data privacy requirements, communication constraints, and heightened security vulnerabilities.
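As a minimal illustration of the non-IID challenge (not part of the project's own methods or codebase), the sketch below partitions a toy labeled dataset across several hypothetical edge nodes using Dirichlet-based label skew, a common device for modeling non-IID data in the federated/edge learning literature; the function name dirichlet_partition and the parameter values are assumptions chosen for illustration.

```python
# Illustrative sketch only: simulating non-IID label distributions across edge nodes.
import numpy as np

def dirichlet_partition(labels, num_edge_nodes, alpha, seed=0):
    """Split sample indices across edge nodes with label skew.

    Smaller `alpha` -> more strongly non-IID partitions;
    larger `alpha` -> partitions approach an IID split.
    """
    rng = np.random.default_rng(seed)
    labels = np.asarray(labels)
    node_indices = [[] for _ in range(num_edge_nodes)]

    for c in np.unique(labels):
        idx = np.flatnonzero(labels == c)
        rng.shuffle(idx)
        # Draw per-node proportions for this class from a Dirichlet prior.
        proportions = rng.dirichlet(alpha * np.ones(num_edge_nodes))
        cut_points = (np.cumsum(proportions)[:-1] * len(idx)).astype(int)
        for node_id, chunk in enumerate(np.split(idx, cut_points)):
            node_indices[node_id].extend(chunk.tolist())

    return [np.array(ix) for ix in node_indices]

if __name__ == "__main__":
    # Toy dataset: 10,000 samples over 10 classes, partitioned across 5 edge nodes.
    labels = np.random.default_rng(1).integers(0, 10, size=10_000)
    parts = dirichlet_partition(labels, num_edge_nodes=5, alpha=0.3)
    for i, ix in enumerate(parts):
        counts = np.bincount(labels[ix], minlength=10)
        print(f"edge node {i}: {len(ix)} samples, class counts {counts}")
```

With a small alpha (e.g., 0.3), each simulated edge node ends up holding a very different mix of classes, which is the setting that makes naive centralized training assumptions break down at the edge.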
This project: