Abstract
Mobile Edge Computing (MEC) has become essential for supporting
latency-sensitive applications such as the Internet of Things (IoT),
autonomous driving, and smart cities. However, efficient
resource allocation remains a challenge due to the dynamic nature of MEC
environments. This study proposes a Multi-Agent Reinforcement
Learning (MARL) framework combined with a lightweight neural network,
LtNet, to optimize task offloading and resource management in MEC. The
MARL framework enables each device to learn its own optimal offloading
strategy, while LtNet improves computational efficiency through the
H-Swish activation and selective Squeeze-and-Excitation modules. Experimental
results show that the proposed methods reduce task completion time by
12–22% and energy consumption by 5–8% while maintaining consistently
high resource utilization, demonstrating their effectiveness in
dynamic MEC environments.