Description: |
In multi-hop wireless sensor networks (WSNs), sensors operate autonomously and make routing decisions independently. However, these devices are often deployed in remote or inaccessible areas and have limited energy and memory resources. As the network scales, managing it efficiently to conserve resources and extend its lifetime becomes increasingly challenging. Software-defined WSNs (SDWSNs) address this by enabling centralized control of low-power WSNs. However, continuously updating the controller with the network state generates significant traffic, resulting in energy loss, increased overhead, and reduced scalability and network lifetime. This study proposes a scalable SDWSN framework (SSDWSN) to address these challenges. The proposed approach focuses on scheduling, balanced routing, aggregation, and reducing the traffic overhead caused by periodic network state updates to the controller. The paper presents the architecture of the proposed framework along with its Deep Reinforcement Learning (DRL) agent, and proposes two Proximal Policy Optimization (PPO)-based learning policies, PPO-ATCP and PPO-NSFP. These policies are designed to use SDWSN resources efficiently and to predict the network state accurately: they continuously monitor the synchronized network state held by the controller, take appropriate actions, and update their learning parameters according to reward functions. Simulation results demonstrate that PPO-ATCP and PPO-NSFP reduce controller-bound traffic overhead by 57% and 85%, respectively, while improving energy efficiency by 28% and 53%. Additionally, PPO-NSFP achieved at least 85% accuracy in network state prediction across different network-size scenarios.
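
  For context, the following is a minimal sketch of the PPO clipped-surrogate update that learning policies such as PPO-ATCP and PPO-NSFP build on. It is an illustrative, generic example only: the network architecture, dimensions, hyperparameters, and the random batch are assumptions standing in for the paper's environment, state encoding, and reward design, which are not specified here.

    # Minimal PPO clipped-surrogate update (illustrative; all names and
    # sizes are assumptions, not the SSDWSN implementation).
    import torch
    import torch.nn as nn

    class PolicyValueNet(nn.Module):
        # Small shared-trunk actor-critic: logits for a discrete action
        # (e.g., a routing/scheduling choice) plus a state-value estimate.
        def __init__(self, obs_dim, act_dim, hidden=64):
            super().__init__()
            self.trunk = nn.Sequential(nn.Linear(obs_dim, hidden), nn.Tanh())
            self.pi = nn.Linear(hidden, act_dim)  # action logits
            self.v = nn.Linear(hidden, 1)         # state value

        def forward(self, obs):
            h = self.trunk(obs)
            return self.pi(h), self.v(h).squeeze(-1)

    def ppo_update(net, opt, obs, actions, old_logp, advantages, returns,
                   clip_eps=0.2, vf_coef=0.5):
        # One gradient step on the PPO clipped objective over a batch of
        # transitions collected from the environment (advantages would
        # typically be estimated with GAE).
        logits, values = net(obs)
        dist = torch.distributions.Categorical(logits=logits)
        ratio = torch.exp(dist.log_prob(actions) - old_logp)  # pi_new/pi_old
        clipped = torch.clamp(ratio, 1 - clip_eps, 1 + clip_eps)
        policy_loss = -torch.min(ratio * advantages, clipped * advantages).mean()
        value_loss = (returns - values).pow(2).mean()
        loss = policy_loss + vf_coef * value_loss
        opt.zero_grad()
        loss.backward()
        opt.step()

    # Purely illustrative usage on a random batch; in the paper's setting
    # the batch would come from monitoring the controller's synchronized
    # network state and the associated reward signals.
    net = PolicyValueNet(obs_dim=8, act_dim=4)
    opt = torch.optim.Adam(net.parameters(), lr=3e-4)
    obs = torch.randn(32, 8)
    actions = torch.randint(0, 4, (32,))
    with torch.no_grad():
        logits, _ = net(obs)
        old_logp = torch.distributions.Categorical(logits=logits).log_prob(actions)
    ppo_update(net, opt, obs, actions, old_logp,
               advantages=torch.randn(32), returns=torch.randn(32))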