Establishment of Lightweight-cost Re-routing Paths for the Network Update Algorithm FLIP

Author: Lin, Meng-Han, 林孟翰
Year of Publication: 2018
Document Type: thesis
Description: 106
In Software Defined Networks, many traffic demands forward packets along routes. A demand's route may have to change for reasons such as node failures, scheduled maintenance, or insufficient capacity on the current route. A transition from one network state to a new network state is called a network update. Traditionally, network updates use a two-phase update: the network ensures that all final-state rules are installed before the old rules are removed from the routers. This approach is a general solution, but it is very costly in memory resources. After the two-phase scheme, an ordered rule replacement algorithm was proposed; it shows that if the network is updated in a planned order, excessive memory use during the update can be avoided. However, ordered rule replacement may be unable to find a feasible order once network functions must be taken into account. The network update algorithm FLIP, building on the idea of ordered rule replacement, provides a quick way to determine which routers must install the rules of both states simultaneously in order to complete the update. Given the initial and final network states, FLIP schedules an update order and designates the routers that need to hold the rules of both states. This shows that FLIP by itself cannot prevent excessive memory usage, so this thesis redesigns the final network state for FLIP to avoid consuming too much memory during network updates, while still satisfying the requirements of service function chaining. Compared with shortest paths that only consider the service chain, the experimental results show that the proposed design saves up to 52% of memory resource usage in smaller network topologies and up to 93% in larger network topologies.
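As an illustration only, not the algorithm from the thesis, the sketch below shows the kind of comparison baseline the abstract mentions: a route from source to destination that satisfies an ordered service function chain by concatenating per-segment shortest paths. The topology, node names ("fw", "px"), and the chain itself are hypothetical assumptions.

# Illustrative sketch, not the thesis implementation: a service-chain-aware
# baseline route built by concatenating shortest paths between waypoints.
import heapq

def shortest_path(adj, src, dst):
    """Dijkstra on a weighted directed graph {node: [(neighbor, cost), ...]};
    returns the list of nodes on a shortest src -> dst path, or None."""
    dist, prev, heap, seen = {src: 0}, {}, [(0, src)], set()
    while heap:
        d, u = heapq.heappop(heap)
        if u in seen:
            continue
        seen.add(u)
        if u == dst:
            break
        for v, w in adj.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(heap, (nd, v))
    if dst not in dist:
        return None
    path = [dst]
    while path[-1] != src:
        path.append(prev[path[-1]])
    return path[::-1]

def chain_route(adj, src, dst, chain):
    """Path from src to dst that visits the service-chain nodes in order,
    built from per-segment shortest paths (the comparison baseline)."""
    waypoints, route = [src] + list(chain) + [dst], [src]
    for a, b in zip(waypoints, waypoints[1:]):
        seg = shortest_path(adj, a, b)
        if seg is None:
            return None
        route += seg[1:]  # drop the repeated waypoint at each junction
    return route

if __name__ == "__main__":
    # Hypothetical topology: the chain requires a firewall ("fw"), then a proxy ("px").
    adj = {
        "s":  [("fw", 1), ("a", 1)],
        "a":  [("px", 3)],
        "fw": [("px", 1), ("a", 1)],
        "px": [("d", 1)],
        "d":  [],
    }
    print(chain_route(adj, "s", "d", ["fw", "px"]))  # ['s', 'fw', 'px', 'd']

The thesis goes further than this baseline: it redesigns the final-state paths so that, when FLIP schedules the update, fewer routers must hold the rules of both states at once.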
Database: Networked Digital Library of Theses & Dissertations