A Wireless Mesh Network (WMN) is a self-organized wireless communication network that is widely deployed in small networks around the world. Since the whole network is wireless, no extra cost of wiring is incurred. A WMN is a mesh of interconnected routers and clients: the mesh routers are static radio nodes that form the backbone of the network, while the mesh clients are mobile devices. A mesh client can be any wireless device accessing the Internet (e.g., a laptop or a cell phone). The network maintains connectivity over long distances by linking through the routers in small hops. If a router goes down, the network heals itself and reconnects through a different reliable shortest path. A subset of the wireless mesh routers (radio nodes) are also used as gateways to connect to an optical network. When a user sends a packet from a laptop or mobile device into the WMN, it is transmitted through the radio nodes over multiple hops until it reaches a gateway node, which sends the packet to the optical network, which in turn forwards it to the rest of the Internet. Similarly, a received packet comes from the Internet, passes through a gateway node, and traverses multiple routers until it reaches the mobile device (host). The whole setup is shown in Figure 1, where the highlighted nodes are the gateways connected to the rest of the Internet.
Although the approaches in [2, 3, 4, 5, 6] have worked very well in reducing the delay from source to destination, each optimizes only one aspect of routing. In [4], the authors take throughput into account and define a metric known as Expected Throughput (ETP). ETP chooses the path from source to destination that has the highest throughput. Although throughput is increased in this way, delay may not be handled properly, because the path with the highest throughput may not be the shortest one; a longer path may be selected, and the packet will keep moving through the network. ETP is therefore not suitable for delay-sensitive data. In [5], the author works on minimizing the number of transmissions required to successfully deliver a packet from source to destination, taking packet failure into account, and defines a metric known as the Expected Transmission Count (ETX). The ETX value is based on the history of the links: a link with a better transmission record is chosen for packet forwarding. Although this is a good technique, the metric may again increase throughput without reducing the packet delay that may be incurred, since a link or path with a good transmission record may not lie on the shortest path. In [6], the author defines another metric, the Expected Transmission Time (ETT), which chooses the path that takes the shortest time from source to destination. All the metrics discussed above take only one factor into account, i.e., the transmission time, the number of transmissions required, or the throughput. Finding the best path from source to destination involves many considerations, such as congestion, throughput, loss, delay, the current load on the network, and load balancing. None of the above metrics caters to all of these requirements.
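To make the contrast concrete, the following is a minimal sketch (our own illustration, not the code of [5]) of how an ETX-style path metric can prefer a longer reliable path over a short lossy one. It assumes the standard formulation in which a link's ETX is 1 / (df * dr), with df and dr being the forward and reverse delivery ratios measured from the link's transmission history, and a path's ETX being the sum over its links.

```python
def link_etx(df, dr):
    """Expected number of transmissions to deliver one packet on a link,
    given forward (df) and reverse (dr) delivery ratios."""
    return 1.0 / (df * dr)

def path_etx(links):
    """Path metric: sum of per-link ETX values."""
    return sum(link_etx(df, dr) for df, dr in links)

# Two candidate paths: a single poor link vs. two good links.
short_path = [(0.5, 0.6)]                 # one hop, lossy link
long_path  = [(0.95, 0.95), (0.9, 0.95)]  # two hops, reliable links

# ETX picks the longer path because it expects fewer total transmissions,
# even though the short path would have lower propagation delay.
best = min([short_path, long_path], key=path_etx)
```

This illustrates the paper's point: the ETX-optimal path need not be the shortest one, so delay-sensitive traffic can suffer.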
Besides, other traditional algorithms such as the Shortest Path Routing Algorithm (SPRA), the Minimum Hop Routing Algorithm (MHRA), and the Predictive Throughput Routing Algorithm (PTRA) also do not work well in a WMN scenario. As discussed above, the shortest path may not be the best path: it may not provide the required throughput, and it may be congested. Moreover, if there is only one shortest path, all nodes will keep forwarding their packets along it, making the shortest path even more congested and giving rise to a load-balancing problem. The minimum-hop algorithm suffers from similar problems. Suman Sarkar proposed a very good algorithm in [2], which identifies four types of delay in a WMN:
Slot Synchronization Delay
Queuing Delay
Transmission Delay
Propagation Delay
All four delays are calculated for each individual link and summed; these per-link sums are then added across all the links of a path to obtain the overall delay from source to destination. The path with the smallest delay is chosen as the best path, and all packets are forwarded along that very path. However, the simulation results show poor load balancing, low throughput compared to ETP, and more congestion. To overcome the problems of [2], Abu Saem Reaz proposed the Capacity and Delay Aware Routing (CaDAR) algorithm [3]. CaDAR does no more than assign capacities to the links before the arrival of the packet, based on LSAs. Since capacity is assigned according to the packet size, congestion is avoided considerably; CaDAR handles delay and congestion more than three times better than DARA. However, looking at the procedures adopted in [2, 3], both introduce a great deal of processing for calculating the shortest path, and in our view this processing may itself introduce delay.
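The delay model described above can be sketched as follows. This is our own minimal illustration (not the authors' code), assuming each link's weight is simply the sum of its four delay components and the least-delay path is found with Dijkstra's algorithm; the node names and delay values are hypothetical.

```python
import heapq

def link_delay(slot_sync, queuing, transmission, propagation):
    """Per-link delay in [2]: sum of the four delay components."""
    return slot_sync + queuing + transmission + propagation

def least_delay_path(graph, src, dst):
    """Dijkstra over per-link delays.
    graph: {node: [(neighbor, delay), ...]}. Returns (total_delay, path)."""
    pq = [(0.0, src, [src])]
    seen = set()
    while pq:
        d, node, path = heapq.heappop(pq)
        if node == dst:
            return d, path
        if node in seen:
            continue
        seen.add(node)
        for nxt, w in graph.get(node, []):
            if nxt not in seen:
                heapq.heappush(pq, (d + w, nxt, path + [nxt]))
    return float("inf"), []

# Hypothetical topology: two routes from S to D.
g = {
    "S": [("A", link_delay(1, 2, 1, 0.5)), ("B", link_delay(0.5, 1, 1, 0.5))],
    "A": [("D", link_delay(1, 1, 1, 0.5))],
    "B": [("D", link_delay(2, 3, 1, 0.5))],
}
delay, path = least_delay_path(g, "S", "D")
```

Note that this whole computation must run for every packet in [2], which is exactly the per-packet processing overhead the rest of this paper seeks to eliminate.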
Based on the above study, we come to the following conclusions:
We can eliminate Slot Synchronization delay.
We can eliminate Queuing delay.
We can eliminate the large amount of processing introduced by [2] and [3] to calculate the shortest least-delay path. This processing not only consumes time, thereby introducing delay, but also consumes processing power.
We can eliminate the need for LSAs. They are overhead, and calculations based on an LSA may go wrong if the inter-LSA period is increased.
Proposed Solution
The link condition and the capacity required by a node are advertised in LSAs, which are sent periodically. If the link conditions change after an LSA, for instance if the flow on a link changes, the link will be either under-utilized or over-utilized and may suffer congestion or packet loss, depending on whether the node's demand increased or decreased after that LSA. The time slots for the various nodes and their data are also adjusted according to the LSAs. All the algorithms discussed above therefore depend heavily on LSAs, yet in the inter-LSA period the network condition may change dramatically, leading to wrong estimations and calculations. Slots may also be wrongly synchronized, and due to the lack of up-to-date network information, capacity may be wrongly assigned to a link, leading to further problems such as congestion and packet loss. One solution is to send LSAs very often, but in that case a major chunk of bandwidth is consumed by the LSAs themselves. In our proposed scheme, we instead send a control packet ahead of the actual data packet, so there is no need for LSAs. We embed the sequence number and the size of the data packet in the control packet; the control packet is essentially the header of the data packet, separated from it. The sequence number is used for synchronization and identification, so the control packet contains fields for the sequence number and the size of the data. One extra field, carved from the bits of the reserved field, is added to the data packet to carry the sequence number. The control packet travels ahead of the data packet, finding the shortest least-delay path for it, and the data packet then follows the same path the control packet took. All routing decisions, and the calculation of the shortest least-delay path, are made for the control packet only.
There is no processing on the actual data. An intermediate node, just by looking at the sequence number of the packet, forwards it along the same path on which the control packet was forwarded. The control packet also tells each node exactly how much data is coming from a particular node, so capacities are assigned to the links according to the control packet's information, since it carries the size of the upcoming data packet. Thus, not only is there no processing on the data packet while capacities are assigned to the links, but the control packet also synchronizes the time slot at each node, so that when the data packet reaches a node it does not have to wait for its designated time slot and is forwarded as soon as it arrives.
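The per-node state this scheme requires can be sketched as follows. This is an illustrative sketch under our own naming (class and field names are hypothetical, not from the paper): the control packet installs a forwarding entry keyed by the sequence number, and the matching data packet is forwarded with a single table lookup, with no route computation on the data path.

```python
class MeshNode:
    """Toy model of an intermediate mesh router in the proposed scheme."""

    def __init__(self, name):
        self.name = name
        self.forwarding = {}  # seq_no -> (next_hop, reserved_bytes)

    def on_control_packet(self, seq_no, data_size, next_hop):
        # The routing decision happens here, once, for the control packet
        # only; link capacity for the upcoming data packet is reserved
        # at the same time using the advertised data size.
        self.forwarding[seq_no] = (next_hop, data_size)

    def on_data_packet(self, seq_no):
        # No processing beyond the lookup: the data packet is forwarded
        # along the path the control packet already established.
        next_hop, _reserved = self.forwarding.pop(seq_no)
        return next_hop

# The control packet for sequence number 500 passes through node B first,
# choosing C as the next hop and reserving capacity for 1500 bytes.
node_b = MeshNode("B")
node_b.on_control_packet(seq_no=500, data_size=1500, next_hop="C")
forwarded_to = node_b.on_data_packet(500)
```

The entry is removed after use here, reflecting that the reservation covers only the one announced data packet.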
Example of the Proposed Mechanism: A Scenario
Figure 2: A WMN scenario for our example
Consider Figure 2 above, and suppose End-User 2 generates data with sequence number 500. This sequence number and the size of the data packet are embedded in the control packet; the only addition to the data packet is a field for the sequence number, so that the intermediate nodes can recognize it. Now suppose the control packet, after reaching node A (since End-User 2 is connected to the network via node A), finds the shortest least-delay link to node B; at B it finds the shortest least-delay link to node C, and then to the destination "A". Based on the control packet, capacities are also assigned to the links {A,B} and {B,C}. The intermediate nodes B and C adjust their time slots according to the arrival of the data packet and perform no processing on the actual data packet; the only thing an intermediate node does is check the sequence number. Recognizing the sequence number, they forward the data packet along the path the control packet followed. Another advantage of the control packet is that, when it reaches nodes B and C, it synchronizes the time slots at B and C so that the data packet's time slot comes up just as the packet arrives. The data packet is therefore forwarded as soon as it reaches each node, without having to wait for its designated time slot, so there is no queuing delay and no time-slot synchronization delay. In this way, three types of delay that significantly impact the overall performance of a WMN, namely processing-overhead delay on the data packet, queuing delay, and time-slot synchronization delay, are reduced.
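The time-slot synchronization benefit can be illustrated with a toy timing model (our own, hypothetical; the paper does not give formulas). Assuming a periodic slot schedule with frame length F, a packet arriving at time t waits (s - t) mod F for its slot starting at s; the control packet's job is to move s to coincide with the data packet's predicted arrival, collapsing that wait to zero.

```python
def slot_wait(arrival_time, slot_start, frame_len):
    """Time a packet waits for its next slot under a periodic schedule
    with the given frame length (all times in the same units)."""
    return (slot_start - arrival_time) % frame_len

# Without synchronization: the slot is placed independently of the
# arrival, so the data packet sits in the queue until its slot comes.
wait_unsynced = slot_wait(arrival_time=7.0, slot_start=2.0, frame_len=10.0)

# With synchronization: the control packet has aligned the slot to the
# predicted arrival time, so the data packet is forwarded immediately.
wait_synced = slot_wait(arrival_time=7.0, slot_start=7.0, frame_len=10.0)
```

Under this model the queuing and slot-synchronization delays at nodes B and C both reduce to the synchronized case.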
Figure 3 below shows data packets of various end users and their time slots. Packets are forwarded in their time slots one by one. In the existing schemes, the time slots are synchronized on the basis of LSAs. In our scheme, the control packet of a particular data packet synchronizes the time slot so that as soon as its data packet reaches the node's queue, its time slot also comes, and it is forwarded without having to wait for its designated slot. Again, the only thing the intermediate node has to check is the sequence number of the data packet.
Figure 3: Queue of Packets and Their Time-Slots
Conclusion
We have presented a technique to reduce the overall delay and use the network resources efficiently. In [2, 3], the delay must be calculated for every link, and ultimately for the entire path, for each packet sent, and a decision must then be made about the shortest least-delay path for every data packet. Both algorithms also depend on LSAs, even though network conditions may change before the next link-state advertisement. In this paper, we have shown an efficient mechanism for assigning capacities to the links and synchronizing the time slots of the nodes, in which no processing or routing decision is performed on the data packet. The only thing an intermediate node has to check is the sequence number of the data packet, which is then automatically forwarded along the path on which the control packet was forwarded. Since there is no other processing on the data packet, and since the control packet also reserves time for the data packet at the intermediate nodes, the packet does not have to wait in the queue for its designated time slot. There is thus a significant reduction in the overall delay: all three types of delay, namely queuing delay, time-slot synchronization delay, and processing delay, are eliminated. The need for LSAs is also eliminated, and since there are no LSAs, resources are utilized efficiently.
Comparison
| Routing Algorithm      | Delay Handled?                       | Good Throughput? | Load Balancing? | Relying on LSAs? | Efficient Resource Utilization?                  |
| SPRA                   | Up to some extent                    | Not so much      | No              | Yes              | No                                               |
| MHRA                   | Up to some extent                    | Not so much      | No              | Yes              | No                                               |
| PTRA                   | No                                   | Yes              | Yes             | Yes              | No                                               |
| [2]                    | Yes, but introduces processing delay | Yes              | Yes             | Yes              | No (also assigns no capacity, causing congestion) |
| [3]                    | Yes, but introduces processing delay | Yes              | Yes             | Yes              | No                                               |
| The Proposed Technique | Yes                                  | Yes              | Yes             | No               | Yes                                              |