
International Journal of Computational Intelligence and Information Security, April 2012, Vol. 3, No. 4, ISSN: 1837-7823

Energy and Bandwidth based Fair Queue Scheduling Algorithm for MANET

D. Anitha, Research Scholar, Karpagam University; Dr. M. Punithavalli, Dean, Sri Ramakrishna Engineering College. Anithasuresh2003@gmail.com

Abstract
In mobile ad hoc networks (MANETs), multi-hop data forwarding, limited bandwidth, and dynamic changes in network topology hinder network performance. This paper proposes an Energy and Bandwidth-based Fair Queue Scheduling (EBFQS) algorithm for MANETs. Nodes take their own load state into account when forwarding packets, and packet priorities are assigned according to the current load level of the node. When nodes are lightly loaded, they should first help other nodes to construct routes; when their load level is high, they should delay or refuse the construction of new routes passing through them, in order to avoid increasing transmission delay and packet loss. Simulation results show that the Energy and Bandwidth-based Fair Queue Scheduling algorithm effectively decreases network transmission delay and bandwidth consumption, and improves network throughput and energy efficiency to a certain extent.

1. Introduction
A MANET is an autonomous system of mobile nodes connected through wireless links. It has no fixed infrastructure. Nodes in a MANET move randomly at varying speeds, resulting in continuously changing network topologies. Each node in the network serves as a router that forwards packets to other nodes, and each flow from source to destination traverses multiple hops of wireless links. A MANET is quite different from a distributed wireless LAN or a wired network: there is no centralized control, and it is difficult for any single mobile host to have an accurate picture of the topology of the whole network. The scarcity of bandwidth implies that communication overhead among nodes must stay low. In addition, the error-prone shared broadcast channel, hidden and exposed terminal problems, and constraints on battery power all make providing end-to-end packet delivery and delay guarantees in a MANET a tough proposition. In this paper, we first re-examine the fairness notion in an ad-hoc network and propose a new fairness model that captures the features of location-dependent contention and spatial reuse. The fair share of each packet flow is defined with respect to the corresponding flow contending graph. Each one-hop flow always receives a fair share in the bottleneck area of the network, represented by the maximum clique in the flow contending graph. We then propose a novel algorithm called EBFQS, which identifies the flows that currently receive reduced service in their locality and ensures their access to the wireless channel.

D. Anitha is a Research Scholar at Karpagam University, Coimbatore. Dr. M. Punithavalli is Director of Sri Ramakrishna Engineering College, Coimbatore.


The scheduling coordination in EBFQS is localized within a flow's one-hop neighborhood in the flow contending graph. EBFQS further advances along three dimensions. First, it offers a larger fair share to each flow, characterized by the fair share in the maximum clique of the flow contending graph. Second, it is more resilient against incomplete and/or erroneous scheduling information caused by collisions. Finally, EBFQS decouples delay and throughput, leading to more efficient utilization of the wireless channel among applications with different delay/bandwidth requirements. Extensive simulation results demonstrate the effectiveness and efficiency of our localized and fully distributed design.

2. Literature Review
Fair queueing has been a popular scheduling paradigm in both wireline networks [1, 2] and packet cellular networks. However, these algorithms do not directly apply in a wireless ad-hoc network [3], due to location-dependent channel contention, spatial channel reuse, and incomplete scheduling information [4]. Since wireless transmissions are local broadcasts, contention for the shared channel is location dependent. This implies that packet scheduling is no longer a local decision at the sender, as it is in wireline or packet cellular networks. Instead, a node has to consider the scheduling decisions made by other neighboring nodes that share the same wireless channel. Moreover, a local scheduler implemented at a single node does not have explicit flow information, e.g., packet arrival times and flow status, for the contending flows originated by its neighbors. Therefore, fair queueing in a wireless ad-hoc network is by nature a distributed scheduling problem. Finally, the wireless channel capacity is limited, so improving channel utilization through spatial channel reuse [5, 6], i.e., simultaneously scheduling flows that do not interfere with each other, is highly desirable. Previous research on improving MANET performance has addressed efficient routing protocols and Medium Access Control (MAC) schemes that reduce MAC-layer collisions and share the bandwidth fairly [6]. However, little attention has been paid to developing efficient queueing mechanisms [7] at the radio interfaces for effective operation of these routing protocols. Most MANETs use low-bandwidth radios, so efficient queueing mechanisms can greatly improve the effectiveness of routing protocols. The authors of [8] evaluated packet scheduling algorithms using the DSR and GPRS routing protocols in MANETs, and pointed out that the benefit of giving control packets priority over data packets depends on the routing protocol used.

They also find that setting priorities among data packets can decrease end-to-end packet delay significantly. Reference [9] views a node and its interfering nodes as a neighborhood and applies this concept to a distributed neighborhood queue, aiming to solve TCP unfairness in MANETs. Scheduling algorithms [10, 11] determine which packet is served next among the packets in the queues. Most current research on MANETs uses a simple priority scheduling algorithm [13] for simulation, where data packets are scheduled in FIFO order and all routing packets (RREQ, RREP, and RERR) are given priority over data packets for transmission at the network interface queue. In this paper, an energy and bandwidth based priority scheduling policy is investigated, aiming to improve aggregate performance for best-effort traffic, and a load-based scheduling algorithm is proposed: when a node has packets to schedule, it takes its own traffic load into account.

3. Method
In a MANET, the multiple roles of nodes as routers and terminals, multi-hop forwarding of packets, and mobility-induced frequent transmission of routing packets may produce unique queue dynamics. A simple priority queue ensures that high-priority packets are given unconditional preference over low-priority packets; when the amount of routing traffic is exceptionally high, data traffic might not be transmitted at all, or only at a very low rate, which can lead to the stagnation of data traffic. In addition, route failures and route changes due to node mobility may be frequent events in a MANET. Routing messages tend to occur in bursts, which makes the queue buffer overflow easily. Hence, always giving scheduling priority to routing packets is not a good idea, and a scheduling scheme is needed to overcome these shortcomings. The proposed Energy and Bandwidth based Fair Queue Scheduling (EBFQS) mechanism is shown in Fig 1. Different queue scheduling policies are adopted according to the current load of the node. Queue length is used as the load indicator, and three load levels are defined by two thresholds, Minth and Maxth: light load, where the queue length is less than Minth; medium load, where the queue length is between Minth and Maxth; and heavy load, where the queue length is greater than Maxth. A different scheduling policy is adopted at each load level.
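As a concrete illustration, the three-level classification can be sketched as follows. This is a minimal Python sketch: the Minth/Maxth scheme comes from the text, but the threshold values and function name are hypothetical.

```python
# Hypothetical threshold values; the paper defines the scheme but not the numbers.
MIN_TH = 10  # Minth: queue lengths below this mean light load
MAX_TH = 40  # Maxth: queue lengths above this mean heavy load

def load_level(queue_length):
    """Classify a node's load from its interface queue length."""
    if queue_length < MIN_TH:
        return "light"
    if queue_length <= MAX_TH:
        return "medium"
    return "heavy"

print(load_level(5), load_level(25), load_level(60))  # -> light medium heavy
```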


Fig 1: Energy and Bandwidth based Fair Queue Scheduling Mechanism

During light load, all routing messages are given priority for scheduling. The node can provide enough buffer space to hold all incoming packets, so it should help to finish the route discovery process quickly.

During medium load, the forwarding delay and the load reach a balance, and the node works close to its stable state. Priority scheduling of RREQ and RREP messages would attract more packets to this node and overload it. To avoid congestion, RREQ and RREP are therefore treated like data packets and scheduled in FIFO order, while RERR messages keep their priority as before.

When the node is under heavy load, it has only a few buffers left. The broadcasting of RREQ messages in search of routes is a highly redundant process, and the flooding of RREQs would fill the remaining buffer quickly. To recover from this severe state in time, all newly arriving RREQ messages are dropped. RREP messages, which carry crucial routing information and have much less redundancy, are still given the same priority as data packets, and RERR messages keep their priority as before.

In the simple priority queue, all incoming routing messages are inserted at the front of the queue regardless of the message type and of whether the interface queue buffer is full. Under this policy, an RREQ message may be replied to even though the buffer is already full of packets. In that situation, a successful transmission of an RREP constructs a routing path through this node and makes it more congested, whereas a successful transmission of an RERR can save a large number of data packets from soon being misdirected. Clearly, routing messages should not all be given scheduling priority when the queue buffer is full. Giving RREQ messages priority for scheduling only ensures that a newly constructed route is the shortest.
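The per-level rules above can be encoded as a small dispatch function. This Python sketch is illustrative only: the numeric priority encoding (lower served first) and the function name are our choices, not the paper's; only the rules themselves come from the text.

```python
DROP = "drop"                      # packet is not admitted to the queue
ROUTING = ("RREQ", "RREP", "RERR")

def schedule_class(load, pkt_type):
    """Map (node load level, packet type) to a scheduling decision."""
    if load == "light":
        # All routing messages keep priority over data.
        return 0 if pkt_type in ROUTING else 1
    if load == "medium":
        # Only RERR keeps priority; RREQ/RREP are queued FIFO with data.
        return 0 if pkt_type == "RERR" else 1
    # Heavy load: new RREQs are dropped to stop redundant flooding;
    # RREP is treated like data; RERR keeps priority.
    if pkt_type == "RREQ":
        return DROP
    return 0 if pkt_type == "RERR" else 1

print(schedule_class("medium", "RREQ"), schedule_class("heavy", "RREQ"))  # -> 1 drop
```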
From the above discussion, the shortest path may not be the best one when some intermediate node is congested. A newly constructed route should avoid congested nodes, so a new drop policy is proposed: RREQ and RREP messages are given priority for dropping, while RERR keeps its priority for scheduling. RREQ and RREP messages are dropped from the front of the queue until enough room is freed for newly arriving data packets (newly arriving RREQ and RREP messages are dropped first); otherwise, data packets are dropped from the tail.

Energy Consumption Model

The goal of our energy-efficient packet scheduling algorithm is to adapt the output rate R(t) to match the instantaneous workload while bounding the resulting performance impact. Due to the convexity of the energy-speed curve, averaging the workload yields higher energy savings; in fact, maximum savings would be obtained by operating at the long-term average input rate, smoothing out all input workload variations. However, buffering the input variations incurs a performance penalty through increased packet delays. The crux of the problem therefore reduces to determining the degree of buffering (i.e., how much workload averaging to perform) while bounding the increase in packet delays. If every server always runs at Rmax, Weighted Fair Queueing guarantees the following end-to-end delay bound for each connection i:
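For reference, the classical Parekh-Gallager end-to-end delay bound for WFQ, of which equation (1) below is presumably an instance, has the following form; here Bi, Ci, Hi and Pi are as defined below, and P_max (the largest packet size at any server) is our notation, not the paper's:

```latex
D_i \;\le\; \frac{B_i}{C_i}
      \;+\; \sum_{m=1}^{H_i - 1} \frac{P_i}{C_i}
      \;+\; \sum_{m=1}^{H_i} \frac{P_{\max}}{R_{\max}}
```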

(1)

where Bi, Ci, Hi and Pi are, respectively, the burst size, connection rate, hop count and maximum packet size for connection i. Our major result is that for any rate-adaptive policy that uses rates between Rmin and Rmax, and always uses rate Rmax when the queue exceeds a threshold U, scheduling according to Weighted Fair Queueing bounds the end-to-end delay by

(2)

We begin our analysis by focusing on a single server in isolation, which allows us to determine some of the basic tradeoffs between queue size and energy usage. First, we define a simple lower bound on the energy consumption needed to keep the queue size bounded by Q, which serves as a benchmark henceforth. We then present a bound on the optimal tradeoff between queue size and energy efficiency that can be achieved by any rate-adaptation policy. Recall that optQ(t) is the minimum amount of energy required by time t if we wish to keep the queue bounded by Q at all times; let optQ(t1, t2) be defined similarly on the interval [t1, t2). We derive a simple bound on optQ(t1, t2). For a single server s,

(3)

Intuitively, guaranteeing a better bound on queue size should incur higher energy usage. We establish that such a tradeoff is inherently unavoidable.
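This tradeoff can be illustrated with a toy simulation of the threshold policy introduced earlier (rates between Rmin and Rmax, switching to Rmax once the backlog exceeds U). The quadratic energy curve E(R) = R**2 and all numeric values are assumptions chosen only to make the convexity effect visible; this is a sketch, not the paper's model.

```python
import random

def simulate(arrivals, r_min, r_max, threshold):
    """Threshold rate adaptation over per-slot arrivals.

    Serves at r_min while the backlog is at or below `threshold`, at r_max
    otherwise. Energy per slot is modelled by a convex curve E(R) = R**2.
    Returns (maximum backlog observed, total energy).
    """
    backlog, max_backlog, energy = 0.0, 0.0, 0.0
    for a in arrivals:
        backlog += a
        rate = r_max if backlog > threshold else r_min
        backlog = max(0.0, backlog - rate)
        max_backlog = max(max_backlog, backlog)
        energy += rate ** 2
    return max_backlog, energy

random.seed(1)
arrivals = [random.uniform(0.0, 2.0) for _ in range(1000)]  # mean rate 1.0
adaptive = simulate(arrivals, r_min=1.0, r_max=3.0, threshold=5.0)
always_max = simulate(arrivals, r_min=3.0, r_max=3.0, threshold=0.0)
print(adaptive[1] < always_max[1])  # prints True: adaptation saves energy
```

The backlog stays bounded (near the threshold U), while the energy cost is far below that of always running at Rmax, exhibiting the buffering-versus-energy tradeoff.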


(4)

Let x1, x2, x3, ... be a sequence that satisfies

(5)

which implies that the amount of data that rate adaptation can serve is no more than Q/2. Equation (5) shows that the proposed EBFQS uses less energy and minimizes bandwidth consumption.

4. Performance Evaluation

In this section, we use simulations to evaluate the performance of EBFQS under different overlaying applications. EBFQS is implemented in the ns-2 simulator [12]. The radio model is based on existing commercial hardware with a wireless transmission range of 270 meters and a channel capacity of 3 Mbps. Each simulation runs for 270 seconds, and the results are compared with FQS. The applications of interest include FTP-driven TCP traffic, CBR-driven (constant bit rate) UDP traffic, audio-driven UDP traffic, and video-driven UDP traffic. For the FTP, CBR, and audio traffic, the modules in the ns-2 distribution are used; for the video traffic, actual traces drive the ns-2 simulations. All packets are 512 bytes, except that video traffic has varying packet sizes based on the actual traces. Dynamic Source Routing (DSR) is used for routing.

5. Experimental Results and Discussion

ns-2 is used for the simulations. 30 mobile nodes are randomly placed in a 1000 m × 750 m rectangular area. The simple priority algorithm is compared with EBFQS under different traffic loads. A Constant Bit Rate (CBR) source is used as the data source for each node; each source node transmits packets at a certain rate, and the traffic load is varied by changing the maximum number of connections. The random waypoint model is used, with a maximum allowed node speed of 10 meters per second. The following performance metrics are used to compare the two scheduling algorithms: average end-to-end delay, energy usage, throughput, and bandwidth.
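As a toy illustration of how these metrics are computed from per-packet data, consider the sketch below. The record fields and values are hypothetical; a real evaluation would parse the ns-2 trace file instead.

```python
# Hypothetical per-packet records: (send_time_s, recv_time_s, size_bytes, tx_energy_joules)
records = [
    (0.00, 0.12, 512, 0.004),
    (0.10, 0.31, 512, 0.005),
    (0.25, 0.40, 512, 0.003),
]

# Average end-to-end delay: mean of (receive time - send time).
delays = [recv - send for send, recv, _, _ in records]
avg_delay = sum(delays) / len(delays)

# Throughput: delivered bits over the observation window, in kbps.
duration = max(r[1] for r in records) - min(r[0] for r in records)
throughput_kbps = sum(r[2] for r in records) * 8 / 1000 / duration

# Energy usage: total transmission energy over all packets.
energy_joules = sum(r[3] for r in records)

print(avg_delay, throughput_kbps, energy_joules)
```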


Fig 2: Delay (sec) vs. load factor, EBFQS vs. FQS

Figure 2 shows the average packet delay under EBFQS and FQS, where the latter uses the queue-based rate adaptation policy. Similarly, Figure 3 shows the energy usage. As expected, EBFQS exhibits significantly lower energy consumption at light loads, at the expense of higher packet delays.


Fig 3: Energy Usage (joules) vs. load factor, EBFQS vs. FQS

Fig 4: Throughput (kbps) vs. packet flow, EBFQS vs. FQS

Figure 4 plots the resulting throughput as a function of the input packet flow. As expected, when the packet flow increases, throughput decreases. However, note that all packets complete transmission before their deadlines, thereby avoiding any timing violations in the system.


Fig 5: Bandwidth (Kbps) vs. packet flow, EBFQS vs. FQS

Finally, Figure 5 shows the impact of the packet flow on the bandwidth of our scheme for different values of the packet arrival rate. As can be seen in the figure, the bandwidth used increases with the packet flow, and our proposed EBFQS outperforms FQS.

6. Conclusion

In wireless embedded systems, communication energy is increasingly becoming the dominant component of total energy consumption. In this paper, we have targeted the packet scheduling process and investigated avenues for making it energy and bandwidth aware. We have presented an energy-efficient and bandwidth-based fair queue scheduling algorithm, EBFQS, for regulated traffic streams, which provides significant energy savings (up to a factor of 10) with only a small, bounded increase in worst-case packet delay. We have extended our scheme to a multiple-input-stream scenario and enhanced the weighted fair queueing algorithm for load balancing and energy efficiency.

References

[1] M. Andrews, A. Fernandez Anta, L. Zhang, and W. Zhao. Routing and scheduling for energy and delay minimization in the powerdown model. In IEEE INFOCOM '10, March 2010.
[2] C. Perkins, E. Royer, and S. Das. Ad-hoc On-Demand Distance Vector (AODV) Routing. IETF Internet Draft, February 2003.
[3] Wanyu Zhang, Meng Yu, Li Xie, and Zhongxiu Sun. A Survey of On-demand Routing Protocols for Ad Hoc Mobile Networks. Chinese Journal of Computers, 25(10), 2002, pp. 1009-1017.
[4] Longhuang Xiao and Brahim Bensaou. On Max-Min Fairness and Scheduling in Wireless Ad Hoc Networks: Analytical Framework and Implementation. In IEEE/ACM MobiHOC, Long Beach, CA, USA, 2001.
[5] N. Bansal, T. Kimbrel, and K. Pruhs. Speed scaling to manage energy and temperature. Journal of the ACM, 54(1), 2007.
[6] A. Wierman, L. L. H. Andrew, and A. Tang. Power-aware speed scaling in processor sharing systems. In IEEE INFOCOM '09, pages 2007-2015, 2009.
[7] A. Demers, S. Keshav, and S. Shenker. Analysis and simulation of a fair queueing algorithm. Journal of Internetworking: Research and Experience, 1:3-26, 1990.
[8] J. C. Bennett and H. Zhang. WF2Q: Worst-case Fair Weighted Fair Queueing. In IEEE INFOCOM, 1996.
[9] L. M. Feeney and M. Nilsson. Investigating the energy consumption of a wireless network interface in an ad hoc networking environment. In IEEE INFOCOM, 2001.
[10] P. Goyal, H. M. Vin, and H. Cheng. Start-time Fair Queueing: A Scheduling Algorithm for Integrated Services Packet Switching Networks. In ACM SIGCOMM, 1996.
[11] X. L. Huang and B. Bensaou. On Max-min Fairness and Scheduling in Wireless Ad-Hoc Networks: Analytical Framework and Implementation. In ACM MOBIHOC, 2001.
[12] NS-2. http://www.isi.edu/nsnam/ns/


[13] V. Raghunathan, S. Ganeriwal, C. Schurgers, and M. B. Srivastava. E2WFQ: An energy efficient fair scheduling policy for wireless systems. In Proceedings of the International Symposium on Low Power Electronics and Design (ISLPED), ACM, New York, 2002, pp. 30-35.
