By
Mr. M.SHANMUGHARAJ
ASSISTANT PROFESSOR
This course material, prepared by me, meets the knowledge requirement of the university curriculum.
Name: M.Shanmugaraj
Signature of HD
Name: Dr. K. Pandiarajan
SEAL
TEXT BOOK
1. William Stallings, High-Speed Networks and Internets: Performance and Quality of Service,
Pearson Education, Second Edition, 2002.
REFERENCES
1. Jean Walrand, Pravin Varaiya, High-Performance Communication Networks, Second
Edition, Harcourt Asia Pvt. Ltd., 2001.
2. Ivan Pepelnjak, Jim Guichard, Jeff Apcar, MPLS and VPN Architectures, Cisco Press,
Volumes 1 and 2, 2003.
3. Abhijit S. Pandya, Ercan Sen, ATM Technology for Broadband Telecommunication
Networks, CRC Press, New York, 2004.
CS2060 HIGH SPEED NETWORKS
Unit I
HIGH SPEED NETWORKS
Frame Relay often is described as a streamlined version of X.25, offering fewer of the robust
capabilities, such as windowing and retransmission of lost data, that are offered in X.25.
Frame Relay Devices
Devices attached to a Frame Relay WAN fall into the following two general categories:
Data terminal equipment (DTE)
Data circuit-terminating equipment (DCE)
DTEs generally are considered to be terminating equipment for a specific network and
typically are located on the premises of a customer. In fact, they may be owned by the
customer. Examples of DTE devices are terminals, personal computers, routers, and bridges.
DCEs are carrier-owned internetworking devices. The purpose of DCE equipment is to provide
clocking and switching services in a network; these are the devices that actually transmit data
through the WAN. In most cases, they are packet switches. Figure 10-1 shows the relationship
between the two categories of devices.
SCE 1 ECE
5. Information Field. A system parameter defines the maximum number of data bytes that
a host can pack into a frame. Hosts may negotiate the actual maximum frame length at
call set-up time. The standard specifies the maximum information field size
(supportable by any network) as at least 262 octets. Since end-to-end protocols
typically operate on the basis of larger information units, frame relay recommends that
the network support a maximum value of at least 1600 octets, in order to avoid the
need for segmentation and reassembly by end users.
Frame Check Sequence (FCS) Field. Since one cannot completely ignore the bit error-rate of
the medium, each switching node needs to implement error detection to avoid wasting
bandwidth due to the transmission of errored frames. The error detection mechanism used in
frame relay uses the cyclic redundancy check (CRC) as its basis.
The design of X.25 aimed to provide error-free delivery over links with high error-rates. Frame
relay takes advantage of the new links with lower error-rates, enabling it to eliminate many of
the services provided by X.25. The elimination of functions and fields, combined with digital
links, enables frame relay to operate at speeds 20 times greater than X.25.
X.25 specifies processing at layers 1, 2 and 3 of the OSI model, while frame relay operates at
layers 1 and 2 only. This means that frame relay has significantly less processing to do at each
node, which improves throughput by an order of magnitude.
X.25 prepares and sends packets, while frame relay prepares and sends frames. X.25 packets
contain several fields used for error and flow control, none of which frame relay needs. The
frames in frame relay contain an expanded address field that enables frame relay nodes to
direct frames to their destinations with minimal processing .
X.25 has a fixed bandwidth available. It uses or wastes portions of its bandwidth as the load
dictates. Frame relay can dynamically allocate bandwidth during call setup negotiation at both
the physical and logical channel level.
ATM is a cell-switching and multiplexing technology that combines the benefits of circuit
switching (guaranteed capacity and constant transmission delay) with those of packet switching
(flexibility and efficiency for intermittent traffic). It provides scalable bandwidth from a few
megabits per second (Mbps) to many gigabits per second (Gbps). Because of its asynchronous
nature, ATM is more efficient than synchronous technologies, such as time-division
multiplexing (TDM).
With TDM, each user is assigned to a time slot, and no other station can send in that time slot.
If a station has much data to send, it can send only when its time slot comes up, even if all
other time slots are empty. However, if a station has nothing to transmit when its time slot
comes up, the time slot is sent empty and is wasted. Because ATM is asynchronous, time slots
are available on demand with information identifying the source of the transmission contained
in the header of each ATM cell.
ATM transfers information in fixed-size units called cells. Each cell consists of 53
octets, or bytes. The first 5 bytes contain cell-header information, and the remaining 48 contain
the payload (user information). Small, fixed-length cells are well suited to transferring voice
and video traffic because such traffic is intolerant of delays that result from having to wait for a
large data packet to download, among other things. The figure illustrates the basic format of an
ATM cell. Figure: An ATM Cell Consists of a Header and Payload Data
ATM is similar to cell relay, and to packet switching using X.25 and frame relay. Like packet
switching and frame relay, ATM involves the transfer of data in discrete pieces. Also, like
packet switching and frame relay, ATM allows multiple logical connections to be multiplexed
over a single physical interface. In the case of ATM, the information flow on each logical
connection is organised into fixed-size packets, called cells. ATM is a streamlined protocol
with minimal error and flow control capabilities; this reduces the overhead of processing ATM
cells and reduces the number of overhead bits required with each cell, thus enabling ATM to
operate at high data rates. The use of fixed-size cells simplifies the processing required at each
ATM node, again supporting the use of ATM at high data rates.
The ATM architecture uses a logical model to describe the functionality that it supports. ATM
functionality corresponds to the physical layer and part of the data link layer of the OSI
reference model. The protocol reference model makes reference to three separate planes:
User plane: provides for user information transfer, along with associated controls (e.g., flow
control, error control).
Control plane: performs call control and connection control functions.
Management plane: includes plane management, which performs management functions
related to the system as a whole and provides coordination between all the planes, and layer
management, which performs management functions relating to resources and parameters
residing in its protocol entities.
The ATM reference model is composed of the following ATM layers:
Physical layer: Analogous to the physical layer of the OSI reference model, the ATM
physical layer manages the medium-dependent transmission.
ATM layer: Combined with the ATM adaptation layer, the ATM layer is roughly
analogous to the data link layer of the OSI reference model. The ATM layer is responsible for
the simultaneous sharing of virtual circuits over a physical link (cell multiplexing) and passing
cells through the ATM network (cell relay). To do this, it uses the VPI and VCI information in
the header of each ATM cell.
ATM adaptation layer (AAL): Combined with the ATM layer, the AAL is roughly
analogous to the data link layer of the OSI model. The AAL is responsible for isolating
higher-layer protocols from the details of the ATM processes. The adaptation layer prepares
user data for conversion into cells and segments the data into 48-byte cell payloads.
Finally, the higher layers residing above the AAL accept user data, arrange it into packets, and
hand it to the AAL. The figure illustrates the ATM reference model.
1.7 LOGICAL CONNECTION
Virtual path connection (VPC)
Bundle of VCCs with the same end points
Network traffic management
Routing
Quality of service
Switched and semi-permanent channel connections
Call sequence integrity
Traffic parameter negotiation and usage monitoring
VPC only
Virtual channel identifier restriction within VPC
Semi-permanent
Customer controlled
Network controlled
An ATM cell consists of a 5 byte header and a 48 byte payload. The payload size of 48 bytes
was a compromise between the needs of voice telephony and packet networks, obtained by a
simple averaging of the US proposal of 64 bytes and European proposal of
32, said by some to be motivated by a European desire not to need echo-cancellers on national
trunks.
ATM defines two different cell formats: NNI (Network-network interface) and UNI (User-
network interface). Most ATM links use UNI cell format.
The PT field is used to designate various special kinds of cells for Operation and Management
(OAM) purposes, and to delineate packet boundaries in some AALs.
Several of ATM's link protocols use the HEC field to drive a CRC-Based Framing algorithm,
which allows the position of the ATM cells to be found with no overhead required beyond
what is otherwise needed for header protection. The 8-bit CRC is used to correct single-bit
header errors and detect multi-bit header errors. When multi-bit header errors are detected, the
current and subsequent cells are dropped until a cell with no header errors is found.
In a UNI cell the GFC field is reserved for a local flow control/submultiplexing system
between users. This was intended to allow several terminals to share a single network
connection, in the same way that two ISDN phones can share a single basic rate ISDN
connection. All four GFC bits must be zero by default. The NNI cell format is almost identical
to the UNI format, except that the 4-bit GFC field is re-allocated to the VPI field, extending the
VPI to 12 bits. Thus, a single NNI ATM interconnection is capable of addressing almost 2^12
VPs of up to almost 2^16 VCs each (in practice some of the VP and VC numbers are reserved).
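As an illustration of the header layouts just described, the following sketch unpacks the fields of a 5-byte ATM cell header in both UNI and NNI formats (the function name and returned dictionary layout are this sketch's own, not from any standard API):

```python
def parse_atm_header(header: bytes, nni: bool = False) -> dict:
    """Parse a 5-byte ATM cell header (UNI or NNI format)."""
    if len(header) != 5:
        raise ValueError("ATM header is exactly 5 bytes")
    b = int.from_bytes(header[:4], "big")  # first 32 bits of the header
    fields = {}
    if nni:
        fields["vpi"] = (b >> 20) & 0xFFF  # NNI: GFC bits re-allocated, 12-bit VPI
    else:
        fields["gfc"] = (b >> 28) & 0xF    # UNI: 4-bit generic flow control
        fields["vpi"] = (b >> 20) & 0xFF   # 8-bit virtual path identifier
    fields["vci"] = (b >> 4) & 0xFFFF      # 16-bit virtual channel identifier
    fields["pt"] = (b >> 1) & 0x7          # 3-bit payload type
    fields["clp"] = b & 0x1                # cell loss priority bit
    fields["hec"] = header[4]              # 8-bit header error control
    return fields
```

For example, a UNI header carrying VPI 1 and VCI 5 decodes to those values, while the same first four octets interpreted as NNI fold the GFC bits into the VPI.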
Control traffic flow at user to network interface (UNI) to alleviate short term overload
Two sets of procedures
Uncontrolled transmission
Controlled transmission
Every connection either subject to flow control or not
Subject to flow control
May be one group (A) default
May be two groups (A and B)
Flow control is from subscriber to network
Controlled by network side
Terminal equipment (TE) initializes two variables
TRANSMIT flag to 1
GO_CNTR (credit counter) to 0
If TRANSMIT=1 cells on uncontrolled connection may be sent any time
If TRANSMIT=0 no cells may be sent (on controlled or uncontrolled connections)
If HALT received, TRANSMIT set to 0 and remains 0 until NO_HALT received
If TRANSMIT=1 and no cell to transmit on any uncontrolled connection:
If GO_CNTR>0, TE may send cell on controlled connection
Cell marked as being on controlled connection
GO_CNTR decremented
If GO_CNTR=0, TE may not send on controlled connection
TE sets GO_CNTR to GO_VALUE upon receiving SET signal
Null signal has no effect
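The TRANSMIT/GO_CNTR procedure above can be sketched as a small state machine (the class and method names are illustrative, not from the standard):

```python
class GfcTerminal:
    """Sketch of the terminal-equipment side of GFC controlled transmission."""

    def __init__(self, go_value: int):
        self.transmit = 1       # TRANSMIT flag initialised to 1
        self.go_cntr = 0        # GO_CNTR credit counter initialised to 0
        self.go_value = go_value

    def on_signal(self, signal: str):
        if signal == "HALT":
            self.transmit = 0   # no cells may be sent on any connection
        elif signal == "NO_HALT":
            self.transmit = 1
        elif signal == "SET":
            self.go_cntr = self.go_value  # replenish credits to GO_VALUE
        # a null signal has no effect

    def can_send_uncontrolled(self) -> bool:
        return self.transmit == 1

    def send_controlled(self) -> bool:
        """Try to send one cell on a controlled connection."""
        if self.transmit == 1 and self.go_cntr > 0:
            self.go_cntr -= 1   # each controlled cell consumes one credit
            return True
        return False
```

With GO_CNTR at 0 the terminal cannot send on a controlled connection until a SET signal arrives; a HALT blocks both controlled and uncontrolled transmission.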
USE OF HALT
In the initial condition, the receiver's default mode is single-bit error correction.
After a cell is received, the HEC calculation and comparison are performed.
If no error is detected, the receiver remains in error-correction mode.
If an error is detected, the receiver checks whether it is a single-bit or multi-bit error,
and the mode is changed to detection mode.
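A minimal sketch of the HEC computation itself, assuming the standard ATM generator polynomial x^8 + x^2 + x + 1 and the 0x55 coset XOR specified in ITU-T I.432:

```python
def atm_hec(header4: bytes) -> int:
    """Compute the ATM HEC octet: CRC-8 with polynomial x^8 + x^2 + x + 1
    over the first four header octets, XORed with 0x55 (ITU-T I.432 coset)."""
    crc = 0
    for byte in header4:
        crc ^= byte
        for _ in range(8):  # process one bit at a time, MSB first
            if crc & 0x80:
                crc = ((crc << 1) ^ 0x07) & 0xFF  # 0x07 encodes x^2 + x + 1
            else:
                crc = (crc << 1) & 0xFF
    return crc ^ 0x55
```

The receiver recomputes this value over the received header and compares it with the HEC octet; a single-bit mismatch pattern can be corrected, a multi-bit one only detected.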
622.08Mbps
155.52Mbps
51.84Mbps
25.6Mbps
Cell Based physical layer
SDH based physical layer
a. e.g., TCP-based traffic.
Cells forwarded on a FIFO basis.
Best-effort service: no initial commitment is made to a UBR source, and no
feedback concerning congestion is provided.
An application using ABR specifies a peak cell rate (PCR) and a minimum cell rate
(MCR).
Resources are allocated to give at least the MCR.
Spare capacity is shared among all ABR sources.
e.g. LAN interconnection.
Optimize handling of frame based traffic passing from LAN through router to ATM
backbone.
Used by enterprise, carrier and ISP networks.
Consolidation and extension of IP over WAN.
ABR difficult to implement between routers over ATM network.
GFR better alternative for traffic originating on Ethernet
a. Network is aware of frame/packet boundaries.
b. When congested, all cells of a frame are discarded.
c. User is guaranteed a minimum capacity.
d. Additional frames are carried if the network is not congested.
Convergence sublayer (CS)
Support for specific applications
AAL user attaches at SAP
Segmentation and re-assembly sublayer (SAR)
Packages and unpacks info received from CS into cells
Four types
Type 1
Type 2
Type 3/4
Type 5
AAL TYPE 1
AAL TYPE 2
AAL TYPE 3/4
AAL TYPE 5
Classical Ethernet
Bus topology LAN
10 Mbps
CSMA/CD medium access control protocol
2 problems:
A transmission from any station can be received by all stations
How to regulate transmission
Solution to First Problem
Data transmitted in blocks called frames:
User data
Frame header containing unique address of destination station
1.12 CSMA/CD
Carrier Sense Multiple Access with Collision Detection
If the medium is idle, transmit.
If the medium is busy, continue to listen until the channel is idle, then transmit
immediately.
If a collision is detected during transmission, immediately cease transmitting.
After a collision, wait a random amount of time, then attempt to transmit again (repeat from
step 1).
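The four rules above can be sketched in Python. The three callbacks standing in for the physical layer (medium_idle, detect_collision, transmit) are hypothetical placeholders, and the backoff wait itself is not modelled:

```python
import random

def csma_cd_send(medium_idle, detect_collision, transmit, max_attempts=16):
    """Sketch of the CSMA/CD access rules for one frame."""
    for attempt in range(max_attempts):
        while not medium_idle():      # rules 1-2: listen until the channel is idle
            pass
        transmit()                    # then transmit immediately
        if not detect_collision():    # no collision detected: frame delivered
            return True
        # rule 3-4: cease transmitting, then wait a random number of slot
        # times chosen by truncated binary exponential backoff before retrying
        slots = random.randint(0, 2 ** min(attempt + 1, 10) - 1)
        _ = slots  # the actual slot-time wait is not modelled in this sketch
    return False                      # give up after max_attempts collisions
```

On an idle, collision-free medium the frame goes out on the first attempt.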
Hub
Transmission from a station received by central hub and retransmitted on all outgoing lines
Only one transmission at a time
Bridge
Frame handling done in software
Analyze and forward one frame at a time
Store-and-forward
Layer 2 Switch
Frame handling done in hardware
Multiple data paths and can handle multiple frames at a time
Can do cut-through
Incoming frame switched to one outgoing line
Many transmissions at same time
Layer 2 Switches
Flat address space
Broadcast storm
Only one path between any 2 devices
Distances up to 10 km
Small connectors
High capacity
Greater connectivity than existing multidrop channels
Broad availability
Support for multiple cost/performance levels
Support for multiple existing interface command sets
Fibre Channel Protocol Architecture
FC-0 Physical Media
FC-1 Transmission Protocol
FC-2 Framing Protocol
FC-3 Common Services
FC-4 Mapping
Access Points: perform the wireless-to-wired bridging function between networks
Wireless medium: the means of moving frames from station to station
Station: computing devices with wireless network interfaces
Distribution System: backbone network used to relay frames between access points
On wireless LAN, any station within radio range of other devices can transmit
Any station within radio range can receive
Authentication: Used to establish identity of stations to each other
Wired LANs assume access to physical connection conveys authority to connect
to LAN
Not valid assumption for wireless LANs
Connectivity achieved by having properly tuned antenna
Authentication service used to establish station identity
802.11 supports several authentication schemes
Range from relatively insecure handshaking to public-key encryption schemes
802.11 requires mutually acceptable, successful authentication before
association
MAC layer covers three functional areas
Reliable data delivery
Access control
Security (beyond our scope)
802.11 physical and MAC layers are subject to
unreliability
Noise, interference, and other propagation effects result in loss of
frames
Even with error-correction codes, frames may not successfully be
received
Can be dealt with at a higher layer, such as TCP
However, retransmission timers at higher layers typically order of
seconds
More efficient to deal with errors at the MAC level
If no ACK within short period of time, retransmit
802.11 includes frame exchange protocol
Station receiving frame returns acknowledgment (ACK) frame
Exchange treated as atomic unit
Not interrupted by any other station
UNIT-02
CONGESTION AND TRAFFIC MANAGEMENT
A/B/S/K/N/Disc
where:
A is the interarrival time distribution
B is the service time distribution
S is the number of servers
K is the system capacity
N is the calling population
Disc is the service discipline assumed
Some standard notation for distributions (A or B) are:
M for a Markovian (exponential) distribution
Ek for an Erlang distribution with k phases
D for Deterministic (constant)
G for General distribution
PH for a Phase-type distribution
Models
Construction and analysis
1. Identify the parameters of the system, such as the arrival rate, service time, Queue
capacity, and perhaps draw a diagram of the system.
2. Identify the system states. (A state will generally represent the integer number of
customers, people, jobs, calls, messages, etc. in the system and may or may not be
limited.)
3. Draw a state transition diagram that represents the possible system states and
identify the rates to enter and leave each state. This diagram is a representation of a
Markov chain.
4. In the steady state represented by the state transition diagram, there is a balanced
flow between states, so the probabilities of being in adjacent states can be related
mathematically in terms of the arrival and service rates and state probabilities.
5. Express all the state probabilities in terms of the empty state probability, using the
inter-state transition relationships.
6. Determine the empty state probability by using the fact that all state probabilities
always sum to 1.
Whereas specific problems that have small finite state models are often able to be
analysed numerically, analysis of more general models, using calculus, yields useful
formulae that can be applied to whole classes of problems.
M/M/1/∞/∞ represents a single server that has unlimited queue capacity and infinite
calling population, both arrivals and service are Poisson (or random) processes,
meaning the statistical distribution of both the inter-arrival times and the service
times follow the exponential distribution. Because of the mathematical nature of the
exponential distribution, a number of quite simple relationships are able to be
derived for several performance measures based on knowing the arrival rate and
service rate.
This is fortunate because an M/M/1 queuing model can be used to approximate
many queuing situations.
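A minimal sketch of those simple M/M/1 relationships, taking the arrival rate λ and service rate μ as inputs (the function and key names are this sketch's own):

```python
def mm1_metrics(arrival_rate: float, service_rate: float) -> dict:
    """Standard M/M/1 steady-state measures; stable only when lambda < mu."""
    lam, mu = arrival_rate, service_rate
    if lam >= mu:
        raise ValueError("M/M/1 is stable only for arrival rate < service rate")
    rho = lam / mu                               # server utilisation
    return {
        "utilisation": rho,
        "mean_in_system": rho / (1 - rho),       # N = rho / (1 - rho)
        "mean_in_queue": rho ** 2 / (1 - rho),   # Nq = rho^2 / (1 - rho)
        "mean_time_in_system": 1 / (mu - lam),   # T = 1 / (mu - lam)
        "mean_wait": rho / (mu - lam),           # Tq = rho / (mu - lam)
    }
```

For instance, with one arrival per second against a service rate of two per second, utilisation is 0.5 and a customer spends one second in the system on average.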
M/G/1/∞/∞ represents a single server that has unlimited queue capacity and infinite
calling population, while the arrival is still Poisson process, meaning the statistical
distribution of the inter-arrival times still follow the exponential distribution, the
distribution of the service time does not. The distribution of the service time may
follow any general statistical distribution, not just exponential. Relationships are still
able to be derived for a (limited) number of performance measures if one knows the
arrival rate and the mean and variance of the service rate. However, the derivations are
generally more complex.
A number of special cases of M/G/1 provide specific solutions that give broad
insights into the best model to choose for specific queueing situations because they
permit the comparison of those solutions to the performance of an M/M/1 model.
How much time do customers spend in the restaurant? Do customers typically leave
the restaurant in a fixed amount of time? Does the customer service time vary with
the type of customer?
How many tables does the restaurant have for servicing customers?
The above three points correspond to the most important characteristics of a
queueing system. They are explained below:
Arrival Process: The probability density distribution that determines the customer
arrivals in the system. In a messaging system, this refers to the message arrival
probability distribution.
Service Process: The probability density distribution that determines the customer
service times in the system. In a messaging system, this refers to the message
transmission time distribution. Since message transmission is directly proportional
to the length of the message, this parameter indirectly refers to the message length
distribution.
Number of Servers: Number of servers available to service the customers. In a
messaging system, this refers to the number of links between the source and
destination nodes.
Based on the above characteristics, queueing systems can be classified by the
following convention:
A/S/n
Where A is the arrival process, S is the service process and n is the number of
servers. A and S can be any of the following:
M (Markov) Exponential probability density
D (Deterministic) All customers have the same value
G (General) Any arbitrary probability distribution
Examples of queueing systems that can be defined with this convention are:
M/M/1: This is the simplest queueing system to analyze. Here the arrival and
service times are negative-exponentially distributed (Poisson process). The system
consists of only one server. This queueing system can be applied to a wide variety of
problems as any system with a very large number of independent customers can be
approximated as a Poisson process. Using a Poisson process for service time
however is not applicable in many applications and is only a crude approximation.
Refer to M/M/1 Queuing System for details.
M/D/n: Here the arrival process is Poisson and the service time distribution is
deterministic. The system has n servers (e.g., a ticket booking counter with n
cashiers, where the service time can be assumed to be the same for all customers).
G/G/n: This is the most general queueing system where the arrival and service time
processes are both arbitrary. The system has n servers. No analytical solution is
known for this queueing system.
Markovian arrival processes
In queuing theory, Markovian arrival processes are used to model the arrival of
customers to a queue.
Some of the most common include the Poisson process, the Markovian arrival process
and the batch Markovian arrival process.
A Markovian arrival process consists of two processes: a continuous-time Markov
process j(t), generated by a generator or rate matrix Q, and a counting process N(t)
with state space ℕ (the set of all natural numbers). N(t) increases every time there is
a marked transition in j(t).
The Poisson arrival process, or Poisson process, counts the number of arrivals, each
of which has an exponentially distributed time between arrivals. In the most general
case this can be represented by a rate matrix.
Markov arrival process
The Markov arrival process (MAP) is a generalisation of the Poisson process obtained
by allowing non-exponentially distributed sojourn times between arrivals. The
homogeneous case is characterised by a rate matrix.
Little's law
In queueing theory, Little's result, theorem, lemma, or law says:
The average number of customers in a stable system (over some time interval), N, is
equal to their average arrival rate, λ, multiplied by their average time in the system,
T, or: N = λT.
This applies to any subsystem as well as to the system as a whole. The only requirement
is that the system is stable -- it can't be in some transition state such as just starting
up or just shutting down.
Let A(t) be the number of arrivals to some system in the interval [0, t]. Let D(t) be the
number of departures from the same system in the interval [0, t]. Both A(t) and D(t) are
integer-valued increasing functions by their definition. Let Tt be the mean time spent
in the system (during the interval [0, t]) by all the customers who were in the system
during the interval [0, t]. Let Nt be the mean number of customers in the system over
the duration of the interval [0, t].
If the following limits exist as t → ∞: λ = lim A(t)/t, N = lim Nt, and T = lim Tt, then
Little's law states that N = λT.
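Little's law can be checked empirically on a small trace of arrival and departure times. The function below (illustrative, not from the source) compares the time-averaged number in the system with the product of the arrival rate and the mean time in the system:

```python
def littles_law_check(arrival_times, departure_times):
    """Return (N, lam * T) computed over the window [0, max departure time];
    by Little's law the two values should agree for a stable trace."""
    t_end = max(departure_times)
    lam = len(arrival_times) / t_end                      # average arrival rate
    T = sum(d - a for a, d in zip(arrival_times, departure_times)) \
        / len(arrival_times)                              # mean time in system
    # integrate the number in the system over time from the event sequence
    # (at a tie, the -1 departure event sorts before the +1 arrival event)
    events = sorted([(a, +1) for a in arrival_times] +
                    [(d, -1) for d in departure_times])
    area, n, last = 0.0, 0, 0.0
    for time, delta in events:
        area += n * (time - last)
        n, last = n + delta, time
    N = area / t_end                                      # time-averaged occupancy
    return N, lam * T
```

For a trace where three customers arrive at t = 0, 1, 2 and each stays two time units, both sides come out to 1.5.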
Ideal Performance
Backpressure
Request from destination to source to reduce rate
Useful only on a logical connection basis
Requires hop-by-hop flow control mechanism
Policing
Measuring and restricting packets as they enter the network
Choke packet
Specific message back to source
E.g., ICMP Source Quench
Implicit congestion signaling
Network integrity is not sacrificed because flow control can be left to higher-layer protocols.
Frame Relay implements two congestion-notification mechanisms:
Forward-explicit congestion notification (FECN)
Backward-explicit congestion notification (BECN)
FECN and BECN are each controlled by a single bit contained in the Frame Relay
frame header. The Frame Relay frame header also contains a Discard Eligibility
(DE) bit, which is used to identify less important traffic that can be dropped during
periods of congestion.
The FECN bit is part of the Address field in the Frame Relay frame header. The
FECN mechanism is initiated when a DTE device sends Frame Relay frames into
the network. If the network is congested, DCE devices (switches) set the value of the
frames' FECN bit to 1. When the frames reach the destination DTE device, the
Address field (with the FECN bit set) indicates that the frame experienced
congestion in the path from source to destination. The DTE device can relay this
information to a higher-layer protocol for processing. Depending on the
implementation, flow control may be initiated, or the indication may be ignored.
The BECN bit is part of the Address field in the Frame Relay frame header. DCE
devices set the value of the BECN bit to 1 in frames traveling in the opposite
direction of frames with their FECN bit set. This informs the receiving DTE device
that a particular path through the network is congested. The DTE device then can
relay this information to a higher-layer protocol for processing. Depending on the
implementation, flow-control may be initiated, or the indication may be ignored.
The Discard Eligibility (DE) bit is used to indicate that a frame has lower
importance than other frames. The DE bit is part of the Address field in the Frame
Relay frame header.
DTE devices can set the value of the DE bit of a frame to 1 to indicate that the frame
has lower importance than other frames. When the network becomes congested,
DCE devices will discard frames with the DE bit set before discarding those that do
not. This reduces the likelihood of critical data being dropped by Frame Relay DCE
devices during periods of congestion.
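A small sketch decoding the DLCI, FECN, BECN and DE bits from the two-octet Frame Relay address field described above (the function name and returned keys are this sketch's own):

```python
def parse_fr_address(addr: bytes) -> dict:
    """Decode a two-octet Frame Relay address field."""
    hi, lo = addr[0], addr[1]
    return {
        # 10-bit DLCI: high 6 bits in octet 1 (bits 7-2), low 4 in octet 2 (bits 7-4)
        "dlci": ((hi >> 2) << 4) | (lo >> 4),
        "fecn": (lo >> 3) & 1,  # forward-explicit congestion notification
        "becn": (lo >> 2) & 1,  # backward-explicit congestion notification
        "de":   (lo >> 1) & 1,  # discard eligibility
    }
```

A switch on a congested path sets FECN in frames heading toward the destination and BECN in frames heading back toward the source, and both show up as single bits in this field.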
Fairness
Various flows should suffer equally.
Last-in-first-discarded may not be fair
Quality of Service (QoS)
Flows treated differently, based on need
Voice, video: delay sensitive, loss insensitive
File transfer, mail: delay insensitive, loss sensitive
Interactive computing: delay and loss sensitive
Reservations
Policing: excess traffic discarded or handled on best-effort basis
UNIT-03
TCP AND CONGESTION CONTROL
Credit Policy:
Receiver needs a policy for how much credit to give sender
Conservative approach: grant credit up to limit of available buffer space
May limit throughput in long-delay situations
Optimistic approach: grant credit based on expectation of freeing space before data
arrives.
Normalized Throughput:
S = 1 when W > RD/4
S = 4W/(RD) when W < RD/4
where W is the window size in octets, R the data rate, and D the end-to-end delay.
Complicating Factors:
Multiple TCP connections are multiplexed over same network interface, reducing R
and efficiency
For multi-hop connections, D is the sum of delays across each network plus delays at
each router
If source data rate R exceeds data rate on one of the hops, that hop will be a bottleneck
Lost segments are retransmitted, reducing throughput. Impact depends on
retransmission policy.
Retransmission Strategy:
TCP relies exclusively on positive acknowledgements and retransmission on
acknowledgement timeout
There is no explicit negative acknowledgement
Retransmission required when:
1. Segment arrives damaged, as indicated by checksum error, causing
receiver to discard segment
2. Segment fails to arrive.
Timers:
A timer is associated with each segment as it is sent
If timer expires before segment acknowledged, sender must retransmit
Key Design Issue:
value of retransmission timer
Too small: many unnecessary retransmissions, wasting network bandwidth
Too large: delay in handling lost segment.
Implementation Policy:
Send
Deliver
Accept
In-order
In-window
Retransmit
First-only
Batch
individual
Acknowledge
immediate
cumulative.
The rate at which a TCP entity can transmit is determined by rate of incoming ACKs to
previous segments with new credit
Rate of Ack arrival determined by round-trip path between source and destination
Bottleneck may be destination or internet
Sender cannot tell which
Only the internet bottleneck can be due to congestion
RTT Variance Estimation
(Jacobson's Algorithm)
3 sources of high variance in RTT
If the data rate is relatively low, then the transmission delay will be relatively large, with
larger variance due to variance in packet size
Load may change abruptly due to other sources
Peer may not acknowledge segments immediately
Jacobson's Algorithm
g = 0.125
h = 0.25
f = 2 or f = 4 (most current implementations use f = 4)
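With those coefficients, Jacobson's algorithm can be sketched as follows (the closure-based structure is illustrative; the update equations are the standard SRTT/SDEV smoothing):

```python
def make_rto_estimator(g=0.125, h=0.25, f=4):
    """Return a function that folds in one RTT sample and yields the
    updated retransmission timeout RTO = SRTT + f * SDEV."""
    srtt, sdev = None, 0.0
    def update(rtt):
        nonlocal srtt, sdev
        if srtt is None:
            srtt = rtt                       # first sample seeds the estimate
        else:
            serr = rtt - srtt                # error of the new sample
            srtt += g * serr                 # smoothed round-trip time
            sdev += h * (abs(serr) - sdev)   # smoothed mean deviation
        return srtt + f * sdev
    return update
```

A steady stream of identical samples keeps the RTO pinned at the RTT itself, while a sudden jump inflates the deviation term and, with f = 4, the RTO along with it. (On an actual retransmission, exponential backoff would then multiply the RTO by q = 2 instead of applying this update.)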
Increase RTO each time the same segment is retransmitted (backoff process)
Multiply RTO by constant:
RTO = q RTO
q = 2 is called binary exponential backoff
Which Round-trip Samples?
If an ack is received for retransmitted segment, there are 2 possibilities:
Ack is for first transmission
Ack is for second transmission
TCP source cannot distinguish 2 cases
No valid way to calculate RTT:
From first transmission to ack, or
From second transmission to ack?
Slow start
Dynamic window sizing on congestion
Fast retransmit
Fast recovery
Limited transmit
Slow Start
Fast Retransmit
RTO is generally noticeably longer than actual RTT
If a segment is lost, TCP may be slow to retransmit
TCP rule: if a segment is received out of order, an ack must be issued immediately for the
last in-order segment
Fast Retransmit rule: if 4 acks received for same segment, highly likely it was lost, so
retransmit immediately, rather than waiting for timeout
Fast Recovery
When TCP retransmits a segment using Fast Retransmit, a segment was assumed lost
Congestion avoidance measures are appropriate at this point
E.g., slow-start/congestion avoidance procedure
This may be unnecessarily conservative since multiple acks indicate segments are getting
through
Fast Recovery: retransmit lost segment, cut cwnd in half, proceed with linear increase of
cwnd
This avoids initial exponential slow-start
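The slow start, congestion avoidance, fast retransmit and fast recovery behaviour described above can be sketched as a single step function. This is a simplified model, with cwnd measured in segments and the three event names chosen for this sketch:

```python
def next_cwnd(cwnd, ssthresh, event):
    """One step of TCP congestion control (simplified).
    event: 'ack' (new data acked), 'timeout', or 'triple_dup' (fast retransmit)."""
    if event == "ack":
        if cwnd < ssthresh:
            cwnd += 1                # slow start: one segment per ACK (exponential per RTT)
        else:
            cwnd += 1.0 / cwnd       # congestion avoidance: roughly linear per RTT
    elif event == "timeout":
        ssthresh = max(cwnd / 2, 2)  # remember half the window,
        cwnd = 1                     # then restart from slow start
    elif event == "triple_dup":
        ssthresh = max(cwnd / 2, 2)  # fast recovery: cut cwnd in half and
        cwnd = ssthresh              # continue linearly, skipping slow start
    return cwnd, ssthresh
```

Comparing the two loss events shows the point made above: a timeout collapses cwnd to 1, while fast recovery keeps half the window because the duplicate ACKs prove segments are still getting through.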
Limited Transmit
If congestion window at sender is small, fast retransmit may not get triggered, e.g., cwnd =
3
Under what circumstances does sender have small congestion window?
Is the problem common?
If the problem is common, why not reduce number of duplicate acks needed to trigger
retransmit?
Destination advertised window allows transmission of segment
Amount of outstanding data after sending is less than or equal to cwnd + 2
How best to manage TCP's segment size, window management and congestion control
at the same time as ATM's quality of service and traffic control policies?
TCP may operate end-to-end over one ATM network, or there may be multiple ATM
LANs or WANs with non-ATM networks
Observations
If a single cell is dropped, other cells in the same IP datagram are unusable, yet ATM
network forwards these useless cells to destination
Smaller buffers increase the probability of dropped cells
Larger segment sizes increase the number of useless cells transmitted if a single cell is
dropped
Good performance of TCP over UBR can be achieved with minor adjustments to switch
mechanisms
This reduces the incentive to use the more complex and more expensive ABR service
Performance and fairness of ABR quite sensitive to some ABR parameter settings
Overall, ABR does not provide a significant performance improvement over the simpler
and less expensive UBR-EPD or UBR-EPD-FBA
Introduction
Control needed to prevent switch buffer overflow
High speed and small cell size give rise to problems different from those in other networks
Limited number of overhead bits
ITU-T specified restricted initial set
I.371
ATM Forum Traffic Management Specification 4.1
Overview
Congestion problem
Framework adopted by ITU-T and ATM forum
Control schemes for delay sensitive traffic
Voice & video
Not suited to bursty traffic
Traffic control
Congestion control
Bursty traffic
Available Bit Rate (ABR)
Guaranteed Frame Rate (GFR)
3.9 REQUIREMENTS FOR ATM TRAFFIC AND CONGESTION CONTROL
Most packet switched and frame relay networks carry non-real-time bursty data
No need to replicate timing at exit node
Simple statistical multiplexing
User Network Interface capacity slightly greater than average of channels
Congestion control tools from these technologies do not work in ATM
Transfer time depends on the number of intermediate switches, switching time and
propagation delay. Assuming no switching delay and speed-of-light propagation, the
round-trip delay is 48 × 10^-3 sec across the USA
A dropped cell notified by return message will arrive after the source has transmitted N
further cells
N = (48 × 10^-3 seconds)/(2.8 × 10^-6 seconds per cell)
  ≈ 1.7 × 10^4 cells = 7.2 × 10^6 bits
i.e. over 7 Mbits
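The arithmetic above, reproduced; the ~2.8 µs per cell figure corresponds to roughly a 155 Mbps link, an assumption consistent with the text's numbers:

```python
# Feedback delay in cells: how much a source sends before a dropped-cell
# notification can possibly arrive back.
round_trip = 48e-3            # seconds, coast-to-coast round trip
cell_time = 2.8e-6            # seconds to transmit one 53-byte cell
cells = round_trip / cell_time
bits = cells * 53 * 8         # each cell is 53 bytes = 424 bits
print(round(cells))           # 17143 cells (~1.7e4)
print(round(bits))            # 7268571 bits (~7.2e6, i.e. over 7 Mbits)
```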
Total load accepted by network must be controlled
Minimum cell rate
Min commitment requested of network
Can be zero
Used with ABR and GFR
ABR & GFR provide rapid access to spare network capacity up to PCR
PCR − MCR represents the elastic component of the data flow
Shared among ABR and GFR flows
Maximum frame size
Max number of cells in frame that can be carried over GFR connection
Only relevant in GFR
Network aware of and accommodates QoS of VCCs
CDVT
Requested conformance definition
QoS parameter requested and acceptable value
Network accepts connection only if it can commit resources to support requests
UPC Actions
Compliant cells pass, non-compliant cells are discarded
If no additional resources are allocated to CLP=1 traffic, only the CLP=0 cell flow is
policed
If two-level cell loss priority is used, a cell with:
CLP=0 that conforms passes
CLP=0 that is non-compliant for CLP=0 traffic but compliant for CLP=0+1 is tagged
and passes
CLP=0 that is non-compliant for both CLP=0 and CLP=0+1 traffic is discarded
CLP=1 that is compliant for CLP=0+1 passes
CLP=1 that is non-compliant for CLP=0+1 is discarded
QoS for CBR, VBR based on traffic contract and UPC described previously
No congestion feedback to source
Open-loop control
Not suited to non-real-time applications
File transfer, web access, RPC, distributed file systems
No well defined traffic characteristics except PCR
PCR not enough to allocate resources
Use best efforts or closed-loop control
Best Efforts
Closed-Loop Control
Characteristics of ABR
Feedback Mechanisms
If CI = 1:
Reduce ACR by an amount proportional to the current ACR, but not below MCR
Else if NI = 0:
Increase ACR by an amount proportional to PCR, but not above PCR
If ACR > ER, set ACR ← max[ER, MCR]
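The feedback rules can be sketched as a function; the RIF and RDF rate factors are assumed illustrative defaults, not given in the text:

```python
def adjust_acr(acr, ci, ni, er, pcr, mcr, rif=1/16, rdf=1/16):
    """Apply BRM feedback to the allowed cell rate (ACR)."""
    if ci == 1:
        # reduce by an amount proportional to current ACR, not below MCR
        acr = max(acr - acr * rdf, mcr)
    elif ni == 0:
        # increase by an amount proportional to PCR, not above PCR
        acr = min(acr + pcr * rif, pcr)
    if acr > er:
        acr = max(er, mcr)
    return acr

print(adjust_acr(100.0, ci=1, ni=0, er=1000.0, pcr=1000.0, mcr=10.0))  # 93.75
print(adjust_acr(100.0, ci=0, ni=0, er=1000.0, pcr=1000.0, mcr=10.0))  # 162.5
```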
Nrm preset usually 32
Each FRM is returned by destination as backwards RM (BRM) cell
FRM typically has CI=0, NI=0 or 1, and ER set to the desired transmission rate in the
range ICR ≤ ER ≤ PCR
Any field may be changed by switch or destination before return
ATM Switch Rate Control Feedback
EFCI marking
Explicit forward congestion indication
Causes destination to set CI bit in BRM
Relative rate marking
Switch directly sets CI or NI bit of RM
If set in FRM, remains set in BRM
Faster response by setting bit in passing BRM
Fastest by generating new BRM with bit set
Explicit rate marking
Switch reduces value of ER in FRM or BRM
3.14 RM CELL FORMAT
Set EFCI on forward data cells or CI or NI on FRM or BRM
Three approaches to deciding which connections to notify
Single FIFO queue
Multiple queues
Fair share notification
Multiple Queues
Separate queue for each VC or group of VCs
Separate threshold on each queue
Only connections with long queues get binary notifications
Fair
Badly behaved source does not affect other VCs
Delay and loss behaviour of individual VCs separated
Can have different QoS on different VCs
Fair Share
MACR ← (1 − α) × MACR + α × CCR, with α typically = 1/16
Bias to past values of CCR over current
Gives estimated average load passing through switch
If congestion, switch reduces each VC to no more than DPF × MACR
DPF = down pressure factor, typically 7/8
ER ← min[ER, DPF × MACR]
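A sketch of the averaging and down-pressure steps described above:

```python
def update_macr(macr, ccr, alpha=1/16):
    """Exponential average of CCR values seen in passing RM cells;
    the small alpha biases the estimate toward past values."""
    return (1 - alpha) * macr + alpha * ccr

def er_under_congestion(er, macr, dpf=7/8):
    """If congested, allow each VC no more than DPF * MACR."""
    return min(er, dpf * macr)

print(update_macr(64.0, 0.0))                    # 60.0
print(er_under_congestion(er=100.0, macr=80.0))  # 70.0
```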
Load Factor
Adjustments based on load factor
LF=Input rate/target rate
Input rate measured over fixed averaging interval
Target rate slightly below link bandwidth (85 to 90%)
LF>1 congestion threatened
VCs will have to reduce rate
If LF < 1: fairshare ← fairshare × min[ERU, 1 + (1 − LF) × Rup]
If LF > 1: fairshare ← fairshare × max[ERF, 1 − (LF − 1) × Rdn]
ERU>1, determines max increase
Rup between 0.025 and 0.1, slope parameter
Rdn, between 0.2 and 0.8, slope parameter
ERF typically 0.5, max decrease in allotment of fair share
If fairshare < ER value in RM cells, ER ← fairshare
Simpler than ERICA
Can show large rate oscillations if RIF (Rate increase factor) too high
Can lead to unfairness
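A sketch of the load-factor-driven fair share update; the parameter values are picked from the ranges quoted above, and the overload rule uses ERF as a floor on the decrease:

```python
def update_fairshare(fairshare, lf, eru=1.5, erf=0.5, rup=0.1, rdn=0.5):
    """CAPC-style fair share adjustment driven by the load factor LF
    (input rate / target rate)."""
    if lf < 1:
        # underload: gentle increase, capped by ERU
        fairshare *= min(eru, 1 + (1 - lf) * rup)
    else:
        # overload: decrease, floored by ERF
        fairshare *= max(erf, 1 - (lf - 1) * rdn)
    return fairshare

print(update_fairshare(100.0, lf=0.8))  # ~102.0 (underload: increase)
print(update_fairshare(100.0, lf=1.4))  # ~80.0  (overload: decrease)
```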
GFR Overview
User can reserve cell rate capacity for each VC
Application can send at min rate without loss
Network must recognise frames as well as cells
If congested, network discards entire frame
All cells of a frame have same CLP setting
CLP=0 guaranteed delivery, CLP=1 best efforts
Tagging identifies frames that conform to the contract and those that don't
CLP=1 is set for those that don't
Tagging may be done by a network element performing the conformance check, or by
the source to mark less important frames
Get lower QoS in buffer management and scheduling
Tagged cells can be discarded at ingress to ATM network or subsequent switch
Discarding is a policing function
Buffer Management
UNIT - 04
INTEGRATED AND DIFFERENTIATED SERVICES
INTRODUCTION
IPv4 header fields for precedence and type of service usually ignored
ATM is the only network designed to support TCP, UDP and real-time traffic, but may
require a new installation
Need to support Quality of Service (QoS) within TCP/IP
Add functionality to routers
Means of requesting QoS
Delay
E.g. stock trading
Jitter - Delay variation
More jitter requires a bigger buffer
E.g. teleconferencing requires reasonable upper bound
Packet loss
Flow
IP packet can be associated with a flow
Distinguishable stream of related IP packets
From single user activity
Requiring same QoS
E.g. one transport connection or one video stream
Unidirectional
Can be more than one recipient
Multicast
Membership of flow identified by source and destination IP address, port numbers,
protocol type
IPv6 header flow identifier can be used but is not necessarily equivalent to ISA flow
ISA Functions
Admission control
For QoS, reservation required for new flow
RSVP used
Routing algorithm
Base decision on QoS parameters
Queuing discipline
Take account of different flow requirements
Discard policy
Manage congestion
Meet QoS
Policing
Token Bucket
Many traffic sources can be defined by token bucket scheme
Provides concise description of load imposed by flow
Easy to determine resource requirements
Provides input parameters to policing function
Token Bucket Diagram
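The token bucket scheme above can be sketched as follows; byte-counted tokens and an explicitly passed clock are illustrative choices that keep the example deterministic:

```python
class TokenBucket:
    """Minimal token bucket: tokens accumulate at `rate` per second up
    to `capacity`; a packet of n bytes conforms if n tokens remain."""
    def __init__(self, rate, capacity):
        self.rate, self.capacity = rate, capacity
        self.tokens, self.last = capacity, 0.0

    def conforms(self, n, now):
        # replenish tokens for the elapsed time, up to the bucket size
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if n <= self.tokens:
            self.tokens -= n
            return True
        return False

tb = TokenBucket(rate=1000.0, capacity=1500.0)  # 1000 B/s, 1500 B burst
print(tb.conforms(1500, now=0.0))  # True: initial burst fits
print(tb.conforms(1000, now=0.0))  # False: bucket drained
print(tb.conforms(1000, now=1.0))  # True: 1 s refills 1000 tokens
```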
ISA SERVICES
Guaranteed Service
Assured capacity level or data rate
Specific upper bound on queuing delay through network
Must be added to propagation delay or latency to get total delay
Set high to accommodate rare long queue delays
No queuing losses
I.e. no buffer overflow
E.g. real-time playback of an incoming signal can use a delay buffer for the incoming
signal but will not tolerate packet loss
Controlled Load
Tightly approximates to best efforts under unloaded conditions
No upper bound on queuing delay
High percentage of packets do not experience delay over minimum transit delay
Propagation plus router processing with no queuing delay
Very high percentage delivered
Almost no queuing loss
Adaptive real time applications
Receiver measures jitter and sets playback point
Video can drop a frame or delay output slightly
Voice can adjust silence periods
FIFO and FQ
PROCESSOR SHARING
Multiple queues as in FQ
Send one bit from each queue per round
Longer packets no longer get an advantage
Can work out virtual (number of cycles) start and finish time for a given packet
However, we wish to send packets, not bits
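The virtual start/finish computation above can be sketched as follows; packet-by-packet fair queuing then transmits whole packets in order of increasing virtual finish time (the weight parameter anticipates weighted fair queuing):

```python
def finish_time(virtual_now, prev_finish, length, weight=1.0):
    """Bit-round fair queuing: a packet's virtual finish time is its
    virtual start (the later of current virtual time and the previous
    packet's finish on this queue) plus its length scaled by weight."""
    start = max(virtual_now, prev_finish)
    return start + length / weight

# A 500-bit packet arriving on an idle-ish queue at virtual time 10:
print(finish_time(virtual_now=10.0, prev_finish=8.0, length=500))  # 510.0
```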
FIFO v WFQ
Proactive Packet Discard
Congestion management by proactive packet discard
Before buffer full
Used on single FIFO queue or multiple queues for elastic traffic
E.g. Random Early Detection (RED)
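A minimal sketch of RED's drop decision; the thresholds and maximum probability are illustrative assumptions (real RED also maintains an exponentially weighted average queue length, omitted here):

```python
def red_drop_probability(avg_q, min_th, max_th, p_max):
    """RED sketch: no drops below min_th, forced drop above max_th,
    and a drop probability rising linearly in between."""
    if avg_q < min_th:
        return 0.0
    if avg_q >= max_th:
        return 1.0
    return p_max * (avg_q - min_th) / (max_th - min_th)

print(red_drop_probability(15, min_th=10, max_th=20, p_max=0.1))  # 0.05
```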
Characteristics of DS
Use IPv4 header Type of Service or IPv6 Traffic Class field
No change to IP
Service level agreement (SLA) established between provider (internet domain) and
customer prior to use of DS
DS mechanisms not needed in applications
Build in aggregation
All traffic with same DS field treated same
E.g. multiple voice connections
DS implemented in individual routers by queuing and forwarding based on DS field
State information on flows not saved by routers
Services
Provided within DS domain
Contiguous portion of Internet over which consistent set of DS policies administered
Typically under control of one administrative entity
Defined in SLA
Customer may be user organization or other DS domain
Packet class marked in DS field
Service provider configures forwarding policies in routers
Ongoing measure of performance provided for each class
DS domain expected to provide agreed service internally
If destination in another domain, DS domain attempts to forward packets through other
domains
Appropriate service level requested from each domain
SLA Parameters
Detailed service performance parameters
Throughput, drop probability, latency
Constraints on ingress and egress points
Indicate scope of service
Traffic profiles to be adhered to
Token bucket
Disposition of traffic in excess of profile
Example Services
Qualitative
A: Low latency
B: Low loss
Quantitative
C: 90% in-profile traffic delivered with no more than 50ms latency
D: 95% in-profile traffic delivered
Mixed
E: Twice bandwidth of F
F: Traffic with drop precedence X has higher delivery probability than that with
drop precedence Y
DS Field Detail
Leftmost 6 bits are DS codepoint
64 different classes available
3 pools
xxxxx0 : reserved for standards
000000 : default packet class
xxx000 : reserved for backwards compatibility with IPv4 TOS
xxxx11 : reserved for experimental or local use
xxxx01 : reserved for experimental or local use but may be allocated for future
standards if needed
Rightmost 2 bits unused
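The three codepoint pools above can be checked mechanically; a small sketch:

```python
def dscp_pool(codepoint):
    """Classify a 6-bit DS codepoint into its allocation pool."""
    if codepoint & 0b1 == 0:
        return "pool 1: standards (xxxxx0)"
    if codepoint & 0b11 == 0b11:
        return "pool 2: experimental/local (xxxx11)"
    return "pool 3: experimental/local, future standards (xxxx01)"

print(dscp_pool(0b000000))  # pool 1 (also the default packet class)
print(dscp_pool(0b101011))  # pool 2
print(dscp_pool(0b000101))  # pool 3
```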
Configuration Diagram
DS Traffic Conditioner
Explicit Allocation
Superior to best efforts
Does not require reservation of resources
Does not require detailed discrimination among flows
Users offered choice of number of classes
Monitored at boundary node
Marked 'in' or 'out' depending on whether the traffic matches the profile or not
Inside the network all traffic is treated as a single pool of packets, distinguished only as
in or out
Drop 'out' packets before 'in' packets if necessary
Different levels of service result because users differ in their number of 'in' packets
UNIT- 05
PROTOCOLS FOR QOS SUPPORT
INTRODUCTION
INCREASED DEMANDS
Need to incorporate bursty and stream traffic in TCP/IP architecture
Increase capacity
Faster links, switches, routers
Intelligent routing policies
End-to-end flow control
Multicasting
Quality of Service (QoS) capability
Transport protocol for streaming
Deal gracefully with changes in routes
Re-establish reservations
Control protocol overhead
Independent of routing protocol
RSVP Characteristics
Unicast and Multicast
Simplex
Unidirectional data flow
Separate reservations in two directions
Receiver initiated
Receiver knows which subset of source transmissions it wants
Maintain soft state in internet
Responsibility of end users
Providing different reservation styles
Users specify how reservations for groups are aggregated
Transparent operation through non-RSVP routers
Support IPv4 (ToS field) and IPv6 (Flow label field)
Flow Descriptor
Reservation Request
Flow spec
Desired QoS
Used to set parameters in a node's packet scheduler
Service class, Rspec (reserve), Tspec (traffic)
Filter spec
Set of packets for this reservation
Source address, source port
Treatment of Packets of One Session at One Router
Filtering
G3 has reservation filter spec including S1 and S2
G1, G2 from S1 only
R3 delivers from S2 to G3 but does not forward to R4
G1, G2 send RSVP request with filter excluding S2
G1, G2 only members of group reached through R4
R4 doesn't need to forward packets from this session
R4 merges filter spec requests and sends to R3
R3 no longer forwards this session's packets to R4
Handling of filtered packets not specified
Here they are dropped but could be best efforts delivery
R3 needs to forward to G3
Stores filter spec but doesn't propagate it
Reservation Styles
Determines manner in which resource requirements from members of group are
aggregated
Reservation attribute
Reservation shared among senders (shared)
Characterizing entire flow received on multicast address
Allocated to each sender (distinct)
Simultaneously capable of receiving data flow from each sender
Sender selection
List of sources (explicit)
All sources, no filter spec (wild card)
5.4 RSVP Protocol MECHANISMS
Two message types
Resv
Originate at multicast group receivers
Propagate upstream
Merged and packed when appropriate
Create soft states
Reach sender
Allow host to set up traffic control for first hop
Path
Provide upstream routing information
Issued by sending hosts
Transmitted through distribution tree to all destinations
RSVP is a transport layer protocol that enables a network to provide differentiated levels of
service to specific flows of data. Ostensibly, different application types have different
performance requirements. RSVP acknowledges these differences and provides the
mechanisms necessary to detect the levels of performance required by different applications
and to modify network behaviors to accommodate those required levels. Over time, as time-
and latency-sensitive applications mature and proliferate, RSVP's capabilities will become
increasingly important.
Background
Efforts to marry IP and ATM
IP switching (Ipsilon)
Tag switching (Cisco)
Aggregate route based IP switching (IBM)
Cascade (IP navigator)
All use standard routing protocols to define paths between end points
Assign packets to path as they enter network
Use ATM switches to move packets along paths
ATM switching (was) much faster than IP routers
Use faster technology
Developments
IETF working group in 1997, proposed standard 2001
Routers developed to be as fast as ATM switches
Remove the need to provide both technologies in same network
MPLS does provide new capabilities
QoS support
Traffic engineering
Virtual private networks
Multiprotocol support
Traffic Engineering
Ability to dynamically define routes, plan resource commitments based on known
demands and optimize network utilization
Basic IP allows primitive traffic engineering
E.g. dynamic routing
MPLS makes network resource commitment easy
Able to balance load in face of demand
Able to commit to different levels of support to meet user traffic requirements
Aware of traffic flows with QoS requirements and predicted demand
Intelligent re-routing when congested
VPN Support
Traffic from a given enterprise or group passes transparently through an internet
Segregated from other traffic on internet
Performance guarantees
Security
Multiprotocol Support
MPLS can be used on different network technologies
IP
Requires router upgrades
Coexist with ordinary routers
ATM
Enables MPLS-enabled and ordinary switches to co-exist
Frame relay
Enables MPLS-enabled and ordinary switches to co-exist
Mixed network
Explanation Setup
Labelled switched path established prior to routing and delivery of packets
QoS parameters established along path
Resource commitment
Queuing and discard policy at LSR
Interior routing protocol e.g. OSPF used
Labels assigned
Local significance only
Manually, or using the Label Distribution Protocol (LDP) or an enhanced
version of RSVP
Notes
MPLS domain is contiguous set of MPLS enabled routers
Traffic may enter or exit via direct connection to MPLS router or from non-MPLS
router
FEC determined by parameters, e.g.
Source/destination IP address or network IP address
Port numbers
IP protocol id
Differentiated services codepoint
IPv6 flow label
Forwarding is simple lookup in predefined table
Map label to next hop
Can define PHB at an LSR for given FEC
Packets between same end points may belong to different FEC
Time to Live Processing
Needed to support TTL since IP header not read
First label TTL set to IP header TTL on entry to MPLS domain
TTL of top entry on stack decremented at internal LSR
If zero, packet dropped or passed to ordinary error processing (e.g. ICMP)
If positive, value placed in TTL of top label on stack and packet forwarded
At exit from domain, (single stack entry) TTL decremented
If zero, as above
If positive, value placed in TTL field of IP header and the packet is forwarded
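The TTL rules above as a sketch; the dict-based label stack is an illustrative representation:

```python
def process_ttl(label_stack, at_egress=False):
    """Decrement the top label's TTL; drop the packet at zero,
    otherwise forward (copying the TTL into the IP header at
    domain exit, where only one stack entry remains)."""
    ttl = label_stack[0]["ttl"] - 1
    if ttl <= 0:
        return "drop (or pass to ordinary error processing, e.g. ICMP)"
    if at_egress:
        return f"copy TTL={ttl} into IP header and forward"
    label_stack[0]["ttl"] = ttl
    return "forward"

print(process_ttl([{"label": 42, "ttl": 1}]))                  # drop ...
print(process_ttl([{"label": 42, "ttl": 5}]))                  # forward
print(process_ttl([{"label": 42, "ttl": 5}], at_egress=True))  # copy TTL=4 ...
```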
Label Stack
Appear after data link layer header, before network layer header
Top of stack is earliest (closest to network layer header)
Network layer packet follows label stack entry with S=1
Over connection oriented services
Topmost label value in ATM header VPI/VCI field
Facilitates ATM switching
Top label inserted between cell header and IP header
In DLCI field of Frame Relay
Note: TTL problem
LSRs must be aware of LSP for given FEC, assign incoming label to LSP,
communicate label to other LSRs
Topology of LSPs
Unique ingress and egress LSR
Single path through domain
Unique egress, multiple ingress LSRs
Multiple paths, possibly sharing final few hops
Multiple egress LSRs for unicast traffic
Multicast
Route Selection
Selection of LSP for particular FEC
Hop-by-hop
LSR independently chooses next hop
Ordinary routing protocols e.g. OSPF
Doesn't support traffic engineering or policy routing
Explicit
LSR (usually ingress or egress) specifies some or all LSRs in LSP for given
FEC
Selected by configuration, or dynamically
Label Distribution
Setting up LSP
Assign label to LSP
Inform all potential upstream nodes of label assigned by LSR to FEC
Allows proper packet labelling
Learn next hop for LSP and label that downstream node has assigned to FEC
Allow LSR to map incoming to outgoing label
5.8 RTP ARCHITECTURE
Close coupling between protocol and application layer functionality
Framework for application to implement single protocol
Application level framing
Integrated layer processing
Multicast Support
Each RTP data unit includes:
Source identifier
Timestamp
Payload format
Relays
Intermediate system acting as receiver and transmitter for given protocol layer
Mixers
Receives streams of RTP packets from one or more sources
Combines streams
Forwards new stream
Translators
Produce one or more outgoing RTP packets for each incoming packet
E.g. convert video to lower quality
RTP Header
RTCP Functions
QoS and congestion control
Identification
Session size estimation and scaling
Session control
RTCP Transmission
Number of separate RTCP packets bundled in single UDP datagram
Sender report
Receiver report
Source description
Goodbye
Application specific
Packet Fields(SenderReport)
Sender Information Block
NTP timestamp: absolute wall clock time when report sent
RTP Timestamp: Relative time used to create timestamps in RTP packets
Sender's packet count (for this session)
Sender's octet count (for this session)
Receiver Report
Same as sender report except:
Packet type field has different value
No sender information block
Source Description Packet
Used by source to give more information
32 bit header followed by zero or more additional information chunks
E.g.:
0 END End of SDES list
1 CNAME Canonical name
2 NAME Real user name of source
3 EMAIL Email address
Goodbye (BYE)
Indicates one or more sources are no longer active
Confirms departure rather than failure of network
UNIT-01
HIGH SPEED NETWORKS
PART-A
1. What is ATM?[MAY/JUNE-2012]
Asynchronous Transfer Mode (ATM) is a method for multiplexing and switching that
supports a broad range of services. ATM is a connection-oriented packet switching technique
that generalizes the notion of a virtual connection to one that provides quality-of-service
guarantees.
4. Define MPLS?
Multi Protocol Label Switching is to standardize a label switching paradigm that
integrates layer 2 switching with layer 3 routing. The device that integrates routing and
switching functions is called a Label Switching Router (LSR).
17. What are the quality service (QoS) parameters of connection-oriented services?
Cell Loss Ratio (CLR).
Cell Delay Variation (CDV).
Peak-to-Peak Cell Delay Variation (Peak-to-Peak CDV).
Maximum Cell Transfer Delay (Max CTD).
Mean Cell Transfer Delay (Mean CTD).
UNIT-02
CONGESTION AND TRAFFIC MANAGEMENT
PART-A
7. What is single server queue?[MAY/JUN-2014]
The control element of the system is a server, which provides some service to items. If
the server is idle, an item is served immediately. Otherwise an arriving item joins a
waiting line.
[Diagram: arrival → waiting line (queue) → server → departure; the dispatching discipline
selects the next item to serve, and residence time spans waiting plus service.]
Waiting time
Items queued
Residence time
UNIT-03
TCP AND ATM CONGESTION CONTROL
PART-A
1. Define congestion.
Excessive network or internetwork traffic causing a general degradation of service.
5. Explain about the congestion control in a TCP/IP based internet implementation task.
IP is a connectionless, stateless protocol that includes no provision for detecting,
much less controlling, congestion.
TCP provides only end-to-end flow control and can deduce the presence of
congestion only by indirect means.
There is no cooperative, distributed algorithm to bind together the various TCP
entities.
RTO = q × RTO … (1)
The equation causes RTO to grow exponentially with each retransmission. The most
commonly used value of q is 2.
10.What are the mechanisms used in ATM traffic control to avoid congestion
condition?[MAY/JUNE-2015]
Resource management.
Connection admission control
Usage parameter control
Traffic shaping
18. What are the mechanisms used in TCP to control congestion?
TCP congestion control mechanism:
a). RTO timer management
b). window management
19. What is meant by open loop and closed loop control in ABR mechanism?
Open loop control: If there is no feedback to the source concerning congestion, this
approach is called open loop control.
Closed loop control: ABR has feedback to the source concerning congestion; this
approach is called closed loop control.
20. What is meant by allowed cell rate (ACR)?[APR/MAY-2010]
Allowed cell rate: The current rate at which source is permitted to send or transmit cell
in ABR mechanism is called allowed cell rate.
21. Define Behavior Class Selector (BCS)
Behaviour Class Selector (BCS): BCS enables an ATM network to provide different
service levels among UBR connections by associating each connection with one of a set of
behaviour classes.
22. What is cell delay variation?
In an ATM network, voice & video signals can be digitized & transmitted as a
stream of cells. A key requirement, especially for voice, is that the delay across the network
be short. ATM is designed to minimize the processing & transmission overhead of the
network so that very fast cell switching & routing is possible.
23. Why retransmission policy essential in TCP?
TCP maintains a queue of segments that have been sent but not yet acknowledged. The
TCP specification states that TCP will retransmit a segment if it fails to receive an
acknowledgment within a given time. A TCP implementation may employ one of three
retransmission strategies:
(i) First only
(ii) Batch
(iii) Individual
24. Why congestion control in a tcp/ip internet is complex?
The task is a difficult one because of the following factors:
(i) IP is a connectionless, stateless protocol that includes no provision for detecting, much
less controlling, congestion.
(ii) TCP provides only end-to-end flow control.
(iii) There is no cooperative distributed algorithm.
23. Write relationship b/w throughput & TCP window size W.
S = 1 for W ≥ RD/4
S = 4W/RD for W < RD/4
where
W = TCP window size (octets)
R = data rate (bps) at the TCP source available to a given TCP connection
D = propagation delay (seconds) between TCP source & destination over a given TCP
connection
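A quick check of the relation: the factor 4 arises because W is in octets (8W bits) while the round trip is 2D, so S = 8W/(R·2D) = 4W/RD. The numeric values below are illustrative (chosen for exact arithmetic):

```python
def normalized_throughput(w, r, d):
    """S for window w (octets), link rate r (bps), one-way delay d (s)."""
    s = 8 * w / (r * 2 * d)   # = 4W/RD
    return min(s, 1.0)

r, d = 1_000_000, 0.0625      # 1 Mbps link, 62.5 ms one-way delay
print(normalized_throughput(r * d / 4, r, d))  # 1.0: window saturates pipe
print(normalized_throughput(r * d / 8, r, d))  # 0.5: half the needed window
```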
capacity. The ABR mechanism uses explicit feedback to sources to assure that capacity is
fairly allocated.
UNIT-04
INTEGRATED AND DIFFERENTIATED SERVICE
PART-A
1. Write down the two different, complementary IETF Standards traffic management
Frameworks?
Integrated services
Differentiated services
Routing protocol
16. List out the two principal functionality areas that accomplish forwarding packets in
the router.
Classifier and route selection.
Packet scheduler.
24. What are the design goals of RED algorithm?[MAY/JUNE-2013]
Congestion avoidance
Global synchronization avoidance
UNIT-05
PROTOCOLS FOR QOS SUPPORT
PART-A
V | P | X | CC | M | PLT | SQNO
TIME STAMP
SYNCHRONIZATION SOURCE IDENTIFIER (SSRC)
CONTRIBUTING SOURCE IDENTIFIER (CSRC)
...
CSRC IDENTIFIER
e) V: Version (2 bits)
f) P: Padding (1 bit)
g) X: Extension (1 bit)
h) CC: CSRC count (4 bits)
i) M: Marker (1 bit)
j) PLT: Payload type (7 bits)
k) SQNO: Sequence number (16 bits)
l) Time Stamp (32 bits)
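The fixed 12-byte header layout above can be packed with Python's struct module; this sketch follows the field list (PLT is the payload type) and omits the variable-length CSRC list:

```python
import struct

def rtp_header(pt, seq, timestamp, ssrc, version=2,
               padding=0, ext=0, cc=0, marker=0):
    """Pack the fixed 12-byte RTP header (no CSRC entries)."""
    byte0 = (version << 6) | (padding << 5) | (ext << 4) | cc
    byte1 = (marker << 7) | pt
    # network byte order: 2 single bytes, 16-bit seq, 32-bit ts, 32-bit SSRC
    return struct.pack("!BBHII", byte0, byte1, seq, timestamp, ssrc)

hdr = rtp_header(pt=0, seq=1, timestamp=160, ssrc=0x1234)
print(len(hdr))        # 12
print(hdr[0] >> 6)     # 2 (version)
```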
10.Define QOS[MAY/JUNE-2012]
It refers to the properties of a network that contribute to the degree of satisfaction that
users perceive, relative to the network's performance.
12 Define RSVP?[MAY/JUNE-2011]
Resource Reservation Protocol was designed as an IP signaling protocol for
the integrated services model. RSVP can be used by a host to request a specific QoS
resource for a particular flow and by a router to provide the requested QoS along the
paths by setting up appropriate states.
14. What is the function of RTP relays and give its types?
A relay operating at a given protocol layer is an intermediate system that acts as both a
destination and a source in a data transfer.
UNIT-01
HIGH SPEED NETWORKS
PART-A
1. What is ATM?[MAY/JUNE-2012]
2. What are the main features of ATM?
3. What are the layers/plane of BISDN reference model?
4. Define MPLS?
5. What is called frame relay?
6. What are the advantages of DQDB MAC protocol?
7. Define VPI & VCI
8. Mention the High Speed LANs
9. What are the requirements for wireless LANs?[MAY/JUNE-2014]
10. What are the types of Ethernet?
11. Define VPN
12. Define ISDN?
13. What are the features of an ISDN?
14. What are the services of LAPD?
15. Define frame relay.
16. What are the traffic parameters of connection-oriented services?
17. What are the quality service (QoS) parameters of connection-oriented services?
18. Types of delays encountered by cells
19. What is the datalink control functions provided by LAPF?
20. Difference b/w AAL & AAL 3/5
21. What are the principles of ISDN ?
22. Difference b/w Frame relay and X.25 packet switching.[NOV/DEC-2012]
23. Give the neat sketch of ATM Protocol Architecture.
24. Draw the ATM Cell structure or Cell Format.[MAY/JUNE-2014,2013,NOV/DEC-2014]
PART-B
1. Discuss the various ATM service categories.[MAY/JUNE-2015,2013]
2. Explain the ATM Protocol architecture with a neat block diagram.[MAY/JUNE-2015,2013]
3. Explain the Frame Relay Networks with suitable diagram.[MAY/JUNE-2012]
4. Draw IEEE 802.11 architecture and Protocol architecture.[MAY/JUNE2013,NOV/DEC-2013]
5. Discuss the relevance of CSMA/CD in gigabit ethernets.[MAY/JUNE-2012, NOV/DEC-2012]
6.Explain in detail about Fiber Channel.
UNIT-02
CONGESTION AND TRAFFIC MANAGEMENT
PART-A
PART-B
1. Explain Queuing theory.[APR/MAY-2015]
2. Explain Queuing Analysis and its types.[APR/MAY-2015]
3. Explain Traffic Management In Congestion Control.[MAY/JUNE-2012,NOV/DEC-2012]
4. Explain the Congestion Control Mechanisms.[NOV/DEC-2012]
UNIT-03
TCP AND ATM CONGESTION CONTROL
PART-A
1. Define congestion.
2. Define congestion control.[MAY/JUNE-2014]
3. List out the TCP implementation policy option.
4. List out the three retransmit strategies in TCP traffic control?[MAY/JUNE-2014]
5. Explain about the congestion control in a TCP/IP based internet implementation task.
6. List out retransmission timer management techniques.[NOV/DEC-2010]
7. Write down the window management techniques.[NOV/DEC-2013]
8. Define binary exponential back off.[NOV/DEC-2012]
9. State the condition that must be met for a cell to conform.
10.What are the mechanisms used in ATM traffic control to avoid congestion
condition?[MAY/JUNE-2015]
11. How are timers useful in controlling congestion in TCP?
12.What is the difference between flow control and congestion control?
13. What is reactive congestion control and preventive congestion control.
14. Why congestion control is difficult to implement in TCP?
15. What are the accept policies used in TCP traffic control?
16. What is meant by silly window syndrome?
17. What is meant by cell insertion time?
18. What are the mechanisms used in TCP to control congestion?
19. What is meant by open loop and closed loop control in ABR mechanism?
20. What is meant by allowed cell rate (ACR)?[APR/MAY-2010]
21. Define Behavior Class Selector (BCS)
22. What is cell delay variation?
23. Why is the retransmission policy essential in TCP?
24. Why is congestion control in a TCP/IP internet complex?
25. Write the relationship between throughput and TCP window size W.
26. Define ABR.[MAY/JUNE-2013]
27. Define CBR.
28. Give examples of CBR applications.
PART-B
UNIT-04
INTEGRATED AND DIFFERENTIATED SERVICES
PART-A
1. Write down the two different, complementary IETF standard traffic management frameworks.
2. Write down the current traffic demand as viewed by the Internet service provider.
3. Explain about differentiated services.
4. What are the requirements for inelastic traffic?[APR/MAY-2008]
5. Give some applications that come under elastic traffic.[NOV/DEC-2013]
6. State the drawbacks of the FIFO queuing discipline.[APR/MAY-2008]
7. Distinguish between inelastic and elastic traffic.[NOV/DEC-2009]
8. Define the format of the DS field.
9. Define DS code point.
10. What is meant by traffic conditioning agreement?
11. Define DS boundary node.
12. Define DS interior node.
13. Define DS node.
14. Write down the two routing mechanisms used in ISA.
15. List out the ISA components?
16. List out the two principal functional areas that accomplish packet forwarding in the router.
17. Define TSpec.
18. List out the categories of service in ISA.
19. List out the advantages of ISA.[APR/MAY-2010]
20. Define delay jitter.
21. What is meant by differentiated service?[MAY/JUNE-2012]
22. What is meant by integrated service?
23. Define global synchronization.
24. What are the design goals of RED algorithm?[MAY/JUNE-2013]
PART-B
1. Explain the block diagram of the Integrated Services Architecture and give details about its components.[MAY/JUNE-2014,NOV/DEC-2013]
2. Explain the services offered by ISA.[APR/MAY-2015]
3. Explain the various queuing disciplines in ISA.[MAY/JUNE-2013,NOV/DEC-2013,2012]
4. Explain the RED algorithm.[APR/MAY-2015,MAY/JUNE-2013,NOV/DEC-2013]
5. Explain differentiated services briefly.[APR/MAY-2015,MAY/JUNE-2013,2014]
6. Write short notes on DS per-hop behaviour.[NOV/DEC-2013]
UNIT-05
PROTOCOLS FOR QOS SUPPORT
PART-A
PART-B
1. Explain the characteristics and goals of RSVP and the types of data flow.[APR/MAY-2014,2015]
2. Explain the reservation styles of RSVP in detail.[NOV/DEC-2013,2012]
3. Explain the RSVP protocol operation and mechanisms.[MAY/JUNE-2014]
4. Explain the MPLS architecture in detail.[MAY/JUNE-2015,2013,NOV/DEC-2012]
5. Explain the RTP protocol architecture.[MAY/JUNE-2013,2015,NOV/DEC-2012]
6. Explain the RTP data transfer protocol.[NOV/DEC-2012]
www.vidyarthiplus.com
Seventh Semester
(Regulation 2008)
PART B (5 X 16 = 80 marks)
11. (a) (i) Explain about frame relay networks in detail with suitable diagram. (8)
(Or)
(b) (i) Describe in detail Wi-Fi and WiMAX network applications and
requirements. (8)
(ii) Explain about Gigabit Ethernet in detail with neat diagram. (8)
12. (a) (i) Explain in detail about frame relay congestion control technique. (8)
(Or)
(b) (i) Explain in detail about single server queues and its application. (8)
13. (a) (i) Explain in detail about Karn's algorithm and window management. (8)
(ii) Explain about network management in detail with neat sketch. (8)
(Or)
(b) (i) Explain in detail about clock instability and jitter measurements. (10)
14. (a) Explain in detail about queuing disciplines: BRFQ, WFQ, GPS, and PS.
(Or)
15. (a) Explain in detail about the RTCP architecture and RTP protocol details.
(Or)
(b) Discuss about protocols used for QOS support with neat diagram.
----------------------------------
Reg.No. :
Question Paper Code : 21293
(Or)
(b) (i) Explain in detail about 802.11 architecture. (10)
(ii) Write short notes on:
(a) Wireless LANs.
(b) Wi-Fi networks.
(c) WiMAX networks. (6)
12. (a) (i) Explain the Single Server Queuing model in detail. (10)
(ii) Discuss briefly the effects of congestion in networks. (6)
(Or)
(b) Write notes on congestion control used in:
(i) Packet Switching Networks.
(ii) Frame Relay Networks.
14. (a) (i) Briefly discuss the various queuing disciplines of integrated
services. (10)
-------------------------------
Reg No :
Seventh Semester
(Regulation 2008)
(Common to PTCS High Speed Networks for B.E. (Part-Time) Seventh Semester
Electronics and Communication Engineering (Regulation 2009))
PART B (5 X 16 = 80 marks)
11. (a) (i) Explain the operation of AAL 1 and AAL with an example. (8)
(Or)
(b) (i) Illustrate why CSMA/CD is not suitable for wireless LANs. (8)
(ii) Draw the 802.11 protocol stack and discuss the functions of PCF and DCF. (8)
12. (a) (i) Explain in detail the following congestion control techniques.
(1) Back pressure. (4)
(2) Choke packet. (4)
(3) Explicit congestion signalling. (4)
(Or)
(b) (i) Explain the single server queuing model and its applications. (8)
(ii) Explain about traffic rate management in frame relay networks. (8)
13. (a) (i) Explain about TCP window management in detail. (8)
(ii) Explain the RTT variance estimation using Jacobson's algorithm in detail. (8)
(Or)
(b) (i) List and explain the ATM traffic parameters in detail. (8)
14. (a) (i) Explain the way in which ISA manages congestion and provides QOS
transport. (8)
(Or)
(b) Explain the differentiated services operation and the traffic conditioning functions
in detail.
15. (a) (i) List and explain the three RSVP reservation styles in detail. (9)
(Or)
(b) (i) Explain the RTP data transfer protocol architecture in detail. (8)
(ii) Explain the functions performed by the RTP control protocol and its packet types in
detail.
Reg.no :
Seventh Semester
(Regulation 2008/2010)
(Also Common to PTCS 2060 High Speed Networks for B.E. (Part-Time)
Seventh Semester Electronics and Communication Engineering
Regulation 2009)
PART B (5 X 16 = 80 marks)
11. (a) (i) Explain the call control procedure in frame relay networks. (8)
(Or)
12. (a) (i) Explain with an example the implementation of single server
queues. (8)
(Or)
(b) (i) Explain the effects of congestion in packet switching networks. (8)
13. (a) (i) Explain the TCP timer management techniques in detail. (8)
(ii) Discuss in detail about the congestion control techniques followed
in ATM networks. (8)
(Or)
(b) (i) Explain in detail about ABR capacity allocation. (8)
(ii) Discuss in detail about ABR traffic control. (8)
14. (a) (i) Draw the Integrated service architecture and explain it in detail.
(10)
(ii) Explain the fair queuing in detail. (6)
(Or)
(b) (i) Explain in detail the way in which the RED technique overcomes
congestion. (8)
(ii) Write notes on the DS per-hop behaviour. (8)
15. (a) (i) Explain the reservation styles of the RSVP in detail. (8)
(Or)
(ii) Explain the functions and message types of the RTP control
protocol. (8)
--------------------------------