
UNIT-2

The Data Link Layer is the second layer of the OSI layered model. It is responsible for converting the data stream into signals bit by bit and sending it over the underlying hardware. At the receiving end, the data link layer picks up the data from the hardware in the form of electrical signals, assembles it into a recognizable frame format, and hands it over to the upper layer.
The data link layer has two sub-layers:
Logical Link Control: It deals with protocols, flow control, and error control.
Media Access Control: It deals with actual control of the media.
Functionality of Data-link Layer
The data link layer performs many tasks on behalf of the upper layer. These are:
Framing
The data-link layer takes packets from the Network Layer and encapsulates them into frames. Then, it sends each frame bit by bit on the hardware. At the receiver's end, the data link layer picks up signals from the hardware and assembles them into frames.
Addressing
The data-link layer provides a layer-2 hardware addressing mechanism. The hardware address is assumed to be unique on the link. It is encoded into the hardware at the time of manufacturing.
Synchronization
When data frames are sent on the link, both machines must be synchronized in order for the transfer to take place.
Error Control
Sometimes signals may encounter problems in transit and bits get flipped. These errors are detected, and an attempt is made to recover the actual data bits. The layer also provides an error reporting mechanism to the sender.
Flow Control
Stations on the same link may have different speeds or capacities. The data-link layer provides flow control, which enables both machines to exchange data at the same speed.
Multi-Access
When hosts on a shared link try to transfer data, there is a high probability of collision. The data-link layer provides mechanisms such as CSMA/CD to enable access to a shared medium among multiple systems.
In the physical layer, data transmission involves synchronised transmission of bits from the
source to the destination.
The data link layer packs these bits into frames.
Data-link layer takes the packets from the Network Layer and encapsulates them into
frames. If the frame size becomes too large, the packet may be divided into smaller
frames. Smaller frames make flow control and error control more efficient.
Then, it sends each frame bit-by-bit on the hardware. At receiver’s end, data link layer
picks up signals from hardware and assembles them into frames.

Parts of a Frame
A frame has the following parts −
Frame Header − It contains the source and the destination addresses of the frame.
Payload field − It contains the message to be delivered.
Trailer − It contains the error detection and error correction bits.
Flag − It marks the beginning and end of the frame.

Types of Framing
Framing can be of two types, fixed sized framing and variable sized framing.
Fixed-sized Framing
Here the size of the frame is fixed and so the frame length acts as delimiter of the frame.
Consequently, it does not require additional boundary bits to identify the start and end of
the frame.
Example − ATM cells.
Variable-Sized Framing
Here, the size of each frame to be transmitted may be different. So additional mechanisms
are kept to mark the end of one frame and the beginning of the next frame.
It is used in local area networks.
The four framing methods that are widely used are
 Character count
 Starting and ending characters, with character stuffing
 Starting and ending flags, with bit stuffing
 Physical layer coding violations
Character Count
This method uses a field in the header to specify the number of characters in the frame.
When the data link layer at the destination sees the character count, it knows how many
characters follow, and hence where the end of the frame is. The disadvantage is that if the
count is garbled by a transmission error, the destination will lose synchronization and will
be unable to locate the start of the next frame. So, this method is rarely used.
Byte Stuffing
If the pattern of the flag byte is present in the message bytes, there must be a strategy so
that the receiver does not mistake the pattern for the end of the frame. In character-oriented
protocols, the mechanism adopted is byte stuffing.
In byte stuffing, a special byte called the escape character (ESC) is stuffed before every byte
in the message with the same pattern as the flag byte. If the ESC byte itself appears in the
message, another ESC byte is stuffed before it.

Example:
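As a minimal sketch of this mechanism; the flag (0x7E) and escape (0x7D) byte values here are illustrative assumptions, not values specified by the text:

```python
FLAG = 0x7E   # frame delimiter (illustrative value)
ESC = 0x7D    # escape character (illustrative value)

def byte_stuff(payload: bytes) -> bytes:
    """Stuff an ESC byte before any FLAG or ESC byte in the payload."""
    out = bytearray()
    for b in payload:
        if b in (FLAG, ESC):
            out.append(ESC)      # stuff the escape character first
        out.append(b)
    return bytes(out)

def byte_unstuff(stuffed: bytes) -> bytes:
    """Remove stuffed ESC bytes; the byte after an ESC is taken literally."""
    out = bytearray()
    escaped = False
    for b in stuffed:
        if not escaped and b == ESC:
            escaped = True
            continue
        out.append(b)
        escaped = False
    return bytes(out)
```

A flag byte inside the payload, for example, is escaped on the way out and restored on the way back: `byte_unstuff(byte_stuff(b'\x01\x7e\x02'))` returns the original bytes.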

Bit Stuffing

The flag is usually a special 8-bit pattern, "01111110", used to mark the beginning and the end
of the frame. In order to differentiate the message from the flag when the same bit
sequence occurs in the message, a single bit is stuffed. Whenever five
consecutive 1 bits appear in the message, an extra 0 bit is stuffed at the end of the five 1s.
When the receiver receives the message, it removes the stuffed 0 after each sequence of
five 1s. The un-stuffed message is then sent to the upper layers.
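The stuffing and unstuffing rules just described can be sketched as:

```python
def bit_stuff(bits: str) -> str:
    """After five consecutive 1s, stuff an extra 0 so the flag pattern
    01111110 can never appear inside the data."""
    out, run = [], 0
    for b in bits:
        out.append(b)
        run = run + 1 if b == '1' else 0
        if run == 5:
            out.append('0')       # the stuffed bit
            run = 0
    return ''.join(out)

def bit_unstuff(bits: str) -> str:
    """Drop the 0 that follows every run of five 1s."""
    out, run, i = [], 0, 0
    while i < len(bits):
        out.append(bits[i])
        run = run + 1 if bits[i] == '1' else 0
        i += 1
        if run == 5:
            i += 1                # skip the stuffed 0
            run = 0
    return ''.join(out)
```

For example, stuffing the flag-like data "01111110" yields "011111010", which unstuffs back to the original.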

Physical layer coding violations

The final framing method, physical layer coding violations, is applicable to networks
in which the encoding on the physical medium contains some redundancy. In such encodings,
typically a 1 bit is a high-low pair and a 0 bit is a low-high pair. The combinations low-low
and high-high, which are not used for data, may be used for marking frame boundaries.

Block Coding:
 In block coding, we divide our message into blocks, each of k bits, called datawords.
 We add r redundant bits to each block to make the total length n = k + r. The resulting
n-bit blocks are called codewords.
 With k bits, we can create a combination of 2^k datawords; with n bits, we can create a
combination of 2^n codewords.
 The block coding process is one-to-one; the same dataword is always encoded as the
same codeword. This means that we have 2^n − 2^k codewords that are not used. We call these
codewords invalid or illegal. The following figure shows the situation.

Figure: Datawords and codewords in block coding


Error Detection
If the following two conditions are met, the receiver can detect a change in the original
codeword.
1. The receiver has (or can find) a list of valid codewords.
2. The original codeword has changed to an invalid one.

The sender creates codewords out of datawords by using a generator that applies the rules
and procedures of encoding. Each codeword sent to the receiver may change during
transmission. If the received codeword is the same as one of the valid codewords, the word
is accepted; the corresponding dataword is extracted for use. If the received codeword is not
valid, it is discarded. However, if the codeword is corrupted during transmission but the
received word still matches a valid codeword, the error remains undetected.
Error Correction
In error correction, the receiver needs to find (or guess) the original codeword sent. We can
say that we need more redundant bits for error correction than for error detection.
What is an Error
The data can be corrupted during transmission (from source to receiver). It may be affected
by external noise or some other physical imperfection. In this case, the input data is not the
same as the received output data. This mismatched data is called an “Error”.
Data errors can cause the loss of important or secured data. Even a one-bit change in data
may affect the whole system's performance. Generally, data transfer in digital systems
is in the form of bit transfer, so a data error appears as bits flipped from 0 to 1 or from
1 to 0.

Types Of Errors
In a data sequence, if a 1 is changed to a 0 or a 0 is changed to a 1, it is called a “Bit error”.
There are generally three types of errors that occur in data transmission from transmitter to
receiver. They are:
• Single bit errors
• Multiple bit errors
• Burst errors
Single Bit Data Errors
A change in one bit of the whole data sequence is called a “Single bit error”.

Multiple Bit Data Errors


If two or more bits of the data sequence change between transmitter and receiver, it is called
a “Multiple bit error”.
Burst Errors
The change of a set of bits in the data sequence is called a “Burst error”. The length of a burst
error is measured from the first bit change to the last bit change.

Types of Error detection

1. Parity Checking
2. Cyclic Redundancy Check (CRC)
3. Longitudinal Redundancy Check (LRC)

Parity check
In computers, parity (from the Latin paritas, meaning equal or equivalent) is a technique
that checks whether data has been lost or written over when it is moved from one place in
storage to another or when it is transmitted between computers.
How parity works

Because data transmission is not an entirely error-free process, data is not always received
in the same way as it was transmitted. A parity bit adds checksums into data that enable
the target device to determine whether the data was received correctly.
An additional binary digit, the parity bit, is added to a group of bits that are moved together.
This bit, sometimes referred to as a check bit, is used only to identify whether the moved
bits arrived successfully.
Even parity bit vs. odd parity bit

There are two kinds of parity bits:

 In even parity, the number of bits with a value of one are counted. If that
number is odd, the parity bit value is set to one to make the total number of ones in the set
(including the parity bit) an even number. If the number of bits with a value of one is even,
the parity bit value is set to zero, so that the total number of ones in the set (including the
parity bit) remains an even number.

 In odd parity, if the number of bits with a value of one is an even number, the
parity bit value is set to one to make the total number of ones in the set (including the
parity bit) an odd number. If the number of bits with a value of one is odd, the parity bit
value is set to zero, so that the total number of ones in the set (including the parity bit)
remains an odd number.

At the receiving end, each group of incoming bits is checked to see if the group totals to an
even or odd number. If a transmission error occurs, the transmission is retried or the
system halts and an error message is sent to the user.

Parity Check Example-


Consider the data unit to be transmitted is 1001001 and even parity is used.
Then,
At Sender Side-

 Total number of 1’s in the data unit is counted.


 Total number of 1’s in the data unit = 3.
 Clearly, even parity is used and total number of 1’s is odd.
 So, parity bit = 1 is added to the data unit to make total number of 1’s even.
 Then, the code word 10010011 is transmitted to the receiver.

At Receiver Side-

 After receiving the code word, total number of 1’s in the code word is counted.
 Consider receiver receives the correct code word = 10010011.
 Even parity is used and total number of 1’s is even.
 So, receiver assumes that no error occurred in the data during the transmission.

Advantage-
This technique is guaranteed to detect an odd number of bit errors (one, three, five, and so
on): if an odd number of bits flip during transmission, the receiver can detect it by counting
the number of 1's.
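The sender and receiver steps above can be sketched as:

```python
def add_parity(data: str, even: bool = True) -> str:
    """Append a parity bit so the total count of 1s is even (or odd)."""
    ones = data.count('1')
    bit = ones % 2 if even else 1 - ones % 2
    return data + str(bit)

def parity_ok(codeword: str, even: bool = True) -> bool:
    """Receiver-side check: count the 1s in the whole code word."""
    ones = codeword.count('1')
    return ones % 2 == (0 if even else 1)
```

Running this on the example data unit 1001001 with even parity produces the code word 10010011, which the receiver accepts; any single flipped bit makes the check fail.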
In a two-dimensional scheme, parity check bits are calculated for each row, which is
equivalent to a simple parity check bit, and parity check bits are also calculated for all
columns; both are sent along with the data. At the receiving end these are compared with
the parity bits calculated on the received data.

Two-dimensional parity check:


A better approach is the two-dimensional parity check. In this method, the dataword is
organized in a table (rows and columns). In the following figure, the data to be sent, five
7-bit bytes, are put in separate rows. For each row and each column, one parity-check bit is
calculated. The whole table is then sent to the receiver, which finds the syndrome for each
row and each column. As the figure shows, the two-dimensional parity check can
detect up to three errors that occur anywhere in the table (arrows point to the locations of
the created nonzero syndromes). However, errors affecting 4 bits may not be detected.
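A sketch of the construction, using short illustrative rows rather than the figure's five 7-bit bytes:

```python
def two_d_parity(rows):
    """Append an even-parity bit to each row, then a final row holding the
    parity of every column (including the row-parity column)."""
    with_row = [r + str(r.count('1') % 2) for r in rows]
    width = len(with_row[0])
    col = ''.join(str(sum(int(r[i]) for r in with_row) % 2)
                  for i in range(width))
    return with_row + [col]
```

For rows "1100" and "1011", the table sent is ["11000", "10111", "01111"]: each row carries its own parity bit and the last row carries the column parities.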

Longitudinal Redundancy Check (LRC)
In this error detection method, a block of bits is organized in a table with rows and
columns. The parity bit for each column is then calculated, creating a new row of eight bits,
which are the parity bits for the whole block. After that, the newly calculated parity bits are
attached to the original data and sent to the receiver.


LRC increases the likelihood of detecting burst errors. An LRC of n bits can easily detect a
burst error of n bits.
However, if two bits in one data unit are damaged and two bits in exactly the same
positions in another data unit are also damaged, the LRC checker will not detect
an error.
How LRC Fails to Detect Burst Noise
Notice that although the 5th and 7th bits of the 1st and 2nd data units have been changed,
the LRC calculated by the receiver is still the same as the LRC received. Thus the receiver's
checker cannot detect this burst error.
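This failure mode can be demonstrated in a few lines; the 4-bit data units below are illustrative, not the figure's values:

```python
def lrc(block):
    """Column-wise even parity over equal-length binary strings."""
    width = len(block[0])
    return ''.join(str(sum(int(row[i]) for row in block) % 2)
                   for i in range(width))

# Flipping the SAME bit positions in two data units leaves every column's
# parity unchanged, so the burst error goes undetected.
original = ["1100", "1010"]
damaged = ["0100", "0010"]   # first bit of both units flipped
```

Both blocks produce the LRC "0110", so the checker sees no difference.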

Cyclic Redundancy Check (CRC)
An error detection mechanism in which a special number is appended to a block of data in
order to detect any changes introduced during storage (or transmission).
• At the sender side, the data unit to be transmitted is divided by a predetermined divisor
(a binary number) in order to obtain the remainder. This remainder is called the CRC.
• The CRC has one bit fewer than the divisor. That is, if the CRC is of n bits, the divisor is of
n + 1 bits.
• The sender appends this CRC to the end of the data unit such that the resulting data unit
becomes exactly divisible by the predetermined divisor, i.e. the remainder becomes zero.

At the destination, the incoming data unit, i.e. data + CRC, is divided by the same
predetermined binary divisor.
• If the remainder after division is zero, there is no error in the data unit and the receiver
accepts it.
• If the remainder after division is not zero, it indicates that the data unit has been damaged
in transit and it is therefore rejected.
• This technique is more powerful than the parity check and checksum error detection.
• CRC is based on binary division. A sequence of redundant bits, called the CRC or CRC
remainder, is appended to the end of a data unit such as a byte.
Requirements of CRC :
A CRC will be valid if and only if it satisfies the following requirements:
1. It should have exactly one bit fewer than the divisor.
2. Appending the CRC to the end of the data unit should result in the bit sequence which is
exactly divisible by the divisor.
• The various steps followed in the CRC method are:
1. A string of n 0s is appended to the data unit. The length of the predetermined divisor is n + 1.
2. The newly formed data unit, i.e. original data + string of n 0s, is divided by the divisor
using binary division and the remainder is obtained. This remainder is called the CRC.
3. The string of n 0s appended to the data unit is then replaced by the CRC remainder (which is
also of n bits).
4. The data unit + CRC is then transmitted to receiver.
5. The receiver on receiving it divides data unit + CRC by the same divisor & checks the
remainder.
6. If the remainder of division is zero, receiver assumes that there is no error in data and it
accepts it.
7. If remainder is non-zero then there is an error in data and receiver rejects it.

• For example, if the data to be transmitted is 1001 and the predetermined divisor is 1011, the
procedure given below is used:
1. A string of 3 zeros is appended to the data 1001, as the divisor is of 4 bits. The newly formed
data unit is 1001000.
2. The data unit 1001000 is divided by 1011.
3. During this process of division, whenever the leftmost bit of the dividend or remainder is 0,
we use a string of 0s of the same length as the divisor. Thus in this case the divisor 1011 is
replaced by 0000.
4. At the receiver side, the data received is 1001110.
5. This data is again divided by the divisor 1011.
6. The remainder obtained is 000; it means there is no error.
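The modulo-2 division above can be sketched in code; running it on the worked example (data 1001, divisor 1011) reproduces the CRC and the receiver's zero remainder:

```python
def crc_remainder(data: str, divisor: str) -> str:
    """Append (len(divisor)-1) zero bits and perform modulo-2 long
    division; the final remainder is the CRC."""
    k = len(divisor) - 1
    buf = [int(b) for b in data] + [0] * k
    div = [int(b) for b in divisor]
    for i in range(len(data)):
        if buf[i]:                      # leading bit 1: XOR with divisor
            for j, d in enumerate(div):
                buf[i + j] ^= d
    return ''.join(map(str, buf[-k:]))

def crc_check(frame: str, divisor: str) -> bool:
    """Receiver side: divide data + CRC; a zero remainder means no error."""
    k = len(divisor) - 1
    buf = [int(b) for b in frame]
    div = [int(b) for b in divisor]
    for i in range(len(frame) - k):
        if buf[i]:
            for j, d in enumerate(div):
                buf[i + j] ^= d
    return not any(buf)
```

Here `crc_remainder("1001", "1011")` gives "110", so the transmitted frame is 1001110, and `crc_check("1001110", "1011")` is true.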
• CRC can detect all burst errors that affect an odd number of bits.
• The probability of error detection and the types of detectable errors depend on the choice
of divisor.
• Thus the two major requirements of CRC are:
(a) The CRC should have exactly one bit fewer than the divisor.
(b) Appending the CRC to the end of the data unit should result in a bit sequence that is
exactly divisible by the divisor.
Polynomial codes
• A pattern of 0s and 1s can be represented as a polynomial with coefficients of 0 and 1.
• Here, the power of each term shows the position of the bit and the coefficient shows the
value of the bit.
• For example, if the binary pattern is 100101, its corresponding polynomial representation is
x^5 + x^2 + 1. The figure shows the polynomial after all terms with zero coefficients are
removed, x^1 is replaced by x, and x^0 by 1.

• The benefit of using polynomial codes is that they produce short codes. For example, here a
6-bit pattern is replaced by 3 terms.
• In polynomial codes, the degree is 1 less than the number of bits in the binary pattern.
The degree of a polynomial is the highest power in the polynomial. For example, the
degree of the polynomial x^5 + x^2 + 1 is 5; the bit pattern in this case is 6 bits.
• Addition of two polynomials is based on the modulo-2 method. In such a case, addition and
subtraction are the same.
• Addition or subtraction is done by combining terms and deleting pairs of identical terms.
For example, adding x^5 + x^4 + x^2 and x^6 + x^4 + x^2 gives x^6 + x^5. The terms x^4 and x^2 are deleted.
• If three polynomials are to be added and we get the same term three times, a pair of them
is deleted and the third term is kept. For example, if x^2 appears three times, we keep
only one x^2.
• In multiplication of two polynomials, the powers are added. For example,
multiplying x^5 + x^3 + x^2 + x by x^2 + x + 1 yields:
(x^5 + x^3 + x^2 + x)(x^2 + x + 1)
= x^7 + x^6 + x^5 + x^5 + x^4 + x^3 + x^4 + x^3 + x^2 + x^3 + x^2 + x
= x^7 + x^6 + x^3 + x
Here, the first polynomial is multiplied by all terms of the second. The result is then
simplified and pairs of equal terms are deleted.
• In case of division, the two polynomials are divided as per the rules of binary division, until
the degree of the dividend is less than that of the divisor.
CRC generator using polynomials
• If we consider the data unit 1001 and the divisor (polynomial generator) 1011, their
polynomial representations are x^3 + 1 and x^3 + x + 1.

• Now a string of n 0s (n being one less than the number of divisor bits) is appended to the
data. The data becomes 1001000, and its corresponding polynomial representation is x^6 + x^3.
• The division of x^6 + x^3 by x^3 + x + 1 is shown in the figure.
• The polynomial generator should have following properties:
1. It should have at least two terms.
2. The coefficient of the term x^0 should be 1.
3. It should not be divisible by x.
4. It should be divisible by x+ 1.

Hamming Code
Hamming code is a block code that is capable of detecting up to two simultaneous bit errors
and correcting single-bit errors. It was developed by R.W. Hamming for error correction.
In this coding method, the source encodes the message by inserting redundant bits within
the message. These redundant bits are extra bits that are generated and inserted at specific
positions in the message itself to enable error detection and correction. When the
destination receives this message, it performs recalculations to detect errors and find the bit
position that has error.

Encoding a message by Hamming Code


The procedure used by the sender to encode the message encompasses the following steps −
Step 1 − Calculation of the number of redundant bits.
Step 2 − Positioning the redundant bits.
Step 3 − Calculating the values of each redundant bit.
Once the redundant bits are embedded within the message, this is sent to the user.
Step 1 − Calculation of the number of redundant bits.
If the message contains m data bits, r redundant bits are added to it
so that the code is able to indicate at least (m + r + 1) different states. Here, (m + r) states
indicate the location of an error in each of the (m + r) bit positions, and one additional state
indicates no error. Since r bits can indicate 2^r states, 2^r must be at least equal to (m + r + 1).
Thus the following equation should hold: 2^r ≥ m + r + 1.
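The inequality can be solved by a simple search for the smallest r that satisfies it:

```python
def redundant_bits(m: int) -> int:
    """Smallest r with 2^r >= m + r + 1."""
    r = 0
    while 2 ** r < m + r + 1:
        r += 1
    return r
```

For example, 7 data bits require 4 redundant bits (2^4 = 16 ≥ 7 + 4 + 1 = 12), and 4 data bits require 3.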

Step 2 − Positioning the redundant bits.


The r redundant bits are placed at bit positions that are powers of 2, i.e. 1, 2, 4, 8, 16, etc. They are
referred to in the rest of this text as r1 (at position 1), r2 (at position 2), r3 (at position 4), r4 (at
position 8) and so on.

Step 3 − Calculating the values of each redundant bit.


The redundant bits are parity bits. A parity bit is an extra bit that makes the number of 1s
either even or odd. The two types of parity are −
Even Parity − Here the total number of 1s in the message is made even.
Odd Parity − Here the total number of 1s in the message is made odd.
Each redundant bit, ri, is calculated as the parity, generally even parity, based upon its bit
position. It covers all bit positions whose binary representation includes a 1 in the
ith position except the position of ri. Thus −
 r1 is the parity bit for all data bits in positions whose binary representation
includes a 1 in the least significant position excluding 1 (3, 5, 7, 9, 11 and so on)
 r2 is the parity bit for all data bits in positions whose binary representation
includes a 1 in the position 2 from right except 2 (3, 6, 7, 10, 11 and so on)
 r3 is the parity bit for all data bits in positions whose binary representation
includes a 1 in the position 3 from right except 4 (5-7, 12-15, 20-23 and so on)

Decoding a message in Hamming Code


Once the receiver gets an incoming message, it performs recalculations to detect errors and
correct them. The steps for recalculation are −
Step 1 − Calculation of the number of redundant bits.
Step 2 − Positioning the redundant bits.
Step 3 − Parity checking.
Step 4 − Error detection and correction

Step 1 − Calculation of the number of redundant bits


Using the same formula as in encoding, the number of redundant bits is ascertained:
2^r ≥ m + r + 1, where m is the number of data bits and r is the number of redundant bits.

Step 2 − Positioning the redundant bits


The r redundant bits are placed at bit positions that are powers of 2, i.e. 1, 2, 4, 8, 16, etc.

Step 3 − Parity checking


Parity bits are recalculated based upon the data bits and the redundant bits, using the same
rule as during generation. This gives the check bits c1, c2, c3, c4, etc. Thus
c1 = parity(1, 3, 5, 7, 9, 11 and so on)
c2 = parity(2, 3, 6, 7, 10, 11 and so on)
c3 = parity(4-7, 12-15, 20-23 and so on)

Step 4 − Error detection and correction


The decimal equivalent of the parity bits' binary values is calculated. If it is 0, there is no
error. Otherwise, the decimal value gives the bit position that has the error. For example, if
c1c2c3c4 = 1001, it implies that the data bit at position 9 (the decimal equivalent of 1001) has
an error. The bit is flipped to get the correct message.
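A sketch of the encoder and the syndrome check in code, assuming even parity and bit positions numbered from 1 at the least significant (rightmost) end; the bit ordering matches the worked example that follows:

```python
def hamming_encode(data: str) -> str:
    """Even-parity Hamming encoder. The returned code word is written
    MSB first; data bits fill the non-power-of-2 positions."""
    m = len(data)
    r = 0
    while 2 ** r < m + r + 1:          # 2^r >= m + r + 1
        r += 1
    n = m + r
    bits = {}                          # position -> bit value
    d = [int(b) for b in reversed(data)]
    di = 0
    for pos in range(1, n + 1):
        if pos & (pos - 1) == 0:       # power of 2: redundancy position
            bits[pos] = 0
        else:
            bits[pos] = d[di]
            di += 1
    for i in range(r):                 # fill in the parity bits
        p = 2 ** i
        bits[p] = sum(bits[q] for q in range(1, n + 1)
                      if q != p and q & p) % 2
    return ''.join(str(bits[pos]) for pos in range(n, 0, -1))

def hamming_syndrome(codeword: str) -> int:
    """Recompute the checks; 0 means no error, otherwise the value is the
    position (counted from the right) of the flipped bit."""
    n = len(codeword)
    bits = {n - i: int(b) for i, b in enumerate(codeword)}
    syndrome, p = 0, 1
    while p <= n:
        if sum(bits[q] for q in range(1, n + 1) if q & p) % 2:
            syndrome |= p
        p *= 2
    return syndrome
```

Encoding the data 1001101 yields the 11-bit code word 10011100101, and flipping bit 7 of that code word produces a syndrome of 7, pointing at the damaged position.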

Example of Hamming Code Generation


Suppose a binary data 1001101 is to be transmitted. To implement hamming code for this,
following steps are used:
1. Calculating the number of redundancy bits required. Since the number of data bits is 7, the
value of r is calculated as
2^r ≥ m + r + 1
2^4 = 16 ≥ 7 + 4 + 1 = 12
Therefore no. of redundancy bits = 4.
2. Determining the positions of the various data bits and redundancy bits. The various r bits
are placed at the positions that correspond to the powers of 2, i.e. 1, 2, 4, 8.

3. Calculating the values of the redundancy bits. Thus the data 1 0 0 1 1 1 0 0 1 0 1 will be transmitted.

Error Detection & Correction


Considering the example discussed above, suppose bit number 7 has been changed from 1 to
0. The data will be erroneous.

Data sent: 1 0 0 1 1 1 0 0 1 0 1
Data received: 1 0 0 1 0 1 0 0 1 0 1 (seventh bit changed)

The data link layer has two main functions: flow control and error control.
Flow Control
In data communication, flow control manages the transmitted data between sender and
receiver. First, data is transmitted to the receiver; after it is received, an acknowledgment is
sent back to the sender, and then the sender sends the next data. Flow control is responsible
for adjusting the amount of transmitted data. The receiver's capacity is limited, and the
incoming data must not be allowed to overwhelm it. The receiver needs time to check and
process the incoming data and store it before it can be used, and this processing speed is
much slower than the data transmission speed. If incoming data almost fills up the receiver's
buffer memory, the receiver will tell the sender to slow down the transmission speed or even
pause.
Error Control
Error control in the data link layer means two things: error detection and error correction.
Sometimes data is lost or damaged during transmission; error control informs the
sender, identifies the affected frames, and asks for retransmission. This process is called
automatic repeat request (ARQ). (In reality, the acknowledgment (ACK) sent back to the sender
could also be lost or damaged, but we will not discuss this here. In the demonstration, the
error rate of ACKs is assumed to be zero.)

Noiseless Channels Protocol


Let us first assume we have an ideal channel in which no frames are lost, duplicated, or
corrupted.
1. Simplest Protocol

It is very simple. The sender sends a sequence of frames without even thinking about the
receiver. To send three frames, three events occur at the sender site and three events at the
receiver site. Note that the data frames are shown by tilted boxes; the height of the box
defines the transmission time difference between the first bit and the last bit in the frame.

2. Stop-and-Wait Protocol

In Stop-and-Wait Protocol the sender sends one frame, stops until it receives confirmation
from the receiver (okay to go ahead), and then sends the next frame.

In this figure we can see the traffic on the forward channel (from sender to receiver) and the
reverse channel. At any time, there is either one data frame on the forward channel or one
ACK frame on the reverse channel. We therefore need a half-duplex link.
Noisy Channels Protocol
1. Stop-and-Wait Automatic Repeat Request
2. Go-Back-N Automatic Repeat Request
3. Selective Repeat Automatic Repeat Request

Noisy Channels:
Stop-and-wait ARQ

Stop-and-wait ARQ is the most basic mechanism among the three, and it is the foundation of
the other two. In stop-and-wait ARQ, frames and ACKs are numbered 0 and 1 alternately.
Frame 0 is sent to the receiver and ACK 1 is sent back to the sender; frame 1 goes to the
receiver and ACK 0 comes back to the sender, and so on.

The sending device sends frame 0 to the receiving device and keeps a copy of it. At the
same time, a timer starts. If frame 0 is received correctly, a positive ACK 1 is sent back to
the sender, telling the sender to expect frame 1 next. If frame 0 is lost during transmission,
the receiving device receives nothing, so it does nothing. If a damaged frame 0 arrives, the
receiver discards it automatically and remains silent. But the sending device is still waiting
for ACK 1; after the timer expires, the sender assumes the transmission of frame 0 failed
and retransmits it, which is why the sender keeps a copy of frame 0. There are control
variables for the sender and receiver, called S and R. S holds the number of the most
recently sent frame; R holds the number of the expected frame. The timer duration must be
longer than the total time for a frame to reach the receiver plus an ACK to return to the
sender; otherwise retransmissions repeat forever.
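A toy simulation of the sender side, under the text's assumption that ACKs are never lost; the loss model and parameter values are illustrative assumptions:

```python
import random

def stop_and_wait_send(frames, loss_rate=0.3, seed=42):
    """Toy stop-and-wait sender: keep a copy of the current frame and
    retransmit it on every timeout until it gets through.  ACKs are
    assumed error-free, as stated in the text."""
    random.seed(seed)
    delivered = []          # what the receiver accepts, in order
    s = 0                   # sender control variable S (0/1 alternation)
    for frame in frames:
        while random.random() < loss_rate:
            pass            # frame lost; timer expires, resend the copy
        delivered.append((s, frame))   # frame arrives; receiver ACKs
        s ^= 1              # next frame uses the other sequence number
    return delivered
```

However many retransmissions occur, the receiver ends up with the frames in order, numbered 0, 1, 0, 1, …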

Normal operation

In normal operation, the sender sends a frame, waits for its ACK, and then sends the next frame.

Stop-and-wait ARQ, lost frame

As figure 2 shows, frame 1 is lost on its way from sender to receiver, but the sender is still
expecting ACK 0 back. After the timer times out, frame 1 is sent again.

Stop-and-wait ARQ lost frame [1]

Lost or delayed ACK

Stop-and-wait ARQ, lost ACK frame

As figure 3 shows, ACK 0 is lost on its way back to the sender. So after a timeout, frame 1 is
sent again. The receiver side is expecting frame 0, so the duplicate frame 1 is discarded.

Stop-and-wait ARQ lost ACK [1]

Stop-and-wait ARQ, delayed ACK

Frame 0 is sent to the receiver, and ACK 1 is sent back to the sender, but its transmission is
delayed. After a timeout, the sender sends frame 0 again; the receiver is expecting frame 1,
so the duplicate frame 0 is discarded, and the duplicate ACK 1 is discarded by the sender.
Frame 1 is then sent.
Stop-and-wait ARQ delayed ACK [1]

Go-Back-N ARQ

Even if the link quality is very good (assume there are no frame errors), stop-and-wait ARQ
allows only one frame to be transmitted at a time. In reality the link is always noisy, and
frames are lost or damaged; the sender and receiver then have nothing to do except wait.
Stop-and-wait ARQ is therefore very inefficient. To improve on this, we can send several
frames before waiting for an acknowledgment. This is the idea behind Go-Back-N ARQ.
Sequence Numbers

Because several frames are awaiting transmission at a time, we must number them
sequentially. If the header of the frame allows m bits for the sequence number, the sequence
numbers range from 0 to 2^m − 1. If m = 2, the sequence numbers will be
0, 1, 2, 3, 0, 1, 2, 3, … repeating in this way, unlike stop-and-wait ARQ's 0, 1, 0, 1, …
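For instance, with m = 2 the sequence numbers simply wrap modulo 2^m:

```python
m = 2                                   # header bits for the sequence number
seq_numbers = [frame % (2 ** m) for frame in range(8)]
print(seq_numbers)                      # [0, 1, 2, 3, 0, 1, 2, 3]
```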

Sender sliding window

Now a “group” of frames is sent to the receiver, and we need something to hold this group
until the ACKs arrive. This is the concept of the “sliding window”. The window size is fixed at
2^m − 1. Inside the sliding window are copies of the transmitted frames. When a correct ACK
arrives, the sliding window slides forward.

Go-Back-N ARQ sender sliding window [1]


Receiver sliding window

The receiver sliding window in Go-Back-N ARQ always has size 1. The receiver waits for the
correct frame to arrive in the correct order, then sends back the ACK and slides forward. If a
frame is lost or damaged, the receiver waits for the resend. Even if the following frames are
correct, the receiver discards them automatically.

Go-Back-N ARQ receiver sliding window [1]


Control Variables

In Go-Back-N ARQ, the sender's control variables are S, S_F, and S_L, while the receiver's
variable is still R. The sliding window size is W. S is the sequence number of the latest sent
frame, S_F is the sequence number of the first frame in the sliding window, and S_L is the
sequence number of the last frame in the sliding window. R is the sequence number of the
expected frame. W = S_L − S_F + 1 = 2^m − 1.
Only when R matches the sequence number of the received frame is the frame accepted;
otherwise it is discarded.

Go-Back-N ARQ control variable [1]

Timer
Inside the sliding window, each sent frame has an individual timer. The total number of
timers equals the sliding window size.
Lost or damaged frame

The receiver sends back an ACK to the sender if the correct frame is received (the right frame
in the right order). If a frame is lost or damaged, the receiver remains silent. If the ACK does
not come back in time, the sender resends the group of frames from S_F to S. The receiver is
only “loyal” to the first expected frame; no matter what the condition of the following frames,
even if they are correct, it still ignores them.
Go-Back-N ARQ, normal operation
Go-Back-N ARQ normal operation [1]

Frames 0 and 1 are sent, and ACK 1 and ACK 2 come back to the sender. Frame 2 is sent, and
ACK 3 is sent back. Then frame 3 is sent to the receiver.

Go-Back N ARQ, lost frame

Go-Back-N ARQ lost frame [1]

Frames 0 and 1 are sent, and ACK 1 and ACK 2 come back to the sender. Frames 2 and 3 are
sent, but frame 2 is lost in transmission. When frame 3 is received out of order, it is
discarded by the receiver. After a timeout, frame 2 is resent; the receiver then sends ACK 3
back, and frame 3 is resent.
Go-Back-N ARQ, sender window size
Go-Back-N ARQ sender window size [1] According to the provision,

In figure 11 a, sender slide window size is less than 2 m. ACK 1,2,3 are lost. After frame 0
timeout, frame 0, 1, 2 are resent. But receiver is excepting frame 3 now, so when the first
resent data frame 0 arrived, there are mismatched. So is could discarded correctly.

In figure 11b, the sender's sliding window size is equal to 2^m. All ACKs are lost; after the
timeout, the first frame 0 is resent. At the same time, the receiver's sliding window is
pointing at the second frame 0. The sequence numbers "match", but they belong to
different cycles, so the data is erroneously accepted.
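A small simulation (illustrative only) of the scenario in the two figures shows why the sender window must be at most 2^m - 1:

```python
def receiver_wrongly_accepts(window, m):
    """Send a full window, lose every ACK, resend frame 0 of the old cycle.
    Return True if the receiver mistakes it for the next cycle's frame 0."""
    seqs = 2 ** m
    expected = 0
    for i in range(window):          # receiver accepts `window` in-order frames
        assert i % seqs == expected
        expected = (expected + 1) % seqs
    # all ACKs lost -> sender times out and resends old frame 0
    return expected == 0

assert receiver_wrongly_accepts(window=4, m=2) is True   # W = 2^m: duplicate accepted
assert receiver_wrongly_accepts(window=3, m=2) is False  # W = 2^m - 1: duplicate rejected
```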
Selective Repeat ARQ
In reality, links are usually noisy, and lost or damaged frames occur often. This means
many frames are discarded by the receiver automatically even though they are correct and
merely out of order. The sender has to transmit these "innocent" frames again and again;
bandwidth is used up and transmission slows down. All of this keeps Go-Back-N ARQ from
being very efficient, so an improved protocol was devised. There is no need to resend N
frames; instead, only the lost or damaged frame is resent. This mechanism is called
Selective Repeat ARQ.
Sender sliding window
The control variables in Selective Repeat ARQ are the same as in Go-Back-N ARQ: SF, SL,
and S. But the sliding window size becomes 2^(m-1).
Receiver sliding window

The receiver has two control variables, RF and RL. Its sliding window size is also 2^(m-1).
Figure: Selective Repeat ARQ receiver sliding window

Now the sender's sliding window is the same size as the receiver's. In Go-Back-N ARQ the
receiver looks for one specific sequence number at a time; in Selective Repeat ARQ it looks
for any sequence number within a range, from RF to RL.
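The range test the receiver performs is a simple modular check; a sketch, with variable names following the text:

```python
def in_receive_window(seq, r_f, size, m):
    """True if seq lies in [RF, RF + size) modulo 2^m."""
    return (seq - r_f) % (2 ** m) < size

m = 3
size = 2 ** (m - 1)                      # Selective Repeat window: 2^(m-1) = 4
assert in_receive_window(2, r_f=1, size=size, m=m)       # inside RF..RL
assert not in_receive_window(5, r_f=1, size=size, m=m)   # outside the window
assert in_receive_window(0, r_f=6, size=size, m=m)       # window wraps around
```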
NAK
NAK means negative acknowledgment; of the two protocols, it exists only in Selective
Repeat ARQ.
Selective Repeat ARQ, lost frame

Figure: Selective Repeat ARQ lost frame [1]

Selective Repeat ARQ, sender window size

Figure: Selective Repeat ARQ sender window size [1]


A damaged or lost ACK or NAK can also happen in reality, but here we assume they are
always received perfectly. In a more complex situation, both sides of the transmission can
act as sender and receiver; this is called bidirectional transmission and is not discussed
here.

HDLC

High-level Data Link Control (HDLC) is a bit-oriented protocol for communication over
point-to-point and multipoint links.
Configurations and Transfer Modes

HDLC provides two common transfer modes that can be used in different configurations:
normal response mode (NRM) and asynchronous balanced mode (ABM).

Normal Response Mode

In normal response mode (NRM), the station configuration is unbalanced. We have one
primary station and multiple secondary stations. A primary station can send commands; a
secondary station can only respond. NRM is used for both point-to-point and multipoint
links.

Asynchronous Balanced Mode


In asynchronous balanced mode (ABM), the configuration is balanced. The link is
point-to-point, and each station can function as a primary and a secondary (acting as
peers).

Frames

To provide the flexibility necessary to support all the options possible in the modes and
configurations just described, HDLC defines three types of frames: information frames (I-
frames), supervisory frames (S-frames), and unnumbered frames (U-frames).
Each type of frame serves as an envelope for the transmission of a different type of message.

I-frames are used to transport user data and control information relating to user data
(piggybacking).
S-frames are used only to transport control information.

U-frames are reserved for system management. Information carried by U-frames is
intended for managing the link itself.
Frame Format

Each frame in HDLC may contain up to six fields: a beginning flag field, an address field, a
control field, an information field, a frame check sequence (FCS) field, and an ending flag
field. In multiple-frame transmissions, the ending flag of one frame can serve as the
beginning flag of the next frame.

Fields
Let us now discuss the fields and their use in different frame types:

Flag field: The flag field of an HDLC frame is an 8-bit sequence with the bit pattern
01111110 that identifies both the beginning and the end of a frame and serves as a
synchronization pattern for the receiver.
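Because the flag pattern 01111110 contains six consecutive 1s, a bit-oriented protocol such as HDLC stuffs a 0 after every five consecutive 1s in the data, so the flag can never appear inside a frame. A sketch:

```python
def bit_stuff(bits):
    """Insert a 0 after every run of five consecutive 1s."""
    out, run = [], 0
    for b in bits:
        out.append(b)
        if b == "1":
            run += 1
            if run == 5:
                out.append("0")   # stuffed bit
                run = 0
        else:
            run = 0
    return "".join(out)

assert bit_stuff("01111110") == "011111010"   # the flag pattern cannot survive
```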

Address field: The second field of an HDLC frame contains the address of the secondary
station. If a primary station created the frame, it contains a to address. If a secondary
creates the frame, it contains a from address. An address field can be 1 byte or several
bytes long, depending on the needs of the network. One byte can identify up to 128
stations.

Control field: The control field is a 1- or 2-byte segment of the frame used for flow and
error control. The interpretation of bits in this field depends on the frame type. We discuss
this field later and describe its format for each frame type.

Information field: The information field contains the user's data from the network layer
or management information. Its length can vary from one network to another.

FCS field: The frame check sequence (FCS) is the HDLC error detection field. It can
contain either a 2- or 4-byte ITU-T CRC.
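As an illustration of the FCS idea, the following computes a CRC-16 with the CCITT polynomial x^16 + x^12 + x^5 + 1, MSB-first, initial value 0xFFFF (the exact HDLC variant also reflects the bits and inverts the result, which this sketch omits):

```python
def crc16_ccitt(data, crc=0xFFFF):
    """Bitwise CRC-16 with polynomial 0x1021 (CRC-16/CCITT-FALSE)."""
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            if crc & 0x8000:
                crc = ((crc << 1) ^ 0x1021) & 0xFFFF
            else:
                crc = (crc << 1) & 0xFFFF
    return crc

assert crc16_ccitt(b"123456789") == 0x29B1   # standard check value for this variant
```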

PPP
Although HDLC is a general protocol that can be used for both point-to-point and
multipoint configurations, one of the most common protocols for point-to-point
access is the Point-to-Point
Protocol (PPP). Today, millions of Internet users who need to connect their
home computers to the server of an Internet service provider use PPP.

PPP provides several services:


1. PPP defines the format of the frame to be exchanged between devices.
2. PPP defines how two devices can negotiate the establishment of the link and the
exchange of data.
3. PPP defines how network layer data are encapsulated in the data link frame.
4. PPP defines how two devices can authenticate each other.
5. PPP provides multiple network layer services supporting a variety of network layer
protocols.
6. PPP provides connections over multiple links.

7. PPP provides network address configuration. This is particularly useful when a home
user needs a temporary network address to connect to the Internet.

On the other hand, to keep PPP simple, several services are missing:
1. PPP does not provide flow control. A sender can send several frames one
after another with no concern about overwhelming the receiver.

2. PPP has a very simple mechanism for error control. A CRC field is used to detect errors.
If the frame is corrupted, it is silently discarded; the upper-layer protocol needs to take care
of the problem. Lack of error control and sequence numbering may cause a packet to be
received out of order.
3. PPP does not provide a sophisticated addressing mechanism to handle frames in a
multipoint configuration.

Frame Format of PPP

Flag: A PPP frame starts and ends with a 1-byte flag with the bit pattern 01111110.
Although this pattern is the same as that used in HDLC, there is a big difference. PPP is a
byte-oriented protocol; HDLC is a bit-oriented protocol.
Address: The address field in this protocol is a constant value and set to 11111111
(broadcast address).
During negotiation (discussed later), the two parties may agree to omit this byte.

Control: This field is set to the constant value 11000000 (imitating unnumbered frames in
HDLC). PPP does not provide any flow control. Error control is also limited to error
detection. This means that this field is not needed at all, and again, the two parties can
agree, during negotiation, to omit this byte.

Protocol: The protocol field defines what is being carried in the data field: either user data
or other information. We discuss this field in detail shortly. This field is by default 2 bytes
long, but the two parties can agree to use only 1 byte.

Payload field: This field carries either the user data or other information that we will
discuss shortly. The data field is a sequence of bytes with the default of a maximum of
1500 bytes; but this can be changed during negotiation. The data field is byte stuffed if the
flag byte pattern appears in this field. Because there is no field defining the size of the data
field, padding is needed if the size is less than the maximum default value or the
maximum negotiated value.
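Because PPP is byte-oriented, it escapes any flag (0x7E) or escape (0x7D) byte in the payload by inserting 0x7D and XOR-ing the offending byte with 0x20. A simplified sketch of this byte stuffing (the negotiated escaping of other control bytes in RFC 1662 is omitted):

```python
FLAG, ESC = 0x7E, 0x7D

def ppp_stuff(payload):
    """Escape flag and escape bytes so they cannot appear in the data field."""
    out = bytearray()
    for b in payload:
        if b in (FLAG, ESC):
            out += bytes([ESC, b ^ 0x20])   # escape byte, then flip bit 5
        else:
            out.append(b)
    return bytes(out)

assert ppp_stuff(b"\x41\x7e\x42") == b"\x41\x7d\x5e\x42"
```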
FCS: The frame check sequence (FCS) is simply a 2-byte or 4-byte standard CRC.

Transition Phases
A PPP connection goes through phases which can be shown in a transition phase diagram.

Dead: In the dead phase the link is not being used. There is no active carrier (at the
physical layer) and the line is quiet.

Establish: When one of the nodes starts the communication, the connection goes into this
phase. In this phase, options are negotiated between the two parties. If the negotiation is
successful, the system goes to the authentication phase (if authentication is required) or
directly to the networking phase. The link control protocol packets, discussed shortly, are
used for this purpose. Several packets may be exchanged here.

Authenticate: The authentication phase is optional; the two nodes may decide, during the
establishment phase, not to skip this phase. However, if they decide to proceed with
authentication, they send several authentication packets, discussed later. If the result is
successful, the connection goes to the networking phase; otherwise, it goes to the
termination phase.

Network: In the network phase, negotiation for the network layer protocols takes place.
PPP specifies that two nodes establish a network layer agreement before data at the
network layer can be exchanged. The reason is that PPP supports multiple protocols at the
network layer. If a node is running multiple protocols simultaneously at the network layer,

the receiving node needs to know which protocol will receive the data.
Open: In the open phase, data transfer takes place. When a connection reaches this phase,
the exchange of data packets can be started. The connection remains in this phase until
one of the endpoints wants to terminate the connection.

Terminate: In the termination phase the connection is terminated. Several packets are
exchanged between the two ends for house cleaning and closing the link.


Random Access:

In random access or contention methods, no station is superior to another station and none
is assigned the control over another. No station permits, or does not permit, another station
to send. At each instance, a station that has data to send uses a procedure defined by the
protocol to make a decision on whether or not to send. This decision depends on the state of
the medium (idle or busy). In other words, each station can transmit when it desires on the
condition that it follows the predefined procedure, including the testing of the state of the
medium.

Two features give this method its name.

First, there is no scheduled time for a station to transmit. Transmission is random among
the stations. That is why these methods are called random access.

Second, no rules specify which station should send next. Stations compete with one another
to access the medium. That is why these methods are also called contention methods.

ALOHA

It was designed for a radio (wireless) LAN, but it can be used on any shared medium.

It is obvious that there are potential collisions in this arrangement. The medium is shared
between the stations. When a station sends data, another station may attempt to do so at the
same time. The data from the two stations collide and become garbled.

There are two different versions of ALOHA

1. Pure ALOHA

The original ALOHA protocol is called pure ALOHA. This is a simple, but elegant protocol.
The idea is that each station sends a frame whenever it has a frame to send. However, since
there is only one channel to share, there is the possibility of collision between frames from
different stations.
• In pure ALOHA, whenever any station transmits a frame, it expects the acknowledgement
from the receiver.
• If acknowledgement is not received within specified time, the station assumes that the
frame (or acknowledgement) has been destroyed.
• If the frame is destroyed because of collision the station waits for a random amount of time
and sends it again. This waiting time must be random otherwise same frames will collide
again and again.
• Therefore pure ALOHA dictates that when time-out period passes, each station must wait
for a random amount of time before resending its frame. This randomness will help avoid
more collisions.
• Figure shows an example of frame collisions in pure ALOHA.

• In the figure, four stations contend with one another for access to the shared channel. All
these stations are transmitting frames. Some of these frames collide because multiple
frames contend for the shared channel. Only two frames, frame 1.1 and frame 2.2,
survive. All other frames are destroyed.
• Whenever two frames try to occupy the channel at the same time, there will be a collision
and both will be damaged. Even if the first bit of a new frame overlaps with just the last bit
of a frame that is almost finished, both frames will be totally destroyed and both will have
to be retransmitted.
Throughput

Let G be the average number of frames generated by the system during one frame
transmission time. It can be shown that the average number of successful transmissions
for pure ALOHA is S = G x e^(-2G). The maximum throughput Smax = 1/(2e) ≈ 0.184
occurs at G = 1/2. Therefore, if a station generates only one frame in this vulnerable time
(and no other stations generate a frame during this time), the frame will reach its
destination successfully.
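The position of the maximum can be checked numerically with a plain scan over G:

```python
import math

def pure_aloha_S(G):
    """Pure ALOHA throughput S = G * e^(-2G)."""
    return G * math.exp(-2 * G)

# scan G from 0.01 to 3.00; the peak is at G = 0.5 with S = 1/(2e) ≈ 0.184
best_S, best_G = max((pure_aloha_S(g / 100), g / 100) for g in range(1, 301))
assert best_G == 0.5
assert abs(best_S - 1 / (2 * math.e)) < 1e-9
```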

Vulnerable time:

Let us find the length of time, the vulnerable time, in which there is a possibility of collision.
We assume that the stations send fixed-length frames, with each frame taking Tfr seconds
to send.

Station A sends a frame at time t. Now imagine station B has already sent a frame between
t - Tfr and t. This leads to a collision between the frames from stations A and B: the end of
B's frame collides with the beginning of A's frame. On the other hand, suppose that station
C sends a frame between t and t + Tfr. Here there is a collision between the frames from
stations A and C: the beginning of C's frame collides with the end of A's frame.

Therefore, the vulnerable time, during which a collision may occur in pure ALOHA, is two
times the frame transmission time:
Pure ALOHA vulnerable time = 2 x Tfr

Figure 2.33 Vulnerable time for pure ALOHA Protocol

2.Slotted ALOHA
• Slotted ALOHA was invented to improve the efficiency of pure ALOHA as chances of
collision in pure ALOHA are very high.
• In slotted ALOHA, the time of the shared channel is divided into discrete intervals called
slots.
• The stations can send a frame only at the beginning of the slot and only one frame is sent
in each slot.

• In slotted ALOHA, if any station is not able to place the frame onto the channel at the
beginning of the slot i.e. it misses the time slot then the station has to wait until the
beginning of the next time slot.
• In slotted ALOHA, there is still a possibility of collision if two stations try to send at the
beginning of the same time slot as shown in fig.
• Slotted ALOHA still has an edge over pure ALOHA as chances of collision are reduced to
one-half.
Throughput:

It can be shown that the average number of successful transmissions for slotted ALOHA is
S = G x e^(-G). The maximum throughput Smax is 0.368, when G = 1. Therefore, if a
station generates only one frame in this vulnerable time (and no other station generates a
frame during this time), the frame will reach its destination successfully.
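The slotted formula can be checked the same way; the peak throughput of 1/e ≈ 0.368 is exactly twice that of pure ALOHA:

```python
import math

def slotted_aloha_S(G):
    """Slotted ALOHA throughput S = G * e^(-G)."""
    return G * math.exp(-G)

assert abs(slotted_aloha_S(1.0) - 1 / math.e) < 1e-12   # Smax ≈ 0.368 at G = 1
assert slotted_aloha_S(1.0) > slotted_aloha_S(0.5)
assert slotted_aloha_S(1.0) > slotted_aloha_S(2.0)
```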

Protocol Flow Chart for ALOHA

Explanation:
• A station which has a frame ready will send it.
• Then it waits for some time.
• If it receives the acknowledgement then the transmission is successful.
• Otherwise the station uses a backoff strategy, and sends the packet again.
• If there is still no acknowledgement after many attempts, the station aborts the
transmission.

Comparison of Pure Aloha and Slotted Aloha shown on Throughput vs. Traffic Load plot.

CSMA protocol
CSMA protocol was developed to overcome the problem found in ALOHA i.e. to minimize the
chances of collision, so as to improve the performance. CSMA protocol is based on the
principle of 'carrier sense'. The station senses the carrier or channel before transmitting a
frame. It means the station checks the state of channel, whether it is idle or busy.
Even though devices attempt to sense whether the network is in use, there is a good chance
that two stations will attempt to access it at the same time. On large networks, the
transmission time between one end of the cable and another is enough that one station may
access the cable even though another has already just accessed it.
The chances of collision still exist because of propagation delay. The frame transmitted by
one station takes some time to reach other stations. In the meantime, other stations may
sense the channel to be idle and transmit their frames. This results in the collision.

The possibility of collision still exists because of propagation delay; when a station sends a
frame, it still takes time (although very short) for the first bit to reach every station and for
every station to sense it. In other words, a station may sense the medium and find it idle,
only because the first bit sent by another station has not yet been received.

Vulnerable Time

The vulnerable time for CSMA is the propagation time Tp. This is the time needed for a signal
to propagate from one end of the medium to the other. When a station sends a frame, and
any other station tries to send a frame during this time, a collision will result.

But once the first bit of the frame reaches the end of the medium, every station will already
have heard the bit and will refrain from sending. The leftmost station A sends a frame at
time t1, which reaches the rightmost station D at time t1 + Tp. The gray area shows the
vulnerable area in time and space.

There are three different types of CSMA methods:


(i) 1-persistent CSMA
(ii) Non-persistent CSMA
(iii) p-persistent CSMA

(i) 1-persistent CSMA


• In this method, station that wants to transmit data continuously senses the channel to
check whether the channel is idle or busy.
• If the channel is busy, the station waits until it becomes idle.

• When the station detects an idle channel, it immediately transmits the frame with
probability 1. Hence it is called 1-persistent CSMA.
• This method has the highest chance of collision because two or more stations may find
channel to be idle at the same time and transmit their frames.
• When a collision occurs, the stations wait a random amount of time and start all over
again.
Drawback of 1-persistent CSMA
• The propagation delay greatly affects this protocol. Suppose that just after station 1
begins its transmission, station 2 also becomes ready to send and senses the channel. If
station 1's signal has not yet reached station 2, station 2 will sense the channel as idle and
begin its transmission. This results in a collision.

Even if the propagation delay is zero, collisions will still occur. If two stations become ready
in the middle of a third station's transmission, both will wait until that transmission ends
and then begin transmitting exactly simultaneously. This will also result in a collision.
(ii) Non-persistent CSMA
• In this scheme, if a station wants to transmit a frame and finds that the channel is
busy (some other station is transmitting), it waits a random interval of time.
• After this time, it checks the status of the channel again, and if the channel is free it
transmits.
• A station that has a frame to send senses the channel.
• If the channel is idle, it sends immediately.
• If the channel is busy, it waits a random amount of time and then senses the channel
again.
• In non-persistent CSMA the station does not continuously sense the channel for the
purpose of capturing it the moment it detects the end of the previous transmission.

(iii) p-persistent CSMA
• This method is used when channel has time slots such that the time slot duration is equal
to or greater than the maximum propagation delay time.
• Whenever a station becomes ready to send, it senses the channel.
• If channel is busy, station waits until next slot.
• If channel is idle, it transmits with a probability p.
• With probability q = 1 - p, the station waits for the beginning of the next time slot.
• If the next slot is also idle, it either transmits or waits again with probabilities p and q.
• This process is repeated till either frame has been transmitted or another station has begun
transmitting.
• In case of the transmission by another station, the station acts as though a collision has
occurred and it waits a random amount of time and starts again.
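The slot-by-slot decision loop above can be sketched as follows (channel_idle is a hypothetical sensing callback; names are illustrative):

```python
import random

def p_persistent(p, channel_idle, rng):
    """One station's p-persistent procedure over successive slots."""
    slot = 0
    while True:
        if not channel_idle(slot):          # another station began transmitting
            return "back off as if a collision occurred"
        if rng.random() < p:                # transmit with probability p
            return f"transmit in slot {slot}"
        slot += 1                           # wait one slot with probability q = 1 - p

assert p_persistent(1.0, lambda s: True, random.Random(0)) == "transmit in slot 0"
assert p_persistent(0.0, lambda s: s < 2, random.Random(0)).startswith("back off")
```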

CSMA/CD :
Carrier Sense Multiple Access / Collision Detection, a set of rules determining how network devices
respond when two devices attempt to use a data channel simultaneously (called a collision). Standard
Ethernet networks use CSMA/CD to physically monitor the traffic on the line at participating stations.
 When a frame is ready, the transmitting station checks whether the channel is
idle or busy.
 If the channel is busy, the station waits until the channel becomes idle.
 If the channel is idle, the station starts transmitting and continually monitors the
channel to detect collision.
 If a collision is detected, the station starts the collision resolution algorithm.
 If no collision is detected, the station resets the retransmission counter and completes
frame transmission.
The algorithm of Collision Resolution is:
 The station continues transmission of the current frame for a specified time along
with a jam signal, to ensure that all the other stations detect collision.
 The station increments the retransmission counter.
 If the maximum number of retransmission attempts is reached, then the station
aborts transmission.
 Otherwise, the station waits for a backoff period, which is generally a function of
the number of collisions, and restarts the main algorithm.
The following flowchart summarizes the algorithms:

Explanation:
• The station that has a ready frame sets the back off parameter to zero.
• Then it senses the line using one of the persistent strategies.
• It then sends the frame. If there is no collision for a period corresponding to one complete
frame, the transmission is successful.
• Otherwise the station sends the jam signal to inform the other stations about the collision.

• The station then increments the back off time and waits for a random back off time and
sends the frame again.
• If the back off has reached its limit then the station aborts the transmission.
• CSMA/CD is used for the traditional Ethernet.
• CSMA/CD is an important protocol. IEEE 802.3 (Ethernet) is an example of CSMA/CD. It
is an international standard.
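The backoff step mentioned above is usually truncated binary exponential backoff. A sketch, with the function name illustrative and the 16-attempt and 10-doubling limits following the classic Ethernet rule:

```python
import random

def backoff_slots(collisions, rng, max_attempts=16, cap=10):
    """After the n-th collision, wait k slot times, with k drawn uniformly
    from 0 .. 2^min(n, cap) - 1; abort after max_attempts collisions."""
    if collisions > max_attempts:
        raise RuntimeError("abort transmission")
    return rng.randint(0, 2 ** min(collisions, cap) - 1)

rng = random.Random(1)
assert 0 <= backoff_slots(1, rng) <= 1       # after 1st collision: 0 or 1 slots
assert 0 <= backoff_slots(3, rng) <= 7       # after 3rd collision: 0 .. 7 slots
assert 0 <= backoff_slots(12, rng) <= 1023   # range capped at 2^10 - 1
```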

Carrier Sense Multiple Access with Collision Avoidance (CSMA/CA)

In a wired network, the received signal has almost the same energy as the sent signal
because either the length of the cable is short or there are repeaters that amplify the energy
between the sender and the receiver. This means that in a collision, the detected energy
almost doubles. However, in a wireless network, much of the sent energy is lost in
transmission. The received signal has very little energy. Therefore, a collision may add only 5
to 10 percent additional energy. This is not useful for effective collision detection. We need to
avoid collisions on wireless networks because they cannot be detected. Carrier sense multiple
access with collision avoidance (CSMA/CA) was invented for such networks. Collisions are
avoided through the use of CSMA/CA's three strategies: the interframe space, the contention
window, and acknowledgments.
(i) Interframe space
(ii) Contention window
(iii) Acknowledgements

Interframe Space (IFS)

First, collisions are avoided by deferring transmission even if the channel is found idle.
When an idle channel is found, the station does not send immediately. It waits for a period of
time called the interframe space or IFS. Even though the channel may appear idle when it is
sensed, a distant station may have already started transmitting. The distant station's signal
has not yet reached this station.

Contention Window

The contention window is an amount of time divided into slots. A station that is ready to
send chooses a random number of slots as its wait time. The number of slots in the window
changes according to the binary exponential back-off strategy. This means that it is set to
one slot the first time and then doubles each time the station cannot detect an idle channel
after the IFS time.

Acknowledgment

With all these precautions, there still may be a collision resulting in destroyed data. In
addition, the data may be corrupted during the transmission. The positive acknowledgment
and the time-out timer can help guarantee that the receiver has received the frame.

CSMA/CA Procedure:
Fig. Shows the flow chart explaining the principle of CSMA/CA.

• This is the CSMA protocol with collision avoidance.


• The station that is ready to transmit senses the line using one of the persistent strategies.
• As soon as it finds the line idle, the station waits for an IFS (interframe space) amount of
time.
• It then waits for a random (contention window) time and sends the frame.
• After sending the frame, it sets a timer and waits for the acknowledgement from the
receiver.
• If the acknowledgement is received before expiry of the timer, then the transmission is
successful.
• But if the transmitting station does not receive the expected acknowledgement before the
timer expires, it increments the backoff parameter, waits for the backoff time, and senses
the line again.

Controlled Access
In controlled access, the stations consult one another to find which station has the right to
send. A station cannot send unless it has been authorized by other stations. We discuss
three popular controlled-access methods.

1. Reservation

In the reservation method, a station needs to make a reservation before sending data. Time is
divided into intervals. In each interval, a reservation frame precedes the data frames sent in
that interval.

If there are N stations in the system, there are exactly N reservation minislots in the
reservation frame. Each minislot belongs to a station. When a station needs to send a data
frame, it makes a reservation in its own minislot. The stations that have made reservations
can send their data frames after the reservation frame.

In the first interval, only stations 1, 3, and 4 have made reservations. In the second interval,
only station 1 has made a reservation.

2. Polling

Polling works with topologies in which one device is designated as a primary station and the
other devices are secondary stations. All data exchanges must be made through the primary
device even when the ultimate destination is a secondary device. The primary device controls
the link; the secondary devices follow its instructions. It is up to the primary device to
determine which device is allowed to use the channel at a given time. The primary device,
therefore, is always the initiator of a session.

Polling works with topologies having Primary Station and Secondary Stations.

 The Primary Station Controls the link whereas the secondary station
follows its instructions.
 The exchange of data occurs only and only through the primary device
even when the final destination of transmission is secondary.
 Whenever primary Station has something to send, it sends the message
to each node.
 Before sending the data, it creates and transmits a Select (SEL) Frame.
One field of SEL Frame includes the address of the intended secondary station.
 While sending, the primary station should know whether the target
device is ready to receive or not.
 Hence, it alerts the secondary station to the upcoming transmission and waits
for an acknowledgement (ACK) of the secondary station's status.
 Poll Function: When the primary is ready to receive data, it must ask
(poll) each device if it has anything to send.
 If the secondary has data to transmit, it sends the data frame. Otherwise, it
sends a negative acknowledgement (NAK).
 The primary station then polls the next secondary. When the response
is positive (a data frame), the primary station reads the frame and returns
an acknowledgement (ACK).
 There are two possibilities to terminate the transmission:
(a) either the secondary sends all its data, finishing with an EOT frame,
(b) or the primary station signals that its timer has expired.

3. Token Passing

In the token-passing method, the stations in a network are organized in a logical ring. In
other words, for each station, there is a predecessor and a successor. The predecessor is the
station which is logically before the station in the ring; the successor is the station which is
after the station in the ring. The current station is the one that is accessing the channel now.
The right to this access has been passed from the predecessor to the current station. The
right will be passed to the successor when the current station has no more data to send.

Flow chart diagram for token passing

CHANNELIZATION
 Channelization is a multiple-access method in which the available bandwidth of a
link is shared in time, frequency, or through code, between different stations.
 Used for wireless communications.
 Methods:
Frequency-Division Multiple Access (FDMA)
Time-Division Multiple Access (TDMA)
Code-Division Multiple Access (CDMA)

The Frequency-Division Multiple Access (FDMA):

In frequency-division multiple access (FDMA), the available bandwidth is divided into


frequency bands. Each station is allocated a band to send its data. In other words, each band
is reserved for a specific station, and it belongs to the station all the time. Each station also
uses a bandpass filter to confine the transmitter frequencies. To prevent interference
between stations, the allocated bands are separated from one another by small guard bands. The
following figure shows the idea of FDMA.

Time-Division Multiple Access (TDMA):
In time-division multiple access (TDMA), the stations share the bandwidth of the channel in
time. Each station is allocated a time slot during which it can send data, and each station
transmits its data in its assigned time slot. The following figure shows the idea behind TDMA.

The main problem with TDMA lies in achieving synchronization between the different
stations. Each station needs to know the beginning of its slot and the location of its slot. This
may be difficult because of propagation delays introduced in the system if the stations are
spread over a large area. To compensate for the delays, we can insert guard times.
Synchronization is normally accomplished by having some synchronization bits at the
beginning of each slot.

Code-Division Multiple Access (CDMA):


CDMA simply means communication with different codes. CDMA differs from FDMA because
only one channel occupies the entire bandwidth of the link. It differs from TDMA because all
stations can send data simultaneously; there is no timesharing.

Implementation:

Let us assume we have four stations 1, 2, 3, and 4 connected to the same channel. The data
from station 1 are d1 , from station 2 are d2, and so on. The code assigned to the first station
is c1, to the second is c2, and so on. We assume that the assigned codes have two properties.
1. If we multiply each code by another, we get 0.
2. If we multiply each code by itself, we get 4 (the number of stations).
With these two properties in mind, let us see how the above four stations can send data
using the same common channel, as shown in the following figure.

Station 1 multiplies (a special kind of multiplication, as we will see) its data by its code to get
d1.c1. Station 2 multiplies its data by its code to get d2.c2. And so on. The data that go on
the channel are the sum of all these terms, as shown in the box.

Any station that wants to receive data from one of the other three multiplies the data on
the channel by the code of the sender. For example, suppose stations 1 and 2 are talking to
each other. Station 2 wants to hear what station 1 is saying, so it multiplies the data on the
channel by c1, the code of station 1.
Because c1 . c1 is 4, while c2 . c1, c3 . c1, and c4 . c1 are all 0, station 2 divides the result
by 4 to get the data from station 1.
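The whole exchange can be verified with four orthogonal (Walsh) chip sequences; the data values are illustrative, with 0 standing for a silent station:

```python
# four orthogonal chip sequences (Walsh codes); each code times itself sums
# to 4, and any two distinct codes multiply element-wise to a sum of 0
c = [[+1, +1, +1, +1],
     [+1, -1, +1, -1],
     [+1, +1, -1, -1],
     [+1, -1, -1, +1]]
d = [+1, -1, 0, +1]    # data bits per station; station 3 is silent

# every station multiplies its bit by its code; the channel carries the sum
channel = [sum(d[i] * c[i][j] for i in range(4)) for j in range(4)]

# to recover station 1's bit, multiply the channel by c1 and divide by 4
d1 = sum(ch * code for ch, code in zip(channel, c[0])) // 4
assert d1 == d[0]

# the same works for any station, e.g. station 2
d2 = sum(ch * code for ch, code in zip(channel, c[1])) // 4
assert d2 == d[1]
```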

