Course Outline
Background (analog telephony, TDM, PDH)
SONET/SDH history and motivation
Architecture (path, line, section)
Rates and frame structure
Payloads and mappings
Protection and rings
VCAT and LCAS
Handling packet data
Background
Telephony Multiplexing
1900: 25% of telephony revenues went to copper mines
[Diagram: FDM — voice channels stacked at 4 kHz spacing: 4, 8, 12, 16, 20 kHz]
Digital communications is always better than analog communications,
and so the PSTN became digital
Better means:
More efficient use of resources (e.g. more channels on trunks)
Higher voice quality (less noise, less distortion)
Added features
After the invention of the transistor, the T-carrier system (TDM) was introduced in 1963
Y(J)S SONET Slide 6
[Diagram: subscriber lines (local loops) connecting to class 5 switches across the PSTN network; a processor multiplexes channels 1-7 by TDM]
TDM timing
Time Division Multiplexing relies on all channels (timeslots)
having precisely the same timing (frequency and phase)
In order to enforce this
the TDM device itself frequently performs the digitization
[Diagram: the TDM device digitizes incoming analog signals into digital timeslots; numerical example of component signals]
The fix
We must ensure that all the clocks have the same frequency
Every telephony network has an accurate clock called
a stratum 1 or Primary Reference Clock
All other clocks are directly or indirectly locked to it (master slave)
A TDM receiving device can lock onto the source clock
based on the incoming data (FLL, PLL)
For this to work, we must ensure that the data has enough transitions
(special line coding, scrambling bits, etc.)
[Diagram: line coding — a bit stream of alternating 1s and 0s has transitions; a long run of identical bits has no transitions]
Comparing clocks
A clock is said to be isochronous (isos=equal, chronos=time)
if its ticks are equally spaced in time
2 clocks are said to be synchronous (syn=same chronos=time)
if they tick in time, i.e. have precisely the same frequency
2 clocks are said to be plesiochronous (plesio=near, chronos=time)
if they are nominally of the same frequency
but are not locked
PDH principle
If we want yet higher rates, we can mux together TDM signals (tributaries)
We could demux the TDM timeslots and directly remux them
but that is too complex
The TDM inputs are already digital, so we must either
insist that the mux provide the clock to all tributaries
(not always possible, they may already be locked to another network)
OR
somehow transport tributary with its own clock
across a higher speed network with a different clock
(without spoiling remote clock recovery)
PDH hierarchies

level  N.A.                   CEPT                   Japan
0      64 kbps                64 kbps                64 kbps
1      T1  1.544 Mbps (×24)   E1  2.048 Mbps (×30)   J1  1.544 Mbps (×24)
2      T2  6.312 Mbps (×4)    E2  8.448 Mbps (×4)    J2  6.312 Mbps (×4)
3      T3  44.736 Mbps (×7)   E3  34.368 Mbps (×4)   J3  32.064 Mbps (×5)
4      T4  274.176 Mbps (×6)  E4  139.264 Mbps (×4)  J4  97.728 Mbps (×3)
PDH overhead

digital  data rate  voice     overhead
signal   (Mbps)     channels  percentage
T1       1.544      24        0.52 %
T2       6.312      96        2.66 %
T3       44.736     672       3.86 %
T4       274.176    4032      5.88 %
E1       2.048      30        6.25 %
E2       8.448      120       9.09 %
E3       34.368     480       10.61 %
E4       139.264    1920      11.76 %
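The overhead percentages above follow directly from the line rate and the number of 64 kbps voice channels; a quick sketch (Python, illustrative) reproduces the table:

```python
# Overhead percentage of a PDH signal: the fraction of the line rate not
# carrying 64 kbps voice channels.
def overhead_pct(rate_mbps, voice_channels):
    payload = voice_channels * 0.064   # each voice channel is 64 kbps
    return 100 * (rate_mbps - payload) / rate_mbps

for name, rate, ch in [("T1", 1.544, 24), ("T3", 44.736, 672),
                       ("E1", 2.048, 30), ("E4", 139.264, 1920)]:
    print(f"{name}: {overhead_pct(rate, ch):.2f} %")
```

Note how the overhead fraction grows with the rate, one of the PDH limitations cited later.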
OAM
analog channels and 64 kbps digital channels
do not have mechanisms to check signal validity and quality
thus
major faults could go undetected for long periods of time
hard to characterize and localize faults when reported
minor defects might be unnoticed indefinitely
Solution is to add mechanisms based on overhead
as PDH networks evolved, more and more overhead was dedicated to
Operations, Administration and Maintenance (OAM) functions
including:
monitoring for valid signal
defect reporting
alarm indication/inhibition (AIS)
PDH Justification
In addition to FAS, PDH overhead includes
justification control (C-bits) and justification opportunity stuffing (R-bits)
Assume the tributary bitrate is nominally B, with some tolerance ΔB
Positive justification
payload capacity is provisioned for the highest bitrate B+ΔB
if the tributary rate is actually at the maximum bitrate
then all payload and R bits are filled
if the tributary rate is lower than the maximum
then sometimes there are not enough incoming bits
so the R-bits are left unfilled and the C-bits indicate this
Negative justification
payload capacity is provisioned for the lowest bitrate B−ΔB
if the tributary rate is actually the minimum bitrate
then the payload space suffices
if the tributary rate is higher than the minimum
then sometimes there are not enough positions to accommodate all bits
so R-bits in the overhead are used and the C-bits indicate this
Positive/Negative justification
payload is expected at nominal bitrate B
positive or negative justification is applied as required
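A toy model of positive justification (Python; the frame sizes are made up for illustration): each outgoing frame has `capacity` guaranteed payload bits plus one justification-opportunity (R) bit, and the C-bits flag the frames in which the R bit is stuffing:

```python
# Toy model of positive justification: the tributary delivers a fixed number
# of bits per outgoing frame; whenever the buffer cannot also fill the
# R (justification opportunity) bit, that bit is stuffed and the C-bits
# signal it so the far end can discard it.
def positive_justification(trib_bits_per_frame, capacity, frames):
    buffered = 0.0
    r_bit_stuffed = []
    for _ in range(frames):
        buffered += trib_bits_per_frame        # bits arriving from tributary
        if buffered >= capacity + 1:           # enough to fill the R bit too
            buffered -= capacity + 1
            r_bit_stuffed.append(False)
        else:                                  # R bit carries no data
            buffered -= capacity
            r_bit_stuffed.append(True)
    return r_bit_stuffed

# A tributary running half a bit per frame below maximum needs stuffing in
# every second frame:
print(positive_justification(287.5, 287, 8))
```

A tributary running exactly at the maximum bitrate never stuffs, matching the slide's first case.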
SONET/SDH
motivation and history
First step
With the divestiture of the US Bell system a new need arose
MCI and NYNEX couldn't directly interconnect optical trunks
The Interexchange Carrier Compatibility Forum requested that T1 solve the problem
A multivendor / multioperator fiber-optic communications standard was needed
Three main tasks:
Optical interfaces (wavelengths, power levels, etc)
proposal submitted to T1X1 (Aug 1984)
T1.106 standard on single mode optical interfaces (1988)
Operations (OAM) system
proposal submitted to T1M1
T1.119 standard
Rates, formats, definition of network elements
Bellcore (Yau-Chau Ching and Rodney Boehm) proposal (Feb 1985)
proposed to T1X1
term SONET was coined
T1.105 standard (1988)
PDH limitations
Rate limitations
Copper interfaces defined
Need to mux/demux hierarchy of levels (hard to pull out a single timeslot)
Overhead percentage increases with rate
At least three different systems (Europe, NA, Japan)
E 2.048, 8.448, 34.368, 139.264
T 1.544, 3.152, 6.312, 44.736, 91.053, 274.176
J 1.544, 3.152, 6.312, 32.064, 97.728, 397.2
So a completely new mechanism was needed
Standardization !
The original Bellcore proposal:
hierarchy of signals, all multiple of basic rate (50.688)
basic rate about 50 Mbps to carry DS3 payload
bit-oriented mux
mechanisms to carry DS1, DS2, DS3
Many other proposals were merged into 1987 draft document (rate 49.920)
In the summer of 1986 CCITT expressed interest in cooperation
needed a rate of about 150 Mbps to carry E4
wanted byte oriented mux
Initial compromise attempt
byte mux
US wanted 13 rows * 180 columns
CEPT wanted 9 rows * 270 columns
Compromise!
US would use basic rate of 51.84 Mbps, 9 rows * 90 columns
CEPT would use three times that rate - 155.52 Mbps, 9 rows * 270 columns
SONET/SDH
architecture
Layers
SONET was designed with definite layering concepts
Physical layer: optical fiber (linear or ring)
when the fiber reach is exceeded, regenerators are used
regenerators are not mere amplifiers; regenerators use their own overhead
fiber between regenerators is called a section (regenerator section)
Line layer: link between SONET muxes (Add/Drop Multiplexers)
Path layer: input and output at this level are Virtual Tributaries (VCs)
actually 2 layers: higher order VC and lower order VC (for low bitrate payloads)
path payloads include PDH, ATM, and packet data
SONET architecture
[Diagram: path / line / section layering — a path spans end-to-end between path terminations; a line spans between ADMs; a section spans between regenerators. SDH terms: line = multiplex section, section = regenerator section]
electrical  optical  rate
STS-1       OC-1     51.84 Mbps
STS-3       OC-3     155.52 Mbps  (×3)
STS-12      OC-12    622.080 Mbps (×4)
STS-48      OC-48    2488.32 Mbps (×4)
STS-192     OC-192   9953.28 Mbps (×4)
Rates and frame structure
[Diagram: frames of 9 rows (90 columns for STS-1, 270 columns for STM-1) with framing bytes in the first columns]
SONET/SDH rates

SONET    SDH     columns  rate
STS-1    -       90       51.84 Mbps
STS-3    STM-1   270      155.52 Mbps
STS-12   STM-4   1080     622.080 Mbps
STS-48   STM-16  4320     2488.32 Mbps
STS-192  STM-64  17280    9953.28 Mbps
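These rates follow directly from the frame geometry: 9 rows × C columns of bytes, sent 8000 times per second (once every 125 µs). A quick check in Python:

```python
# SONET/SDH line rate from frame geometry: 9 rows x C columns of bytes,
# 8 bits per byte, 8000 frames per second.
def line_rate_mbps(columns, rows=9, frames_per_second=8000):
    return rows * columns * 8 * frames_per_second / 1e6

for name, cols in [("STS-1", 90), ("STS-3/STM-1", 270),
                   ("STS-48/STM-16", 4320), ("STS-192/STM-64", 17280)]:
    print(f"{name}: {line_rate_mbps(cols):.2f} Mbps")
```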
SONET/SDH tributaries

SONET    SDH     T1    T3   E1    E3   E4
STS-1    -       28    1    21    1    -
STS-3    STM-1   84    3    63    3    1
STS-12   STM-4   336   12   252   12   4
STS-48   STM-16  1344  48   1008  48   16
STS-192  STM-64  5376  192  4032  192  64
[Diagram: a SONET STS-N frame has 9 rows × 90·N columns; the first 3·N columns are Transport Overhead (TOH), 3 rows of section overhead plus 6 rows of line overhead. An SDH STM-N frame has 9 rows × 270·N columns; the first 9·N columns are Section Overhead (SOH), split into RSOH and MSOH]
Byte-interleaving: an STS-N is formed by interleaving the constituent STS-1s byte by byte
Scrambling: a self-synchronous scrambler with transfer function 1 + x^-43
Yn = Xn + Yn-43 (addition mod 2)
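The x^43 + 1 self-synchronous scrambler is easy to sketch in Python: each transmitted bit is the data bit XORed with the transmitted bit 43 positions earlier, and the descrambler inverts this using only received bits (the 43-bit state is assumed zero-initialized here, for illustration):

```python
# Self-synchronous scrambler y_n = x_n XOR y_{n-43} and its inverse.
# The descrambler uses only received bits, so it self-synchronizes
# after 43 bits regardless of initial state.
def scramble(bits):
    out = []
    for n, x in enumerate(bits):
        y = x ^ (out[n - 43] if n >= 43 else 0)  # state assumed zero-filled
        out.append(y)
    return out

def descramble(bits):
    return [y ^ (bits[n - 43] if n >= 43 else 0) for n, y in enumerate(bits)]

data = [1, 0] * 50                 # 100 bits
assert descramble(scramble(data)) == data
```

Scrambling the payload keeps long constant runs unlikely, preserving the transitions that downstream clock recovery needs.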
STS-1 Overhead

section overhead (rows 1-3):  A1  A2  J0
                              B1  E1  F1
                              D1  D2  D3
line overhead (rows 4-9):     H1  H2  H3
                              B2  K1  K2
                              D4  D5  D6
                              D7  D8  D9
                              D10 D11 D12
                              S1  M0  E2
STM-1 Overhead

RSOH (rows 1-3):
A1  A1  A1  A2  A2  A2  J0  res res
B1  m   m   E1  m   F1  res res res
D1  m   m   D2  m   D3  res res res
row 4: AU pointers
MSOH (rows 5-9):
B2  B2  B2  K1  K2
D4  D5  D6
D7  D8  D9
D10 D11 D12
S1  M1  E2
m = media dependent (defined for SONET radio); res = reserved, some bytes reserved for national use
Pointers
The 4 MSBs of H1-H2 are the New Data Flag, the 10 LSBs are the actual offset value (0-782)
When the offset = 522 the STS-1 SPE is contained in a single STS-1 frame
In all other cases the SPE straddles two frames
When the offset is a multiple of 87, the SPE is rectangular
SONET Justification
If the tributary rate is above nominal, negative justification is needed
while fewer than 8 extra bits have accumulated in the buffer
the pointer is unchanged (NDF remains 0110)
when 8 extra bits (one byte) accumulate
the pointer's D-bits are inverted for one frame
the extra byte is placed into H3
the offset is decremented by 1 (byte)
If the tributary rate is below nominal, positive justification is needed
when 8 bits are missing
the pointer's I-bits are inverted for one frame
the byte after H3 is a stuff byte
the offset is incremented by 1 (byte)
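A sketch of when pointer adjustments occur (Python; integer arithmetic in tenths of a byte per frame, with illustrative numbers): the mux tracks the slack between the tributary's actual rate and the nominal SPE rate of 783 bytes per frame, and every full byte of slack triggers one justification:

```python
# Track slack (in tenths of a byte) between the tributary rate and the
# nominal SPE rate of 783 bytes per 125 us frame; each accumulated byte
# of slack triggers one pointer adjustment.
def pointer_adjustments(trib_tenths_per_frame, frames, nominal_tenths=7830):
    slack, events = 0, []
    for f in range(frames):
        slack += trib_tenths_per_frame - nominal_tenths
        if slack >= 10:           # a byte too many: negative justification
            slack -= 10
            events.append((f, "decrement, extra byte carried in H3"))
        elif slack <= -10:        # a byte missing: positive justification
            slack += 10
            events.append((f, "increment, stuff byte after H3"))
    return events

# A tributary 0.1 byte/frame fast produces one decrement every 10 frames:
print(len(pointer_adjustments(7831, 50)))
```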
Payloads
and
Mappings
We saw that the pointer in the line overhead points to the STS path overhead (POH)
(after re-arranging) the POH is one column of 9 rows (9 bytes every 125 µs = 576 kbps)
[Diagram: STS-1 SPE (higher order path), 87 columns: column 1 is the POH, columns 30 and 59 are fixed stuff]
POH bytes
J1 path trace
enables the receiver to be sure that the path connection is still OK
B3 path BIP-8 (computed over the previous payload)
C2 path signal label
identifies the payload type, for example:
00 unequipped
01 nonspecific
02 LOP (TUG structured)
04 E3/T3
12 E4
13 ATM
16 PoS (RFC 2615, with scrambling)
18 LAPS X.85
1A 10G Ethernet
1B GFP
CF PoS (RFC 1619, without scrambling)
[Diagram: lower order path (LOP): the STS-1 SPE payload divided into 7 byte-interleaved VT groups (VTGs) 1-7, with fixed stuff at columns 30 and 59]
SONET VT  SDH VC  columns  rate (Mbps)  payload       per VTG
VT1.5     VC-11   3        1.728        DS1 (1.544)   4
VT2       VC-12   4        2.304        E1 (2.048)    3
VT3       -       6        3.456        DS1C (3.152)  2
VT6       VC-2    12       6.912        DS2 (6.312)   1

higher order:
STS-1     VC-3    -        48.384       E3 (34.368)
STS-1     VC-3    -        48.384       DS3 (44.736)
STS-3c    VC-4    -        149.760      E4 (139.264)
LO Path overhead
The LOP OH is responsible for timing, performance monitoring, REI, etc.
LO Path APS signaling is in the 4 MSBs of byte K4
[Diagram: the 500 µs VT/VC multiframe. H4 = XXXXXX00/01/10/11 counts the four 125 µs frames, which carry the pointer bytes V1, V2, V3, V4 respectively; the pointed-to container carries one overhead byte per frame: V5, J2, N2, K4. Payload per frame: VC-11 25B (27B including overhead), VC-12 34B (36B including overhead)]
Payload capacity
VT1.5/VC-11 has 3 columns = 27 bytes = 1.728 Mbps
but 2 bytes are used for overhead (V1/V2/V3/V4 and V5/J2/N2/K4)
so actually only 25 bytes = 1.6 Mbps are available
Similarly
VT2/VC-12 has 4 columns = 36 bytes = 2.304 Mbps
but 2 bytes are used for overhead
So actually only 34 bytes = 2.176 Mbps are available
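The arithmetic generalizes: a VT/VC of C columns carries C × 9 bytes every 125 µs, two of which are overhead (one of V1..V4 and one of V5/J2/N2/K4). In Python:

```python
# Gross and usable rate of a VT/VC: columns x 9 rows bytes per 125 us
# frame, minus 2 overhead bytes per frame.
def vt_rates_mbps(columns, overhead_bytes=2):
    total_bytes = columns * 9
    gross = total_bytes * 8 * 8000 / 1e6
    usable = (total_bytes - overhead_bytes) * 8 * 8000 / 1e6
    return gross, usable

print(vt_rates_mbps(3))   # VT1.5/VC-11: (1.728, 1.6)
print(vt_rates_mbps(4))   # VT2/VC-12:  (2.304, 2.176)
```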
LOP overhead
V5 consists of
BIP (2b)
REI (1b)
RFI (1b)
Signal label (3b) (uneq, async, bit-sync, byte-sync, test, AIS)
RDI (1b)
J2 is path trace
N2 is the network operator byte
may be used for LOP tandem connection monitoring (LO-TCM)
K4 is for LO VCAT and LO APS
SDH Containers
Tributary payloads are not placed directly into SDH
Payloads are placed (adapted) into containers
The containers are made into virtual containers (by adding POH)
Next a pointer is added: the pointer + VC is a TU or AU
Tributary Unit adapts a lower order VC to high order VC
Administrative Unit adapts higher order VC to SDH
TUs and AUs are grouped together until they are big enough
We finally get an Administrative Unit Group
To the AUG we add SOH to make the STM frame
Formally
C-n n = 11, 12, 2, 3, 4
VC-n = POH + C-n
TU-n = pointer + VC-n (n=11, 12, 2, 3)
AU-n = pointer + VC-n (n=3,4)
TUG = N * TU-n
AUG = N * AU-n
STM-N = SOH + AUG
Multiplexing
An AUG may contain a VC-4 with an E4
or it may contain 3 AU-3s each with a VC-3s with an E3
In the latter case, the AU pointer points to the AUG
and inside the AUG are 3 pointers to the AU-3s
[Diagram: the higher order POH column (J1, B3, C2, G1, F2, H4, F3, K3, N1) and the three interleaved AU-3 pointer sets (H1 H1 H1 H2 H2 H2 H3 H3 H3)]
More multiplexing
Similarly, we can hierarchically build complex structures
Lower rate STMs can be combined into higher rate STMs
AUGs can be combined into STMs
AUs can be combined into AUGs
TUGs can be combined into high order VCs
Lower rate TUs can be combined into TUGs
etc.
But only certain combinations are allowed by standards
[Diagram: SDH multiplexing structure]
STM-N = ×N AUG
AUG = 1 AU-4 or ×3 AU-3
AU-4 = pointer + VC-4; VC-4 = POH + C-4 (E4 139.264M, ATM 149.760M) or ×3 TUG-3
TUG-3 = 1 TU-3 (VC-3) or ×7 TUG-2
AU-3 = pointer + VC-3; VC-3 = POH + C-3 (E3 34.368M, T3 44.736M, ATM 48.384M) or ×7 TUG-2
TUG-2 = 1 TU-2 (VC-2, C-2: T2 6.312M, ATM 6.784M) or ×3 TU-12 (VC-12, C-12: E1 2.048M, ATM 2.144M) or ×4 TU-11 (VC-11, C-11: T1 1.544M, ATM 1.6M)

[Diagram: SONET multiplexing structure (with pointer processing between levels)]
STS-3 SPE = STS-3c (E4 139.264M, ATM 149.760M) or ×3 STS-1
STS-1 SPE = E3 34.368M, T3 44.736M, ATM 48.384M, or ×7 VTG
VTG = 1 VT6 SPE (T2 6.312M, ATM 6.784M) or ×2 VT3 SPE or ×3 VT2 SPE (E1 2.048M, ATM 2.144M) or ×4 VT1.5 SPE (T1 1.544M, ATM 1.6M)

A contiguously concatenated STM-64 payload spans 64 × (270 − 9) = 16704 columns, with a single J1/POH column
Protection
and
Rings
What is protection ?
SONET/SDH need to be highly reliable (five nines)
Down-time should be minimal (less than 50 msec)
So systems must repair themselves (no time for manual intervention)
Upon detection of a failure (dLOS, dLOF, high BER)
the network must reroute traffic (protection switching)
from working channel to protection channel
The Network Element that detects the failure (tail-end NE)
initiates the protection switching
The head-end NE must change forwarding or send duplicate traffic
Protection switching is unidirectional
Protection switching may be revertive (automatically revert to working channel)
[Diagram: 1+1 protection: the head-end NE permanently bridges traffic onto working and protection channels; the tail-end NE selects]
[Diagram: 1:1 protection: head-end and tail-end bridges, coordinated over a signaling channel]
Protection may be at any layer (but only OC-n level protection guards against fiber cuts)
[Diagram: 1:1 protection: the protection channel can carry extra (preemptible) traffic]
[Diagram: 1:N protection: N working channels share a single protection channel]
[Diagram: ring protection: traffic between nodes A, B and C rerouted around the ring after a failure]
Ring types
UPSR: Unidirectional Path-Switched Ring (two-fiber)
BLSR: Bidirectional Line-Switched Ring (two-fiber or four-fiber)
UPSR
Working channel is in one direction
protection channel in the opposite direction
All traffic is added in both directions
decision as to which to use at drop point (no signaling)
Normally non-revertive, so effective two diversity paths
Good match for access networks
a single resilient access ring
is less expensive than a fiber pair per customer
Inefficient for core networks
no spatial reuse
every signal in every span
in both directions
node needs to continuously monitor
every tributary to be dropped
BLSR
Switch at line level less monitoring
When failure detected tail-end NE signals head-end NE
Works for unidirectional/bidirectional fiber cuts, and NE failures
Two-fiber version
half of OC-N capacity devoted to protection
only half capacity available for traffic
Four-fiber version
full redundant OC-N devoted to protection
twice as many NEs as compared to two-fiber
Example: recovery from a unidirectional fiber cut
VCAT
and
LCAS
Concatenation
Payloads that don't fit into standard VT/VC sizes can be accommodated
by concatenating several VTs / VCs
For example, 10 Mbps doesn't fit into any VT or VC
so without concatenation we need to put it into an STS-1 (48.384 Mbps)
and the remaining 38.384 Mbps can not be used
We would like to be able to divide the 10 Mbps among
7 VT1.5/VC-11s = 7 * 1.600 = 11.20 Mbps or
5 VT2/VC-12s = 5 * 2.176 = 10.88 Mbps
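Sizing a concatenated group is then just a ceiling division. A small sketch (member payload rates as in the slides):

```python
import math

# Number of VCAT members (X) needed for a client rate, and the efficiency
# of the resulting group; member payload rates taken from the slides.
MEMBER_MBPS = {"VT1.5/VC-11": 1.600, "VT2/VC-12": 2.176}

def vcat_group(client_mbps, member):
    rate = MEMBER_MBPS[member]
    x = math.ceil(client_mbps / rate)
    capacity = x * rate
    return x, capacity, client_mbps / capacity

x, cap, eff = vcat_group(10.0, "VT2/VC-12")
print(f"{x} members, {cap:.2f} Mbps, {eff:.0%} efficient")
```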
Concatenation (cont.)
There are 2 ways to concatenate X VTs or VCs:
[Diagram: contiguous concatenation: an STS-3 and an STS-3c are both 9 rows × 270 columns with 9 columns of section and line overhead, but the STS-3 has 3 columns of path overhead (one per STS-1) while the STS-3c has a single column of path overhead and a payload of 260 columns × 0.576 Mbps = 149.760 Mbps]
Virtual Concatenation
Virtual concatenation groups X individual VTs/VCs; the multiframe is indicated in the H4 byte

SDH       capacity (Mbps)              in VC-3            in VC-4
VC-11-Xv  1.600X (1.600, 3.200, ...)   X ≤ 28, ≤ 44.800   X ≤ 64, ≤ 102.400
VC-12-Xv  2.176X (2.176, 4.352, ...)   X ≤ 21, ≤ 45.696   X ≤ 63, ≤ 137.088
VC-2-Xv   6.784X (6.784, 13.568, ...)  X ≤ 7, ≤ 47.488    X ≤ 21, ≤ 142.464

SONET     capacity (Mbps)  in STS-1           in STS-3c
VT1.5-Xv  1.600X           X ≤ 28, ≤ 44.800   X ≤ 64, ≤ 102.400
VT2-Xv    2.176X           X ≤ 21, ≤ 45.696   X ≤ 63, ≤ 137.088
VT3-Xv    3.328X           X ≤ 14, ≤ 46.592   X ≤ 42, ≤ 139.776
VT6-Xv    6.784X           X ≤ 7, ≤ 47.488    X ≤ 21, ≤ 142.464

So we have many permissible rates
1.600, 2.176, 3.200, 3.328, 4.352, 4.800, 6.400, 6.528, 6.656, 6.784, ...
Efficiency comparison

rate (Mbps)  w/o VCAT            efficiency  with VCAT            efficiency
10           STS-1               21%         VT2-5v / VC-12-5v    92%
100          STS-3c / VC-4       67%         STS-1-2v / VC-3-2v   100%
1000         STS-48c / VC-4-16c  42%         STS-3c-7v / VC-4-7v  95%
PDH VCAT
[Diagram: PDH VCAT: the VCAT overhead octet is carried in the overhead (TS0) of successive frames of each member E1, e.g. the 1st frame of each of 4 virtually concatenated E1s]
Delay compensation
802.3ad Ethernet link aggregation cheats:
each identifiable flow is restricted to one link
this doesn't help a single high-BW flow
VCAT is completely general
it works even with a single flow
VCG members may travel over completely separate paths
so the VCAT mechanism must compensate for differential delay
The requirement is over half a second of compensation
We must compensate to the bit level
but since frames have a Frame Alignment Signal
the VCAT mechanism only needs to identify individual frames
VCAT buffering
For HOP SDH VCAT and PDH VCAT (H4 byte or PDH VCAT overhead)
the basic multiframe is 16 frames
and we need 256 multiframes in a superframe (256 * 16 = 4096 frames)
so the superframe lasts 4096 * 0.125 ms = 512 ms
The MultiFrame Indicator is divided into two parts (MFI1 and MFI2)
[Diagram: H4 format over the 16-frame multiframe: MFI1 plus the LCAS control fields CTRL, GID, MST bits and RS-ACK; some fields reserved]
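The superframe arithmetic is worth spelling out (Python, straight from the multiframe numbers above):

```python
# VCAT multiframe arithmetic: 16 frames per basic multiframe, 256
# multiframes per superframe, one frame every 125 us.
frames_per_multiframe = 16
multiframes_per_superframe = 256
frame_period_us = 125

frames = frames_per_multiframe * multiframes_per_superframe
superframe_ms = frames * frame_period_us / 1000

print(frames, "frames,", superframe_ms, "ms")   # 4096 frames, 512.0 ms
```

The 512 ms superframe is what bounds the differential delay that the buffering can compensate.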
Handling
Packet
Data
PoS architecture
IP
PPP
HDLC
SONET/SDH
PoS Details
IP packet is encapsulated in PPP
default MTU is 1500 bytes
up to 64,000 bytes allowed if negotiated by PPP
FCS is generated and appended
PPP in HDLC framing with byte stuffing
the x^43 + 1 self-synchronous scrambler is run over the SPE
byte stream is placed octet-aligned in SPE
(e.g. 149.760 Mbps of STM-1)
HDLC frames may cross SPE boundaries
PoS problems
PoS is BW efficient
but PoS has its disadvantages
BW must be predetermined
LAPS
In 2001 the ITU-T introduced the LAPS protocols (X.85 for IP, X.86 for Ethernet) for transporting packets over SDH
GFP architecture
A new approach, not based on HDLC
Defined in ITU-T G.7041 (also numbered Y.1303)
originally developed in T1X1 to fix ATM limitations
(like ATM) uses HEC protected frames instead of HDLC
[Diagram: GFP protocol stack: clients (Ethernet, IP, HDLC, other) over GFP over SONET/SDH, OTN, or other transport]
[Diagram: GFP frame structure]
core header:
PLI (2B): payload length indicator
cHEC (2B): CRC-16 over the core header, polynomial X16 + X12 + X5 + 1
payload area:
payload header (4-64B): type (2B), tHEC (2B), optional extension header (0-60B), eHEC (2B)
payload
optional payload FCS (4B)
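The HEC fields are CRC-16 checks with the generator polynomial shown (x^16 + x^12 + x^5 + 1, i.e. 0x1021). A bitwise sketch in Python; the zero initial value is illustrative, G.7041 defines the exact processing of each HEC field:

```python
# Bitwise CRC-16 with generator polynomial x^16 + x^12 + x^5 + 1 (0x1021),
# MSB first, zero initial value (illustrative; G.7041 defines the exact
# computation of each HEC field).
def crc16(data: bytes, poly=0x1021, crc=0x0000):
    for byte in data:
        crc ^= byte << 8                  # fold the next byte into the register
        for _ in range(8):
            if crc & 0x8000:
                crc = ((crc << 1) ^ poly) & 0xFFFF
            else:
                crc = (crc << 1) & 0xFFFF
    return crc

# e.g. a cHEC would be computed over the 2-byte PLI field:
pli = (1500).to_bytes(2, "big")
print(hex(crc16(pli)))
```

Like ATM's HEC, a single-bit-correcting check over a short header lets the receiver hunt for frame boundaries in the byte stream, which is how GFP delineates frames without HDLC flags.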
GFP modes
GFP-F - frame mapped GFP
Good for PDU-based protocols (Ethernet, IP, MPLS)
or HDLC-based ones (PPP)
Client PDU is placed in GFP payload field
GFP-T transparent GFP
Good for protocols that exploit physical layer capabilities
In particular
8B/10B line code
used in fiber channel, GbE, FICON, ESCON, DVB, etc
Were we to use GFP-F we would lose this control information; GFP-T is transparent to these codes
Also, GFP-T needn't wait for an entire PDU to be received (which would add delay)